
First off, this is y’all’s fault.
I didn’t even want to keep talking about ChatGPT. I wrote one satirical LinkedIn post making fun of all the “how to clock ChatGPT” posts that villainize em dashes, and folks were all “hell yeah, em dashes suck!” and so I wrote another LinkedIn post about how em dashes are actually fine, and folks were all “who cares about em dashes, ChatGPT is cheating,” and first of all, clearly you care about em dashes, but also… y’all.
“Is ChatGPT cheating” is the most annoying cousin of “how to tell if it’s ChatGPT” and older brother to the quieter but equally ubiquitous “does it matter if it’s ChatGPT.” So hopefully, with this post, I will have taken out the entire family and can return to my previous posting topics about advertising and creativity and whatever that nobody reads.
Hold onto your butt.
1. Is using ChatGPT cheating?
“Cheating”? What is cheating at writing?
I mean, if you’re working on a project that has rules, and one of those rules is “no ChatGPT,” and you use ChatGPT, you broke the rules, but that’s not necessarily cheating. In terms of, like, gaining an unfair competitive advantage… over what? Who are you competing with? What is the competition?
I know some do see it as a kind of competition — that there's a hierarchy of Real Writerness, and using AI besmirches the purity of Real Writing and, I don't know, devalues the work of Real Writers, and… y'all. AI is a tool. And while the implications of using that tool can be debated, "AI lets fake writers fake being real writers" is just silly. And it gatekeeps the act of writing from people who struggle with it, people who now have access to a great form of expression thanks to said tool, which is something we should be encouraging, not sneering at.
The question doesn’t make sense, is my point.
2. What, specifically, is “AI-generated content”?
I’ve never sat in on a discussion of AI-generated content that started with a definition of what “AI-generated content” actually is. That’s a lengthy and far less interesting discussion but one that needs to happen, because without coming to a consensus about the term, any further debate is kind of pointless.
When we say, "AI-generated content," do we mean a person barfed a prompt into ChatGPT's prompt box and posted the resulting copy? Or that they wrote a piece, refined it with ChatGPT, did another final pass themselves, and then posted it? Somewhere in between? Is there some specific percentage of AI involvement, a bright line beyond which the work is unacceptably robotic but before which it's A-OK?
We spend so much energy criticizing "AI-generated content" and trash-talking the people who produce it, without ever defining what it is, and we have lengthy debates about it without knowing whether we're all debating the same thing. Y'all.
For our current purposes, we’ll say “AI-generated content” means it was written wholly in response to human prompts, with little to no writing beforehand or editing afterward. (See how easy that was?)
3. Why isn’t “it’s just bad” good enough?
The annoying "how to clock ChatGPT" posts always mention the same sure-fire tells (repetitive format, no personality, and stilted language, f'rinstance). And I guess those things are common to unedited AI-assisted writing, but aren't they also, like, just bad writing? Isn't any writing that's repetitive, stilted, and devoid of personality bad writing regardless of who wrote it?
And that brings us to the question of what “just bad” even means. I happen to think stereotypical LinkedIn posts are pretty bad. The choppy, single-sentence lines have no flow and make my brain jump at every line break — they literally give me a headache — and those “personal” anecdotes frequently come off as contrived and insincere. Does that add value? Is a banal, hard-to-read mini-essay ending in a trite “insight” somehow better than AI-generated content, just because there was a person on the business end of it? Or is bad writing just… bad writing? And if a post does include em dashes and words like “delve” and “vibrant” but is still well-written and interesting, are you going to just disregard it because it might be AI? Y’all.
And for the love of David Ogilvy, stop putting your creatives' work through AI checkers. If you can't tell by reading it that it's hinky, assume it's not hinky. All the checker is going to do is make you question your own judgment and falsely flag some poor writer who'll keep getting accused of being a robot without knowing why.
4. Are rocketship emojis really a ChatGPT thing?
I legit hadn’t noticed. I used it in my post because I thought it would be something to up the silly factor. I had no idea I’d actually just be opening deep wounds.
5. Does it matter if content was written by ChatGPT?
No, except to the extent that yes.
It definitely, 100-percent doesn’t matter to the extent that ten bazillion LinkedIn posts think it does. Y’all. Make a list of five things you could be using that time and energy for.
But beyond “ChatGPT is cheating” and “ChatGPT is low quality,” there are elements I don’t think I’ve ever seen discussed (think of it — y’all could have had some original material this whole time) that are finally going to get their moment in the sun.
No, kind of.

The question is often asked, “If the post provides value, does it matter if it was written by an AI?” We can call back to Q3: If the post is banal or trite but people are willing to read it anyway, is it worthless? People accept banal, trite content all the time, and if it’s at least informative or entertaining, there’s something to it — I mean, Madame Web streamed 10.8 million times in a week.
However.
We also have to accept that ChatGPT is wrong, like, all the time. It hallucinates. A ChatGPT-generated post is only as good as the reader's ability to fact-check it. And thus any conclusions drawn from those hallucinated facts are also suspect, even though they might seem perfectly legit. AI-generated content is inherently unreliable to some degree and should be taken with many grains of salt.
And that's why users need to disclose when content is AI-generated. Just as some platforms require you to tag AI-generated images, AI-generated written content needs a tag. Because the audience needs to know if the facts of the piece aren't necessarily correct and need double-checking. They need to know if the conclusions drawn are based on statistical probability and not logic. If we're going to say, "It's fine as long as it's providing value," being able to confirm it's providing value is essential to declaring it fine, and "it's AI, I can always tell" isn't the answer to that.
Except when yes.
It matters when insight matters.
It does make a difference when a human writes something. There are things only a human can bring to writing — experiences, perspectives, unique brain structures and chemistry. Insight is the product of all that, and LLMs, not having any of those, can't do it. An LLM is an algorithm programmed to predict, statistically, which word is likeliest to come next. And while it can come up with copy that sounds pretty compelling, like it might have come from a human being with wisdom born of experience, it's always going to be stuff assembled like a puzzle from other stuff — ChatGPT is never going to have anything new to say, because it can't.
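(For the curious, here's a toy sketch in Python of what "predict the next word" means mechanically. Everything in it, the four-word vocabulary and the scores alike, is invented for illustration; a real LLM computes its scores with a giant neural network over tens of thousands of tokens. But the final move, turning scores into probabilities and picking a likely word, is the same.)

```python
import math

# Toy illustration of next-word prediction, the core move of an LLM.
# The vocabulary and the scores ("logits") are invented for this
# example; a real model computes scores with a neural network over a
# vocabulary of tens of thousands of tokens.
vocab = ["insight", "delve", "vibrant", "synergy"]
logits = [2.1, 1.7, 0.9, 0.3]  # hypothetical scores for the next word

# Softmax turns raw scores into probabilities that sum to 1.
exps = [math.exp(x) for x in logits]
probs = [e / sum(exps) for e in exps]

# The model then picks (or samples) a high-probability next word.
for word, p in sorted(zip(vocab, probs), key=lambda pair: -pair[1]):
    print(f"{word}: {p:.2f}")
```

Notice there's no step anywhere in there where the machine has an experience, forms a perspective, or means anything. It's arithmetic over other people's words.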
One of the reasons we place such importance on thought leadership is that it is, ostensibly, the irreplicable thoughts of the only person who can have them. Even a "thought piece" generated by an AI trained on the individual's past, human-generated writing isn't providing insight — that's an exclusively human thing. Thought leadership has to come from a human brain. Accept no substitutes: anything else is only sparkling calculation and must be labeled as such.
So if what is being presented as human insight was foundationally written by ChatGPT, that’s misleading. You’re handing me content you said was produced by the unique brain of someone specific, and it wasn’t. You need to be up front about the fact that a piece of content was mostly written by AI, giving me the opportunity to vibe-check the insights the same way I’d fact-check the information.
Just keep your standards up, and stay human.
It does matter if it’s ChatGPT, but not because ChatGPT is cheating or because it’s bad writing — it matters because it can be wrong, in fact and in conclusions, and people deserve to know if they’re about to read something that may or may not be wrong. And it matters in that human insight is something ChatGPT is incapable of replicating, and so if something is being presented as human insight, it has to have come from a human.
The answer to ChatGPT isn’t to villainize it — it’s to maintain high standards. If we insist on well-written, non-banal, non-trite posts, no matter how much AI is involved in producing them, that’s what we’ll get. And the answer is to require that AI involvement be disclosed up front, so we know whose insights we’re getting and can take the post with as many grains of salt as are warranted. It’s not about ChatGPT. The whole ChatGPT debate has never been about ChatGPT — it’s about the people using it, and the people reading the content generated with it.
If you want to see work that’s human, stop being so fixated on robots and just hold the work to human standards.
That’s it. That’s the post.
🚀