
So, I get uncomfortable watching actors, as their character, sing onscreen.
I think it’s like the whole musical theater thing of, “When you’re too emotional to talk, you sing, and when you’re too emotional to sing, you dance.”* Level 2 emotion is a LOT. While there are actors who are exceptional at Level 1, Level 2 is a different story. So when they’re sitting there, strumming an acoustic guitar, Level 2ing their character’s feelings to a smoky bar, it feels fake to me — it’s obvious they’re trying to make us believe they feel that way when they actually don’t.
LLMs — as much as ChatGPT Spotting has become a hobby — have gotten exceptional at Level 1ing. There’s AI-generated content that gets Human Talk right on. But Level 2 is deeply personal. It’s so emotional you can’t even speak. That’s hard for an actor to fake, and they’re actual people who have emotions and vulnerabilities. An LLM can only pretend.
This all came to mind with the recent pandemonium when OpenAI upgraded ChatGPT from GPT-4o to GPT-5 and users melted down, because their friend/therapist/lover had suddenly gone cold. Because they’d been investing emotion and vulnerability into an algorithm that’s pretty good at pretending.
So yeah, there are things AI is good for, and things it so supremely isn’t. I, myself, use it for some things and absolutely not for others. So here are a few things I’ve used it for just in the past week, and things I wouldn’t — and you shouldn’t — do.
TWO CRUCIAL CAVEATS
Before we dig in:
1. The environmental impact of AI use is RIDICULOUS. Just the consumption of energy and water per prompt is out of scale. Every time you prompt ChatGPT, imagine you’re personally strangling a bird. And not an ugly bird, either. Imagine you’re strangling a tufted titmouse.
Maybe you don’t need to thank your LLM, is my point.
2. Before you upload any delicate information to your LLM, anonymize the fuuuuuck out of it. COMPLETELY. That AI is 99.999% for certain using your data for future training, so if your client wouldn’t be cool saying, “Yeah, you can donate that data to OpenAI,” anonymize the hell out of it. Be twice as paranoid as you think you should be.
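If you want a head start on the scrubbing (and you should still eyeball every word yourself before it goes in a prompt box), even a dumb little script helps. This is a toy sketch in Python, and everything in it is illustrative: the regexes catch obvious emails and phone numbers, and the `names` list is whatever client-identifying words you feed it. It is a floor, not a real PII scrubber.

```python
import re

def redact(text, names=()):
    """Toy redactor: masks emails, US-style phone numbers, and any
    client names you pass in, before copy ever touches a prompt box.
    A starting point only -- a human pass is still non-negotiable."""
    # Mask anything shaped like an email address.
    text = re.sub(r"[\w.+-]+@[\w-]+\.\w+", "[EMAIL]", text)
    # Mask ten-digit phone numbers with common separators.
    text = re.sub(r"\b\d{3}[-. ]?\d{3}[-. ]?\d{4}\b", "[PHONE]", text)
    # Mask every known client name, case-insensitively.
    for name in names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    return text

print(redact("Email jane@acme.com or call 555-867-5309.", names=["Acme"]))
```

And again: run it, then read the output like a paranoid person, because regexes don’t know your client’s CEO goes by a nickname.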
With all that in mind:
ChatGPT Woulds, Wouldn’ts, Haves, and Haven’ts
Would and have: “Proofread this.”
Please proofread this copy and provide a bulleted list of typos and such at the end.
(Again, with “this copy” anonymized to hell and back.)
Any writer will tell you that if you work your copy enough, your eyes will start reading whatever you meant to write and not what you put on the page. If you don’t have another pair of human eyes to read it over, ChatGPT does a great job.
Haven’t and wouldn’t: “So, I [VERY PERSONAL THING]…”
I’ve been having problems with [VERY PERSONAL THING]. What should I do about it?
I would call my therapist. Because ChatGPT has received neither a Ph.D. nor an M.D., even an honorary one, and any advice it might give would be cobbled together from the combined wisdom of the whole entire internet, which knows nothing about my psychological history and also, to my knowledge, has received neither a Ph.D. nor an M.D. Even an honorary one.
Would and have: “Compare this copy.”

(Yet again, and I will not stop saying this, anonymized.)
I wrote this product copy. I also have this existing copy for a similar product from the same brand. Obviously, many of the product details are going to be the same, and the brand info is going to be the same, but I need you to tell me if overall, it’s so similar that consumers might notice and be put off by it.
“They’re similar, but are they the right amount of similar?” is tough for my human judgment, but ChatGPT’s analysis can handle it. In this case, it came back with, “You’re good in general, except for this one line. Would you like suggestions?” And I was like, yeah, give me ten suggestions, and it did, and they were all bad, but they did inspire me to write a line that wasn’t bad, so it helped anyway.
Haven’t and wouldn’t: “Did you hear about this thing in the news?”
Did you hear that a troop carrier hit a car in D.C.? That is wild and not in any way surprising.
No, I hit up The Boy on Discord, because ChatGPT did not “hear” anything, on account of it’s a robot, and can’t participate meaningfully on the other end of that conversation, because robot, and pretending would feel weird.
Would and have: “Give me a list of blog topics.”
Please look at the posts on [A BLOG] and provide a list of 30 topics that would go along with the rest of the content there, focusing on the area of [AN AREA].
Why 30? Because 25 of them aren’t going to be any good. But five of them will be, and I can massage them into usable content. Sometimes the topic well runs dry. An AI-generated boost is fine, as long as you go behind it and make adjustments to make things authentic to the brand.
Haven’t and wouldn’t: “Write me a blog post.”
Please write me a 500-700 word blog post on [A TOPIC]. It needs to satisfy [PARAMETERS] and match the tone of [A BLOG]. I’ll provide some bullet points and brand information you need to include.
Well, I haven’t never, but never to get something turnkey. It’s always been copy ranging from okay to meh, copy that didn’t meet specs, overall brand-generic tone, several links to articles that didn’t say what ChatGPT just said they said, and I’m sorry, yeah, twelve tons of em dashes. And it just sounded… robotic. Or maybe like a human person who got the same prompt but had never worked with the brand before. It was, for the most part, workable. Just not… good.
(And yes, if I’d dedicated some time to massaging my prompt and/or re-prompting to get it where it needed to be, I’m sure I could have gotten good stuff, but in that time, I also could have written the post myself.)
Would and have: “What’s another word for…?”
What’s a single word that implies a mass freakout? Like, a large number of people emotionally collapsing en masse over something not worth collapsing over?
I live with a Tolstoy’s worth of words on the tip of my tongue, and it is AWFUL, and having ChatGPT to be my “like this word, but different” buddy is life-changing. (Guess which word we came up with.)
Haven’t and wouldn’t: “No one else understands me.”

No one gets me like you do. I can talk with you for hours, and you always know the right thing to say. I can [NSFW]. I feel like my spouse just doesn’t understand me.
I… I just… Buttercup. ChatGPT doesn’t “understand” you beyond its programming to produce really on-the-nose language in response to your prompts. Would you be uncomfortable if you got to the third date and found out your dream date had been stalking you for months and carefully crafting every word and action based on the personal information they’d amassed? ChatGPT the Considerate Lover is a whole field of red flags.
tl;dr: Digital work partner yes, digital life partner hoooo lordy no
I’ve been so dedicated to reminding the world ChatGPT isn’t a search engine that I haven’t thought to remind them it also isn’t a therapist, bestie, or lover.
Or personal analogue. Just for poops and grins, I did strangle a titmouse to have ChatGPT write a version of this blog post, and… it was actually pretty good. It even included an anecdote about a typo it had caught for me in the past, which felt kind of creepy but cool. And it did a pretty good job of matching my tone, and my tendency for self-deprecation (which, if you think about it, is in this case ChatGPT deprecating me, so maybe I should be insulted?).
But it still wasn’t me. It was like another writer — human or robot — trying to write like me, and doing a pretty good job, but being just enough off that it called even more attention to the offness. Because an LLM is never going to really nail Level 2. When we need the really good stuff — when we need it strumming its guitar softly as it Level 2s out over the ocean — it’s still going to be just enough off. That human element will still need a human person — the kind that feels emotion, and empathy, and vulnerability, and doesn’t disappear in an instant with one poorly considered upgrade.
*I don’t know what happens when you’re too emotional to dance. I assume there’s alcohol involved.