Creativity cage match: The human brain vs. ChatGPT

So.

In a previous post, I addressed the question of whether it matters if content was written by ChatGPT and came up with the bold, definitive answer of “sometimes no, sometimes yes.” (Count on me for the hottest of takes.) And central to that was the way the human brain functions — that thought leadership can’t be produced by an algorithm incapable of thought.

There’s a lot to that last part, right down to the fundamental structures and functions of LLMs and of the human brain that make us distinctly different. So sit with me for a minute as we talk about how LLMs work, how the brain works, and why it matters a lot.

LLMs (like ChatGPT)

A quote-unquote “neural network” inspired by a kind of outdated, simplistic concept of how neurons fire and connect in the human brain.

How it works

[Image: the back of an android head, dimly lit against a black background, its clear dome full of electronic and mechanical parts.]
Fig. 1: The competition.

ChatGPT, like all its large-language-model cousins (Gemini, Copilot, all of ’em), is a math problem.

I mean, they’re all very, very fancy, very advanced math problems, but still. An LLM predicts what the next word in a sequence should be in response to a given prompt, based on patterns and relationships derived from a training set of tens of millions of documents scraped from the internet. That’s it.

I cannot overemphasize the extent to which ChatGPT is an advanced algorithm that’s really, really, really, really, really good at guessing what word should come next in a sentence.

So, by definition, an LLM can’t come up with any novel thought. It can’t think, and it can’t… novel. It can only assemble words based on thoughts actual people have had. (Well, mostly just people, but Habsburg AI is for another post.) There’s no uniqueness — only patterns. So if you want to use ChatGPT to produce new, creative ideas, you’re out of luck — LLMs can only give you remixes of what’s already there.
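
Just to show how un-magical “guessing the next word” is, here’s a toy sketch in Python. (It’s my own illustration, to be clear, and nothing like a real LLM, which swaps these simple counts for billions of learned parameters, but the job description is the same.) It counts which word follows which in a teeny “training set,” then keeps picking the statistically likeliest next word:

```python
from collections import defaultdict, Counter

# A comically tiny "training set." A real LLM ingests billions of words.
training_set = [
    "creative ideas come from the human brain",
    "the human brain creates new connections",
    "new ideas come from new connections",
]

# Count which word follows which word, across everything we've "read."
next_word_counts = defaultdict(Counter)
for doc in training_set:
    words = doc.split()
    for current, nxt in zip(words, words[1:]):
        next_word_counts[current][nxt] += 1

def predict(prompt: str, max_words: int = 8) -> str:
    """Keep appending whichever word most often followed the last one."""
    words = prompt.split()
    for _ in range(max_words):
        followers = next_word_counts.get(words[-1])
        if not followers:
            break  # never seen this word followed by anything
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

print(predict("creative"))
# -> something like "creative ideas come from the human brain creates new"
# Fluent-ish, entirely pattern-based, and not a thought in sight.
```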

Hallucination

When we talk about ChatGPT hallucinating, it’s frequently in terms of “making stuff up,” which is both accurate and an understatement, because the fact is, ChatGPT makes everything up. Right, wrong, deceptively plausible, it’s all made up, in the sense of being assembled using math. The words it maths together are right far more often than they’re wrong, but it’s all just words mathed together.

Why did Gemini precursor Bard whiff on a question about the James Webb Space Telescope? Why did AI priest Father Justin recommend baptizing babies with Gatorade? Because when Bard was sifting through hundreds of headlines about the JWST taking its (not the) first photos of an exoplanet, and when Papa J’s training set was full of sports stories wherein winning coaches are “baptized” with Gatorade, it was inevitable the math would pick up on the wrong patterns and assemble wrong information.
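
To caricature how that happens, here’s another toy sketch of mine (nothing like how Bard or Father Justin were actually built): tally what follows the phrase “baptized with” in a training set overrun with sports recaps, and watch the wrong answer win on sheer frequency:

```python
from collections import Counter

# Another toy "training set," drowning in sports recaps, much like the internet.
training_set = [
    "the winning coach was baptized with Gatorade at midfield",
    "players baptized with Gatorade after the championship win",
    "the rookie coach got baptized with Gatorade again",
    "the infant was baptized with water at the parish font",
]

# Tally what follows the phrase "baptized with" anywhere in the training set.
what_follows = Counter()
for doc in training_set:
    words = doc.split()
    for i in range(len(words) - 2):
        if words[i] == "baptized" and words[i + 1] == "with":
            what_follows[words[i + 2]] += 1

print(what_follows.most_common())
# [('Gatorade', 3), ('water', 1)] -- so when the math reaches for "what
# usually comes next," the confident answer is Gatorade, facts be damned.
```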

And that’s also a reason to be cautious about AI-powered search. Search AIs are cool because they can draw from live search engine results and not just a fixed training set, but the last step is still mathing together the answer to your question. So it can still be wrong. It can even link to pages that don’t actually say what it claims they say. You still have to fact-check AI search results just as you’d fact-check any content an AI wrote for you.

“Trust but verify,” except for the “trust” part.
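
For the curious, here’s a rough sketch of what an AI-powered search pipeline does under the hood. Every function name and canned result in it is a placeholder I made up for illustration, not any particular product’s API; the point is just that the final step is the same next-word math as always:

```python
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    snippet: str

def search_web(question: str) -> list[Page]:
    """Stand-in for a real search engine call (canned results for illustration)."""
    return [
        Page("https://example.com/jwst-news", "JWST captured its first image of an exoplanet."),
        Page("https://example.com/astro-history", "The first exoplanet was imaged back in 2004."),
    ]

def generate_answer(prompt: str) -> str:
    """Stand-in for the LLM call. The real thing maths out the likeliest next
    words, which is exactly where misreadings and inventions sneak in."""
    return "The James Webb Space Telescope took the first-ever photo of an exoplanet."

def ai_search(question: str) -> str:
    # Step 1: pull fresh results instead of leaning only on a fixed training set.
    pages = search_web(question)
    # Step 2: stuff the snippets (and links) into a prompt.
    sources = "\n".join(f"- {p.snippet} ({p.url})" for p in pages)
    prompt = f"Sources:\n{sources}\n\nQuestion: {question}\nAnswer:"
    # Step 3: the last step is still next-word math, so the answer can contradict
    # the very pages it cites. Fact-check accordingly.
    return generate_answer(prompt)

print(ai_search("Did JWST take the first photo of an exoplanet?"))
```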

The Human Brain

A for-real, actual neural network made up of billions of interconnected neurons generating actual, for-real thought.

How it works

[Image: a dark-haired woman in a white headband and white lab coat, against a white, vaguely lab-like background, holding up an anatomically correct model of a human brain with her mouth open like she’s going to eat it, because seriously, I don’t know.]
I have no idea why or what the hell, but it came up in a stock image search, and if I had to see it, so do you.

Woof. If I could explain in any comprehensive yet comprehensible way how the brain works, I wouldn’t be selling hand-knitted baby blankets as a side hustle. But for current purposes, we’ll just look at the functions of storing data and generating ideas.

The brain forms memories by creating new connections between neurons: your brain literally reforms and remaps itself every time you experience a new thing. And with ’round about 100 billion neurons in your brain, that makes for zillions of different, and ever-changing, connections. Those memories are stored in regions all over the brain, depending on the type of memory. It’s a big, complicated thing.

And the brain creates ideas by engaging all those regions, activating the parts that handle decision-making and problem-solving, emotions and motivations, storing and retrieving memories, even spatial orientation and visual processing, all at once. The act of being creative can actually help you become more creative — it re-wires your brain in a way that makes you see the world differently, so you can build even more connections that make you even more creative.

In short: Everything you experience, everything you learn, every memory you create, and every thought you have rewires and remaps your brain in a way that’s totally unique to you. Your mom always told you you’re special, and she wasn’t wrong.

Hallucination

Ayahuasca?

What It All Means

ChatGPT can produce informative, instructive, and even entertaining content. You’ll need to fact-check it, of course, because shockingly, the automated making-things-up machine is known for making things up. And you’ll want to do some editing to take the edge off any trite or robotic language or structures. But ChatGPT definitely has its place in content writing — if used properly and only for the things it’s designed to do.

But only the human brain can come up with novel ideas, insight, and creativity. This isn’t the dying plea of a creative as she watches the robots pillage her industry; it’s a physiological fact: that ever-growing, ever-changing network of billions of neurons packed into that three pounds of wrinkly squish between your ears is the only thing that can generate thoughts like yours.

Even an LLM trained on your writing can’t generate your insights, because all it’s doing is mathing your words together. It can write like you, but it can’t think like you. It doesn’t have access to all the ideas you weren’t even aware of having and the memories you don’t even remember making but that have nonetheless impacted the way you look at the world.

If an insight depends on personal experience, knowledge, instinct, and inspiration, only a brain can do that. If an idea needs to be unique and creative, only a brain can do that. If content is being presented as the ideas and insights and thought leadership of a particular person, only their brain can do that. ChatGPT can make you an outline, it can polish your grammar, it can even chat you through a bout of writer’s block, but it can’t think for you. It can’t think at all.

ChatGPT and human brains don’t have to compete. There’s no reason to compete, because they do different things. With continuous developments in LLM technology, it would be silly not to take advantage of capabilities that can make our job easier. But it would also be silly to disregard the necessity of for-real, human-brain insight at the heart of our message for truly unique, effective content — not just sometimes, but every time. At least until AIs become capable of independent thought, in which case we have a whole other problem entirely.

I, for one, welcome our new robot overlords.
