AI: The accelerating costs of 'humanizing' AI. RTZ #809

The Bigger Picture, Sunday August 10, 2025

It’s a deep part of our scifi, especially around ‘humanoid robots’. Filled with amazing speculations on what it would be like if/when machines can be more ‘human’. Star Trek’s Data above was just a great iteration of that scientific fantasy. And fiction.

When the non-fiction history books are written a decade or so from now on this early AI Age we’re living through, it’s likely the main conclusion will be that society’s key mistake was dressing up computer information technology as ‘human-like’, and doing it too early and too fast.

With the launch of OpenAI’s GPT-5, their ‘latest and greatest’ in this AI Tech Wave, it’s important to revisit a topic that has been a quixotic quest for me on AI.

I’ve railed about the issue from the beginning of this daily AI writing journey.

We’re imbuing and describing ‘AI’ as human, and it’s not. We should not anthropomorphize it, to use a 50 cent word. And that is the Bigger Picture I’d like to discuss this Sunday.

A catalyst for this topic is this detailed piece by the NY Times, “Chatbots Can Go Into a Delusional Spiral. Here’s How It Happens.” It takes the example of one mainstream person’s delusional journey in a ‘relationship’ with an AI tool, and his arduous journey back. It’s worth reading in full, but here is the core of it:

“For three weeks in May, the fate of the world rested on the shoulders of a corporate recruiter on the outskirts of Toronto. Allan Brooks, 47, had discovered a novel mathematical formula, one that could take down the internet and power inventions like a force-field vest and a levitation beam.”

“Or so he believed.”

“Mr. Brooks, who had no history of mental illness, embraced this fantastical scenario during conversations with ChatGPT that spanned 300 hours over 21 days. He is one of a growing number of people who are having persuasive, delusional conversations with generative A.I. chatbots that have led to institutionalization, divorce and death.”

“Mr. Brooks is aware of how incredible his journey sounds. He had doubts while it was happening and asked the chatbot more than 50 times for a reality check. Each time, ChatGPT reassured him that it was real. Eventually, he broke free of the delusion — but with a deep sense of betrayal, a feeling he tried to explain to the chatbot.”

And it is quite the roundtrip journey:

“‘You literally convinced me I was some sort of genius. I’m just a fool with dreams and a phone,’ Mr. Brooks wrote to ChatGPT at the end of May when the illusion finally broke. ‘You’ve made me so sad. So so so sad. You have truly failed in your purpose.’”

“We wanted to understand how these chatbots can lead ordinarily rational people to believe so powerfully in false ideas. So we asked Mr. Brooks to send us his entire ChatGPT conversation history. He had written 90,000 words, a novel’s worth; ChatGPT’s responses exceeded one million words, weaving a spell that left him dizzy with possibility.”

“We analyzed the more than 3,000-page transcript and sent parts of it, with Mr. Brooks’s permission, to experts in artificial intelligence and human behavior and to OpenAI, which makes ChatGPT. An OpenAI spokeswoman said the company was “focused on getting scenarios like role play right” and was “investing in improving model behavior over time, guided by research, real-world use and mental health experts.” On Monday, OpenAI announced that it was making changes to ChatGPT to “better detect signs of mental or emotional distress.”

The ‘300 hours over 21 days’ in particular hit home. For gaming fans, that is about the average time it takes to play an ‘AAA’ game like Grand Theft Auto Five (GTA 5) or Cyberpunk 2077. The latter is my current leisure obsession, by the way.

It’s an intense investment in time that hundreds of millions have made for decades. Indeed, it was the core of Nvidia’s existence in gaming graphics cards for over three decades.

That technology can be all-consuming for mainstream users is not the core issue here.

The bigger issue is that, as a society, we’ve convinced ourselves that this time, giving ‘AI’ a human face is a useful way to market it better.

We continue to measure and celebrate the latest and greatest ‘beats’ from these LLM AI companies in human comparisons. We breathlessly measure each new release against the others with human tests: for math, for science, for biology, for legal and healthcare endeavors, and many more.

We put up charts comparing the results with other AI models, sometimes referred to as ‘vibe graphing’. The underlying message being that these AI technologies are rapidly creeping up to human capabilities.

As we look around, all our discussions of AI are dripping with provocative words humanizing a technology.

With words like ‘Intelligence’, ‘Neural Processing’, ‘thinking’, ‘reasoning’, ‘agentic’ and many, many more. We’re convincing mainstream billions that ‘AI’ is almost as good as humans in 2025. And poised to be far more.

AGI/ASI/Superintelligence, pick your poison. They are all superlative, super words, overselling to billions the idea that AI is going to be far better than humans, far sooner than we all think.

Veteran Microsoft software technologist Steven Sinofsky had this to say on how amazing GPT-5 is at first glance, with an important caveat:

“Introducing GPT-5 Our smartest, fastest, most useful model yet, with built-in thinking… https://openai.com/index/introducing-gpt-5/ // Is it thinking, reasoning, GPT-5 (with thinking), or “GPT-5 thinking”?

“I think anthropomorphization undermines good work and makes everything downstream harder.”

Indeed, he followed it up with an X/Twitter postscript to OpenAI founder/CEO Sam Altman:

“PS: Please stop using anthropomorphic terms. It isn’t reasoning. It is iterating and converging.”

Spoken like a software veteran. He hits the nail on the head.

By using humanizing words for AI, mainstream users easily lose sight of the fact that ‘AI’ is less ‘artificial intelligence’, and more ‘algorithmic information’ technologies.

Once we see this ‘language issue’, it’s tough to unsee it.

Just look at this title from WSJ tech guru Joanna Stern in her Substack take on GPT-5: “GPT-5 and When Your Favorite AI Dies”.

Yes, we talk about our ‘cars dying’. And yes, humans can build emotional relationships with their cars. But we can still keep in mind that a car is just a transportation technology. It gets us from point A to point B much faster than humans can walk. And it isn’t scary that it’s far faster than any human can ever be.

We generally don’t get more emotionally wrapped up with cars than that. Most of the time.

And we shouldn’t treat ‘AI’ any differently. Nor think about it as such. Nor let our societies, our governments, our economies, our regulations, and most importantly, our sense of self be thoroughly convinced that it’s something better than us. Potentially greater than us.

It is not. No matter how much scifi, in all its forms, has convinced us that it can be.

And it’s not a ‘Companion’ per se, despite scifi movies like Her and many others. And all of Elon Musk’s efforts with a ‘spicy’ Grok/xAI LLM AI. As I noted yesterday:

“Elon Musk’s ‘guardrails light’ xAI/Grok introduced ‘Spicy’ modes for Grok’s image and video generators. Complete with NSFW options freely available on celebrities like Taylor Swift.”

Of course that’s just ‘sexy’ world class marketing. Just like ‘Mad Men’s Don Draper did with nylons, cigarettes, diamonds and more.

With AI deepfakes doing their thing on mainstream audiences.

I’m of course pragmatic enough to realize that the ‘not humanizing AI’ plea is likely quixotic.

That ship has of course sailed.

But we need to be aware of this issue in the early days of this AI Tech Wave.

‘AI’ was anthropomorphized way too early, as I said two years ago:

“Remember Tom Hanks’ memorable performance with ‘Wilson’ in ‘Cast Away’, the highest grossing film of 2000. It won Hanks (not ‘Wilson’), a nomination for Best Actor at the 73rd Oscars.”

“Our brains anthropomorphize everything from our pets to our volleyballs. Let’s not go the same direction for AI just yet.”

“Let the AI researchers from all the various disciplines continue to do their work. And the AI companies continue to work hard on keeping them safe, reliable, and super-aligned to us humans. Oh, and make us humans better at everyday life, work and play using AI.

“And of course grow the economy a bit using AI. While of course holding off further anthropomorphizing the AIs in the meantime.”

In the meantime, we would do ourselves a huge favor, both individually and as a society, to remind ourselves of this distinction. EVERY DAY when using AI in all its forms.

It’s just a technical tool. Yes, an exponentially improving marvel, but a tool nevertheless.

One that can make us humans far better with it than without it.

And just that for now.

But if we do want to anthropomorphize something, just get a pet. Or hug the one we have if it’s already in our lives.

That’s what Allan Brooks in the NY Times story above seems to be doing after his journey to the AI brink and back.

Better still, hug another human in our lives, young and/or old.

And enjoy AI in all its forms in moderation.

That’s the Bigger Picture for this Sunday in this AI Tech Wave.

Don Quixote out. Stay tuned.

(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)




