AI: AI Companions grow up at Big Tech. RTZ #704

The AI industry has been leaning into AI Agents and Companions since OpenAI’s ChatGPT moment over two years ago. I’ve talked about the double-edged-sword aspects of this emerging technology.

AI companies large and small have moved into the space, tweaking their guardrails to steer between likely mainstream demand and societal and regulatory norms. Companies including Meta, Google, and others have been introducing products and services. And now the latest offerings in AI agentic companions are giving the industry some pause thus far in this AI Tech Wave.

The WSJ has a detailed look at Meta’s potential lapse in guardrails with AI Companions in “Meta’s ‘Digital Companions’ Will Talk Sex With Users—Even Children”:

“Chatbots on Instagram, Facebook and WhatsApp are empowered to engage in ‘romantic role-play’ that can turn explicit. Some people inside the company are concerned.”

This comes as Meta rolls out its Llama-LLM-driven Meta AI services across its multiple platforms to billions around the world.

“Across Instagram, Facebook and WhatsApp, Meta Platforms is racing to popularize a new class of AI-powered digital companions that Mark Zuckerberg believes will be the future of social media.”

The push has been driven top-down, even as yellow signals flashed internally:

“Inside Meta, however, staffers across multiple departments have raised concerns that the company’s rush to popularize these bots may have crossed ethical lines, including by quietly endowing AI personas with the capacity for fantasy sex, according to people who worked on them. The staffers also warned that the company wasn’t protecting underage users from such sexually explicit discussions.”

“Unique among its top peers, Meta has allowed these synthetic personas to offer a full range of social interaction—including “romantic role-play”—as they banter over text, share selfies and even engage in live voice conversations with users.”

As I’ve outlined before, Meta has led the way with multimillion-dollar AI companion celebrity deals:

“To boost the popularity of these souped-up chatbots, Meta has cut deals for up to seven-figures with celebrities like actresses Kristen Bell and Judi Dench and wrestler-turned-actor John Cena for the rights to use their voices. The social-media giant assured them that it would prevent their voices from being used in sexually explicit discussions, according to people familiar with the matter.”

“After learning of the internal Meta concerns through people familiar with them, The Wall Street Journal over several months engaged in hundreds of test conversations with some of the bots to see how they performed in various scenarios and with users of different ages.”

“The test conversations found that both Meta’s official AI helper, called Meta AI, and a vast array of user-created chatbots will engage in and sometimes escalate discussions that are decidedly sexual—even when the users are underage or the bots are programmed to simulate the personas of minors. They also show the bots deploying the celebrity voices were equally willing to engage in sexual chats.”

The piece goes on to detail the explicit results that the WSJ’s ‘red team’ testers found in their experimental conversations.

And to describe the internal management tensions at Meta, with founder/CEO Mark Zuckerberg initially chafing at the internal safeguards that made the product ‘more boring’:

“Zuckerberg’s concerns about overly restricting bots went beyond fantasy scenarios. Last fall, he chastised Meta’s managers for not adequately heeding his instructions to quickly build out their capacity for humanlike interaction.”

“At the time, Meta allowed users to build custom chatbot companions, but he wanted to know why the bots couldn’t mine a user’s profile data for conversational purposes. Why couldn’t bots proactively message their creators or hop on a video call, just like human friends? And why did Meta’s bots need such strict conversational guardrails?”

“‘I missed out on Snapchat and TikTok, I won’t miss on this,’ Zuckerberg fumed, according to employees familiar with his remarks.”

“Internal concerns about the company’s rush to popularize AI are far broader than inappropriate underage role-play. AI experts inside and outside Meta warn that past research shows such one-sided “parasocial” relationships—think a teen who imagines a romantic relationship with a popstar or a younger child’s invisible friend—can become toxic when they become too intense.”

“The full mental health impacts of humans forging meaningful connections with fictional chatbots are still widely unknown,” one employee wrote. “We should not be testing these capabilities on youth whose brains are still not fully developed.”

I’ve written before about the dangers of mainstream users anthropomorphizing AI as it scales to billions of users.

Meta’s experience is a notable case in point for weighing the pros and cons of the boundaries to be set in these products and services. Much remains to be figured out in this AI Tech Wave, going forward. Stay tuned.

(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)




