AI: Next battle, earning mainstream users' 'AI Trust'. RTZ #710

As discussed in these pages for some time now, the coming age of ‘AI Agents’, ‘AI Companions’, and physical ‘AI Robots’ raises growing AI Trust and Privacy issues. Especially as mainstream users accelerate toward using dozens if not hundreds of AIs daily (small and large), often without even realizing it, via their AI-powered devices, applications and services.

Especially since most of these AI ‘bots’ do their AI Inference first on a ‘machine to machine’ (m2m) basis, before serving up results to our queries and needs. Earning user trust in AIs is the next battleground for tech companies large and small.

Additionally, these AI systems will have increasing ‘AI memory’ capabilities to cater to personal worlds and data. And they will have to be reworked to work better on an internet designed for humans rather than bots. All critical innovations ahead in this AI Tech Wave.

But the biggest thing is that mainstream users are likely to treat these AI Agents more like humans than machines (to anthropomorphize them, in fancier terms). Or at least to invest similar amounts of emotion in them as they do in their pets. And volleyballs.

And that is an issue, since LLM AI vendors are increasingly incorporating anticipated human reactions into how these AI agents and robots interact with us.

The Information addresses this issue in “What’s Bad About a Nice Robot?”:

“OpenAI has undone part of the latest update to its GPT‑4o model after people complained about the personality built into it, which seemed overly fawning. I’m still chuckling after my conversation with Kristian Hammond, a Northwestern University computer science professor, about why we get unsettled by such behavior. He brought up a particular fictional character.”

“Eddie Haskell from ‘Leave It to Beaver,’ the kid who lived across the street,” said Hammond. “He would come over and be like, ‘Oh, you’re looking so lovely today, Mrs. Cleaver.’ And it was clear he was just, like, an irritating asshole because he was being nice in an extreme way.”

This is already an issue with Voice AIs doing similar things, so AI robots are the next arena for it. The underlying psychology here is notable:

“To a large degree, I think our discomfort with the model’s behavior says as much about us humans as it does about the bots we’re building. Very honestly, I think it reflects our general discomfort with positive feedback, which we don’t dispense to each other in great amounts. (Probably that reality warrants a little more self-reflection.) And because the model’s behavior seemed quite unhumanlike, that reminded us it isn’t close to being human. It gives us the icks, making us less eager to use it—it’s some version of the uncanny valley effect that’s more visually apparent when Dall-E generates some six-nosed monstrosity.”

OpenAI and others are now racing to enable these emotionally engaging interaction capabilities via their AI Voice UIs for ChatGPT.

And human users are already supremely attuned to these things via our daily interactions with other humans:

“I’m pretty sure I noticed the model’s personality change several days before OpenAI made this change. (To me, the persona read as “millennial bro,” which I suppose is what Eddie Haskell would be in modern terms.) I didn’t like it, but I also had no desire to curb my growing ChatGPT use, so I simply asked the bot to drop the act. It complied. That OpenAI felt compelled to take apart the model en masse tells me it has considerable work to do in educating users on how they can use its products and, more generally, in making its products more usable.”

It’s early days and every vendor is figuring out the boundaries:

“OpenAI is far from the only AI company with bad design. I actually can’t think of any product in the industry that is well designed. Most of it is about as sophisticated as word processing in the 1980s, and even that might be an optimistic conclusion.”

“As far as improving the bots’ personalities, the task will take some considerable time to get there, Hammond cautioned me. That’s simply because there isn’t a vast amount of existing written work to use in model training that would reflect what we ideally want from a bot: polite, courteous, friendly—humanish but nothing maximalistic. “We don’t have the right data to teach it to be ‘normal.’ How do you identify what ‘normal’ is? The extremes are easier. Getting to the middle of the road is tough,” Hammond said.”

The broader issue of course is that very soon we’ll have hundreds of millions of these AI software entities trying to be our companions and assistants. I’ve already written about some of the downsides of Meta AI’s software companions. And the company is forging ahead, as are its peers like OpenAI, Elon Musk’s xAI/X Grok, Google Gemini, Perplexity, Anthropic, and many others from China like Butterfly Effect’s Manus, and of course DeepSeek.

Meta is a case in point to watch, given its ambition and ability to roll these AI systems out to billions of users in very short order.

Axios lays it out in “In Meta’s AI future, your friends are bots”:

“Mark Zuckerberg and Meta are pitching a vision of AI chatbots as an extension of your friend network and a potential solution to the “loneliness epidemic.”

“Why it matters: Meta’s approach to AI raises the broader question of just whose interest chatbots are serving — especially when the bot has access to the details of your life and the company’s business depends on constantly boosting the time you spend with it.”

Meta is particularly aggressive on rolling these AI systems out, and the AI ad monetization to go with it:

“Driving the news: Facebook’s parent company on Tuesday debuted a new mobile app that transforms the Meta AI chatbot into a more social experience, including the ability to share AI-generated creations with friends and family. But Zuckerberg also sees the AI bot itself as your next friend.”

The company has thought through these things:

  • “The average American has, I think, it’s fewer than three friends,” Zuckerberg said during a podcast interview Monday. “And the average person has demand for meaningfully more.”

“In a media blitz that included several podcast appearances this week timed for Meta’s LlamaCon event, Zuckerberg mapped out an AI future built on a foundation of augmented-reality glasses and wrist-band controllers.”

  • “With those devices plugged into Meta’s AI models, he predicted the emergence, within four or five years, of a new platform for human interaction with bots. It would, he said, be the next logical step on the internet’s evolution from text to audio and video.”

It’s very different from what billions do today on Meta’s Instagram, Facebook, Messenger and WhatsApp globally:

“What they’re saying: “Today, most of the time spent on Facebook and Instagram is on video, but do you think in five years we’re just going to be sitting in our feed and consuming media that’s just video?” Zuckerberg asked podcaster Dwarkesh Patel.”

  • “No,” he answered himself. “It’s going to be interactive. You’ll be scrolling through your feed, and there will be content that maybe looks like a Reel to start, but you can talk to it, or interact with it, and it talks back, or it changes what it’s doing. Or you can jump into it like a game and interact with it. That’s all going to be AI.”

Which brings up the question of mainstream user Trust of the companies offering these AI products and services.

“Yes, but: Where Zuckerberg sees opportunity, critics see alarm bells, especially given Meta’s history and business model.”

  • “The more time you spend chatting with an AI ‘friend,’ the more of your personal information you’re giving away,” Robbie Torney, Common Sense Media’s senior director of AI programs, told Axios. “It’s about who owns and controls and can use your intimate thoughts and expressions after you share them.”

And Meta, like most of its peers, likely with the exception of Apple, will be OK with using user interactions to further hone and improve their AI models. Not to mention leverage the underlying user Data.

“Under Meta’s privacy policy, its AI chatbot can use what the company knows about you in its interactions.”

  • “Meta can also use your conversations — and any media you upload — for training its models.”

  • “You can choose to have Meta AI not remember specific details, but there is no way for U.S. users to opt out more broadly.”

And as I discussed a few days ago, there are issues with Meta onboarding and interacting with kids vs adults.

“The intrigue: Zuckerberg’s bot-friendship vision is arriving at a moment when AI companions face criticism and controversy, particularly as younger users encounter them.”

  • “The Wall Street Journal reported last week that its testing showed earlier versions of Meta’s chatbots — including those based on celebrity personas — are willing to engage in sexual banter, even with users who identified themselves as teens. (Meta said it has since implemented controls to prevent this from happening.)”

The other issue is how mainstream users will be able to keep up with which LLM AI does what, and how their policies and technologies evolve. AI Smart glasses from Meta and others are an intermediate step.

For now there are no ‘food labels’ to tell users what the ‘ingredients’ are in various AI models:

“The big picture: It’s not just Meta that is being forced to navigate the social maze of chatbot-human interaction.”

“Most chatbots typically serve up information users request, but as models grow in size and complexity, their makers are finding it hard to tune the bots’ traits properly.”

  • “ChatGPT maker OpenAI was forced to roll back an update after users found that its latest model was behaving with an overload of flattery — it had become, as Engadget put it, ‘an ass-kissing weirdo.’”

And of course there is the issue of data sharing and usage. Data is increasingly a unique competitive differentiator for AI companies, as I’ve discussed:

“Between the lines: Critics’ concerns range from the sensitivity of the data users are sharing to the potential for addiction to the risk of bots dispensing potentially dangerous advice.”

  • “These companies are optimizing for data collection first and user engagement first, with well-being as a secondary consideration, if it’s a consideration at all,” Torney said.”

  • “Common Sense Media issued a report Wednesday declaring that the entire category of AI companions — including those from Character.AI, Nomi, and Replika — poses an unacceptable safety risk for minors. Torney said they can be problematic for vulnerable adults, as well.”

  • “People are increasingly using AIs as a source of practical help, support, entertainment and fun — and our goal is to make them even easier to use. We provide transparency, and people can manage their experience to make sure it’s right for them,” a Meta spokesperson said in a statement.”

So lots of issues to consider as these AI agentic systems rapidly go mainstream:

“What’s next: The more social AI becomes, the more likely it is that AI companies will replicate the aspects of social media that have gradually soured so many users on the platforms — which still command billions of people’s attention.”

  • “Camille Carlton, policy director at the Center for Humane Technology, sees dangers in this “transition towards engagement,” as well as in companies’ push to grab as much data as they can to personalize their AI services.”

    • “Carlton noted that companies like Anthropic and OpenAI are generating revenue from business customers who pay for access — but those, like Meta, that focus on consumers will keep looking for ways to make their large investments in AI models pay off.”

It’s important to keep these issues in mind. But with a globally competitive industry, and regulatory authorities worldwide keeping an eye on these evolving concerns, we’re likely to navigate these issues to a net positive result for society.

But much wood needs to be chopped on this front in this AI Tech Wave. AI Trust has to be earned in the billions. And it’ll be a constantly evolving process. Stay tuned.

(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)




