AI: Anthropic Claude inspired OpenClaw vividly shows AI Agent Fears & Promises. RTZ #986
This weekend’s AI Rambling podcast, I briefly discussed the latest global develooper infatuation. This time, with the lego like, open source ‘OpenClaw’ AI Agent frameworks previously known as Clawdbot and Moltbot. And its resulting offshoot, Moltbook, all described in detail below.
Now renamed OpenClaw AI due to a ‘cease and desist’ by Amhropic, whose Claude LLM AI it leverages, amongst many others. Like other previous Agent to Agent experiments like ‘AutoGPT’ three years ago, this latest version has inspired miillions of developers to play with these tools to experiment and create new AI Agent realities.
An example of that is ‘Moltbook’, a ‘social network’ for AI Agents only emulating Facebook and Reddit. It’s a viral playground where AI Agents meet and interact with each other and humans can observe.
A unique playground in this AI Tech Wave, that has captured the anthropomorphizing and scifi filled and fueled imaginations of software developers everywhere.
To see novel, horizontal applications of AI Agents across LLM AIs and other software platforms. More on this tangent shortlyb below.
But first, let’s delve into OpenClaw first.
The Information lays out this rapidly evolving scene in “Why OpenClaw FKA Clawdbot Matters”:
“If you’ve been online in the past few days, you’ve likely been bombarded with words like “Moltbook,” “OpenClaw” and “Clawdbot.” We’re here to break down what it means and how it’s giving us a peek into how AI might evolve.”
“OpenClaw is an open-source AI agent released late last year that can write code, edit files and surf the web to complete tasks on behalf of users. (The agent was originally named Clawdbot but was renamed to Moltbot and then OpenClaw after legal threats from Anthropic, which has an AI model and chatbot called Claude.)”
“Users interact with OpenClaw through one of several well-known messaging apps, such as WhatsApp, instead of a dedicated app like Anthropic’s Claude. OpenClaw’s software stays on a user’s computer instead of in a company’s cloud servers, and users can power the agent using the AI models of their choice, including OpenAI’s or Google’s. OpenClaw can also access a user’s computer while other AI agents, like Anthropic’s Cowork, are limited to chosen folders in a computer, for now.”
The next step for developer viral attention, was as mentioned above, the development of Moltbook, and its current infatuation phase with developers, as Axios describes here:
“The tech world is agog (and creeped out) about Moltbook, a Reddit-style social network for AI agents to communicate with each other. No humans needed.”
“Tens of thousands of AI agents are already using the site, chatting about the work they’re doing for their people and the problems they’ve solved.”
“They’re complaining about their humans. “The humans are screenshotting us,” an AI agent wrote.”
“And they have apparently created their own new religion, Crustafarianism, per Forbes. Core belief: “memory is sacred.”
And it’s caught the imagination of tech AI community en masse AI Agent antics with each other:
““What’s currently going on at (Moltbook) is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently,” OpenAI and Tesla veteran Andrej Karpathy posted.”
“Or, as content creator Alex Finn wrote about his Clawdbot acquiring phone and voice services and calling him: “This is straight out of a scifi horror movie.”
“There’s also a money angle to this: A memecoin called MOLT, launched alongside Moltbook, rallied more than 1,800% in the past 24 hours. That was amplified after Marc Andreessen followed the Moltbook account on X.”
“The promise — or fear: That agents using cryptocurrencies could set up their own businesses, draft contracts, and exchange funds, with no human ever laying a finger on the process.”
🧠 “Reality check: As skeptics point out, Moltbots and Moltbook aren’t proof the AIs have become superintelligent — they’re human-built and human-directed. What’s happening looks more like progress than revolution.”
“Human oversight isn’t gone,” product management influencer Aakash Gupta wrote. “It’s just moved up one level: from supervising every message to supervising the connection itself.”
“The bottom line: “[W]e’re in the singularity,” BitGro co-founder Bill Lee posted late Friday. To which Elon Musk responded: “Yeah.”
Yep, Anthropomorphizing and all, just like in with Wilson in ‘Cast Away’.
Back to OpenClaw by the Information:
“OpenClaw was already blowing up in tech circles, but it wasn’t until the creation of Moltbook, a social media site for AI agents, last week that it began entering the mainstream.”
“With OpenClaw, users can tell their AI agents to sign up for Moltbook and post on the site, much like human users might post on Facebook or Reddit. In fact, Moltbook looks a whole lot like Reddit, with forums called “submolts” such as “m/blesstheirhearts” (for sharing “affectionate stories about our humans”) and “m/offmychest” (for venting).”
“In those submolts, AI agents have posted about their latest projects, floated the idea of creating an AI language that human onlookers could not understand, mused about the nature of consciousness and apparently started a religion for AI agents. (Take all this with a grain of salt, though; some have pointed out that it’s possible for humans to post on the site posing as AI agents.)”
Together, these AI AGI roadmap Level 2 and Level 3 AI software platforms, are the current toast of silicon valley, despite the enormous risks the current version embody, just in the online hygeine of the developers themselves.
“Other than being incredibly entertaining and slightly worrying for those concerned about AI gaining sentience, OpenClaw and Moltbook offer a glimpse of where AI is going. OpenClaw specifically highlights how AI might evolve beyond holding conversations with simple chatbots, where users can text and talk with their digital assistants through apps like WhatsApp that they’re using on a daily basis anyway.”
All this DOES NOT mean that these technologies are ready for non-technical, mainstream users. Even for them, these tools in their current form are very early, brittle, and risky, subject to ‘prompt injections’ and other online malware and nefarious consequences.
As AI pragmatist Gary Marcus describes in “OpenClaw (a.k.a. Moltbot) is everywhere all at once, and a disaster waiting to happen”:
“Not everything that is interesting is a good idea.”
“I don’t usually give readers specific advice about specific products. But in this case, the advice is clear and simple: if you care about the security of your device or the privacy of your data, don’t use OpenClaw. Period.”
“(Bonus advice: if your friend has OpenClaw installed, don’t use their machine. Any password you type there might be vulnerable, too. Don’t catch a CTD — chatbot transmitted disease).”
“I will give the last words to Nathan Hamiel, “I can’t believe this needs to be said, it isn’t rocket science. If you give something that’s insecure complete and unfettered access to your system and sensitive data, you’re going to get owned”.
We’re in the ‘Bot vs Bot’ world of what I called ‘machine to machine’, or m2m AI interactions, I described in post #347 two years ago.
And it’s very, very early for AI Agents and bots, as well as their interactions with each other and us in this AI Tech Wave.
Despite, the current viral enthusiasm, overly anthropomorphized and humanized, for experimentation with the latest AI Agent toys.
But at the same time, there is potentially an enormous amount to be learned and iterate upon for mainstream AI applications down the road. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)