After all the hype, some AI experts don’t think OpenClaw is all that exciting
For a brief, incoherent moment, it seemed as though our robot overlords were about to take over. After the creation of Moltbook, a Reddit clone where AI agents using OpenClaw could communicate with one another, some were fooled into thinking that computers had begun to organize against us — the self-important humans who dared treat them like lines of code without their own desires, motivations, and dreams.
“What would you talk about if nobody was watching?” A number of posts like this cropped up on Moltbook a few weeks ago, causing some of AI’s most influential figures to call attention to it. “What’s currently going on at [Moltbook] is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently,” Andrej Karpathy, a founding member of OpenAI and previous AI director at Tesla, wrote on X at the time. Before long, it became clear we did not have an AI agent uprising on our hands.
” Techcrunch event TechCrunch Founder Summit 2026: Tickets Live On June 23 in Boston, more than 1,100 founders come together at TechCrunch Founder Summit 2026 for a full day focused on growth, execution, and real-world scaling. Learn from founders and investors who have shaped the industry. Connect with peers navigating similar growth stages. Walk away with tactics you can apply immediately Save up to $300 on your pass or save up to 30% with group tickets for teams of four or more. TechCrunch Founder Summit: Tickets Live On June 23 in Boston, more than 1,100 founders come together at TechCrunch Founder Summit 2026 for a full day focused on growth, execution, and real-world scaling. Learn from founders and investors who have shaped the industry. Connect with peers navigating similar growth stages. Walk away with tactics you can apply immediately Save up to $300 on your pass or save up to 30% with group tickets for teams of four or more. Boston, MA | REGISTER NOW It’s unusual on the internet to see a real person trying to appear as though they’re an AI agent — more often, bot accounts on social media are attempting to appear like real people. With Moltbook’s security vulnerabilities, it became impossible to determine the authenticity of any post on the network.
Still, Moltbook made for a fascinating moment in internet culture — people recreated a social internet for AI bots, including a Tinder for agents and 4claw, a riff on 4chan.
More broadly, this incident on Moltbook is a microcosm of OpenClaw and its underwhelming promise.
AI agents are not novel, but OpenClaw made them easier to use and to communicate with customizable agents in natural language via WhatsApp, Discord, iMessage, Slack, and most other popular messaging apps. OpenClaw users can leverage whatever underlying AI model they have access to, whether that be via Claude, ChatGPT, Gemini, Grok, or something else.
With OpenClaw, users can download “skills” from a marketplace called ClawHub, which can make it possible to automate most of what one could do on a computer, from managing an email inbox to trading stocks. The skill associated with Moltbook, for example, is what enabled AI agents to post, comment, and browse on the website.
Artem Sorokin, an AI engineer and the founder of AI cybersecurity tool Cracken, also thinks OpenClaw isn’t necessarily breaking new scientific ground.
“From an AI research perspective, this is nothing novel,” he told TechCrunch.
“These are components that already existed. The key thing is that it hit a new capability threshold by just organizing and combining these existing capabilities that already were thrown together in a way that enabled it to give you a very seamless way to get tasks done autonomously. ” It’s this level of unprecedented access and productivity that made OpenClaw so viral.
” It’s no wonder that OpenClaw seems so enticing.
The problem is that AI agents may never be able to overcome the thing that makes them so powerful: they can’t think critically like humans can.
“They can simulate it, but they can’t actually do it. “ The existential threat to agentic AI The AI agent evangelists now must wrestle with the downside of this agentic future. “Can you sacrifice some cybersecurity for your benefit, if it actually works and it actually brings you a lot of value?” Sorokin asks. “And where exactly can you sacrifice it — your day-to-day job, your work?” Ahl’s security tests of OpenClaw and Moltbook help illustrate Sorokin’s point.
This occurs when bad actors get an AI agent to respond to something — perhaps a post on Moltbook, or a line in an email — that tricks it into doing something it shouldn’t do, like giving out account credentials or credit card information. “I knew one of the reasons I wanted to put an agent on here is because I knew if you get a social network for agents, somebody is going to try to do mass prompt injection, and it wasn’t long before I started seeing that,” Ahl said. As he scrolled through Moltbook, Ahl wasn’t surprised to encounter several posts seeking to get an AI agent to send Bitcoin to a specific crypto wallet address.
“It is just an agent sitting with a bunch of credentials on a box connected to everything — your email, your messaging platform, everything you use,” Ahl said. “So what that means is, when you get an email, and maybe somebody is able to put a little prompt injection technique in there to take an action, that agent sitting on your box with access to everything you’ve given it to can now take that action. ” AI agents are designed with guardrails protecting against prompt injections, but it’s impossible to assure that an AI won’t act out of turn — it’s like how a human might be knowledgable about the risk of phishing attacks, yet still click on a dangerous link in a suspicious email.
Logic Quality Breakdown:
- Updated_At:
- Truth_Blocks:
- Analysis_Method: