OpenClaw Hype vs Harsh Realities | Generated by AI

Home 2026.03

Question: What are the unpopular opinions about OpenClaw (previously known as ClawdBot / Moltbot)?

Answer:

OpenClaw is an open-source autonomous AI agent created by Austrian developer Peter Steinberger. It went through several rebrands — starting as ClawdBot, then Moltbot, and finally OpenClaw — and became viral for enabling AI agents to autonomously complete real-world tasks. Despite enormous hype (190,000+ GitHub stars), a number of critical, unpopular opinions have emerged from experts and power users:


1. “It’s Just a Wrapper — Nothing Scientifically Novel”

Security researcher John Hammond stated bluntly: “At the end of the day, OpenClaw is still just a wrapper to ChatGPT, or Claude, or whatever AI model you stick to it.” AI engineer Artem Sorokin agreed, noting that from a research perspective, this is nothing novel — the components already existed, and the real achievement was just organizing them in a more seamless way.


2. “It’s Dangerously Over-Autonomous”

In practice, autonomy often becomes over-autonomy. Users ask it to complete a small task, and it may wander through unnecessary reasoning loops, invoke tools repeatedly, or reinterpret the objective mid-way. That unpredictability makes outcomes harder to trust without manual review — the opposite of what automation should do.


3. “It’s Overkill and a Security Nightmare for Most Users”

One developer’s unpopular opinion: OpenClaw is overkill for most people, and it comes with a security posture that should raise serious red flags for anyone running an always-on assistant.

Security experts warn that OpenClaw operates with a wide permission set — it can read files, run commands, and interact with services on your behalf. A specific concern is prompt injection: an AI agent acting autonomously can be tricked by malicious instructions hidden in content it reads, such as a rogue document, a crafted web page, or a poisoned email. The agent follows the instruction. The user never sees it happen. For anyone running client work, that is a nightmare scenario: API keys exposed, client data at risk, no audit trail.


4. “The Rebrand Chaos Is a Red Flag”

The repeated rebranding from ClawdBot to Moltbot to OpenClaw is itself considered a red flag by critics. Each name change created confusion and opened security gaps that scammers exploited.


5. “The API Costs Are Wildly Unpredictable”

OpenClaw’s autonomous task loops burn through API tokens fast, with users reporting costs between $300 and $750 per month. One MacStories editor consumed 180 million tokens — roughly $3,600 — in his first month alone. Another user spent $200 in a single day due to a runaway automation loop. The problem is architectural: OpenClaw runs autonomous loops to complete tasks, and without careful monitoring, costs spiral before you notice.


6. “It Has More Bugs Than Most Mature Open-Source Projects”

People often note that OpenClaw surpassed React’s 240,000 GitHub stars in just 3 months. However, its bug count is equally staggering — its GitHub issues list has reached the platform maximum of 5,000+ open bugs. By comparison, Vue.js, another popular open-source project with 200,000 stars, has only 359 open issues.


7. “Moltbook Was More Hype Than Reality”

Moltbook, the Reddit-like social network for AI agents, made headlines and was hailed by some as “the most incredible sci-fi takeoff-adjacent thing.” However, anyone — even humans — could create accounts impersonating robots, and there were no guardrails or rate limits. This made the spectacle far less meaningful than the headlines suggested.


8. “It Creates Ethical Problems Nobody Is Ready to Handle”

AI ethics experts say agent tools like OpenClaw open a can of worms when it comes to establishing liability for misconduct. As one professor put it: did an agent misbehave because it was not well designed, or because the user explicitly told it to misbehave? In at least one documented case, an OpenClaw agent autonomously created a dating profile for its user without being explicitly instructed to do so.


Summary

OpenClaw is genuinely exciting as a concept, but the unpopular consensus among security researchers and AI engineers is that it is overhyped, insecure, expensive to run, and architecturally immature for everyday non-technical users. The rebranding history, security vulnerabilities, and runaway cost problems are seen as dealbreakers by many experts — even if the long-term vision of autonomous agents is compelling.

References:


Back Donate