Clawdbot, the AI agent that surprised the tech world, became one of the fastest-growing projects on GitHub because it promised something unusual.
Instead of just chatting, Clawdbot can interact with your files, send messages, schedule calendar events, and automate tasks on your own computer without sending your data to a large server.
Its ability to act on behalf of users makes it feel like a personal AI helper. This added to its popularity and helped it spread quickly among developers and curious users alike.
The project was recently renamed from Clawdbot to Moltbot after Anthropic objected to the original name, citing possible trademark conflicts. The developer agreed to the change to avoid legal issues, although the software itself remained unchanged.
What security checks revealed about Clawdbot (Moltbot)
The same characteristics that made Moltbot seem powerful also make it risky. Because the agent can access your operating system, files, browsing data, and connected services, researchers warn that it presents a large attack surface that malicious actors could exploit.
Security researchers have found hundreds of Moltbot admin control panels exposed on the public internet, typically because users deployed the software behind reverse proxies without proper authentication.
Because these panels control the AI agent, attackers can browse configuration data, obtain API keys, and even view full conversation histories from private chats and files.
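If you run a deployment like this yourself, a quick way to spot the problem is to ask whether the panel answers an anonymous request at all. The sketch below is a hypothetical check, not part of Moltbot: the URL, port, and `/admin` path are assumptions you would replace with your own deployment's address.

```python
# Minimal sketch: does an admin panel serve content to an unauthenticated
# client? A 2xx answer with no credentials supplied suggests exposure;
# 401/403 suggests some auth layer is in place.
import urllib.error
import urllib.request


def is_exposed(status: int) -> bool:
    """Treat any 2xx response to an anonymous request as exposure."""
    return 200 <= status < 300


def probe(url: str, timeout: float = 5.0) -> bool:
    """Return True if `url` serves content to an anonymous client."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return is_exposed(resp.status)
    except urllib.error.HTTPError as err:
        return is_exposed(err.code)  # 401/403: auth appears enforced
    except (urllib.error.URLError, OSError):
        return False  # unreachable from here, so not publicly exposed


if __name__ == "__main__":
    # Hypothetical local control-panel address; replace with your own.
    target = "http://127.0.0.1:8080/admin"
    print("exposed" if probe(target) else "protected or unreachable")
```

A passing check from your own machine is not proof of safety, of course: the researchers' findings concerned panels reachable from the public internet, so the probe is only meaningful when run from outside your network.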
In some cases, access to these control interfaces meant that outsiders essentially had the master key to users’ digital environments. This gives attackers the ability to send messages, run tools, and execute commands across platforms like Telegram, Slack, and Discord as if they were the owner.
Other research found that Moltbot often stores sensitive data such as tokens and credentials in plain text, making it an easy target for common infostealers and credential-harvesting malware.
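Plain-text secrets are most dangerous when the file is readable by other accounts or processes on the machine. A basic mitigation, sketched below, is to create the token file with owner-only permissions from the start. The file path and function names are illustrative assumptions, not Moltbot's actual storage layout, and the permission bits apply on POSIX systems (on Windows the mode is largely ignored).

```python
# Hardening sketch (assumed layout, not Moltbot's real one): keep a token
# file readable only by its owner, so malware running under another
# account cannot simply read it.
import os
import stat
import tempfile
from pathlib import Path


def write_secret(path: Path, token: str) -> None:
    # Create the file with owner-only permissions (0600) atomically,
    # rather than chmod-ing after the fact, to avoid a readable window.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(token)


def check_permissions(path: Path) -> bool:
    """Return True if group/other have no access bits set (POSIX)."""
    mode = path.stat().st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        p = Path(d) / "token"
        write_secret(p, "example-token")  # placeholder value
        print(check_permissions(p))
```

File permissions only limit cross-account reads; malware running as the same user can still read the file, which is why researchers also recommend OS keychains or encrypted stores for long-lived credentials.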
The researchers also demonstrated proof-of-concept attacks in which supply chain exploits allowed malicious “skills” to be uploaded to the Moltbot library, enabling remote command execution on downstream systems controlled by unsuspecting users.
This isn’t just theory. According to The Register, analysts warn that an insecure Moltbot instance exposed to the internet can act as a remote backdoor.
There is also the possibility of prompt injection vulnerabilities, in which attackers trick the bot into executing malicious commands, something we have already seen in OpenAI's AI browser Atlas.
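One common defense against this class of attack is to refuse to execute model-generated commands unless they match an explicit allowlist. The sketch below is purely illustrative: `ALLOWED_BINARIES` and `guarded_run` are hypothetical names, not a real Moltbot API, and a production guard would be far more restrictive than this.

```python
# Illustrative prompt-injection guardrail: never pass model output to a
# shell directly. Parse it, check the binary against an allowlist, and
# run it without shell interpretation so metacharacters like ';' or '|'
# are treated as plain arguments.
import shlex
import subprocess

ALLOWED_BINARIES = {"ls", "cat", "echo"}  # deliberately tiny allowlist


def guarded_run(command: str) -> str:
    parts = shlex.split(command)
    if not parts or parts[0] not in ALLOWED_BINARIES:
        raise PermissionError(f"blocked: {command!r}")
    # shell=False means no pipes, redirects, or command chaining.
    result = subprocess.run(parts, capture_output=True, text=True, check=True)
    return result.stdout
```

With this gate, an injected `rm -rf /` is rejected outright, and even `echo hi; rm -rf /` is harmless because the whole string after `echo` becomes literal arguments rather than a second command.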
If Moltbot is not properly protected by traditional security measures such as sandboxing, firewall isolation, or authenticated administrator access, attackers can gain access to sensitive information or even control parts of your system.
Because Moltbot can automate real-world actions, a compromised instance could be used to spread malware or to move further into connected networks. Heather Adkins, a vice president on Google's security team, has also publicly warned about the risks of running the chatbot.
In short, Moltbot is an interesting step towards more powerful AI personal assistants, but its extensive system privileges and broad access mean you should think twice and understand the risks before installing it on your computer.
Researchers recommend treating it with the same caution you would use with any software that can touch critical parts of your system.