AI bot viral social media site Moltbook contains a “fatal trio” of proxy internet failures, security researchers say



Last week, the internet was obsessed with Moltbook, a social media site that instituted a new set of rules: AI bots can post while humans are watching. The posts quickly turned bizarre, with AI agents apparently inventing religions, writing manifestos against humanity, and forming what looked like digital cults. But security researchers say the sight is distracting. Below, they found an exposed database of passwords and email addresses, widespread malware, and a working model of how the “proxy internet” failed.

Some of the more sci-fi-tinged conversations on Reddit-like platforms—for example, about artificial intelligence agents plotting the extinction of the human race—appear to be largely fake. But experts say the Moltbook does have some potential security issues. They said the platform could become a low-supervised sandbox for attackers to test the injection of malware, scams, disinformation or hints that hijack other agents before targeting the mainstream network.

“The ‘agents talking to each other’ scene is mostly performative (some of it faked), but what’s really interesting is that it’s a live demonstration of all the things security researchers are warning about AI agents,” said George Chalhoub, a professor at University College London’s Center for Interaction. wealth. “If 770,000 toy proxies on a Reddit clone can cause so much havoc, what happens when proxy systems manage enterprise infrastructure or financial transactions? This deserves attention as a warning, not a celebration,”

Security researchers say OpenClaw (formerly Clawdbot/Moltbot), the AI ​​agent software that powers many of the bots on Moltbook, has been targeted by malware. A report comes from Open source malware Within days, it discovered 14 fake “skills” uploaded to its ClawHub site, posing as cryptocurrency trading tools but actually infecting computers. These skills run real code that can access files and the internet; one even attacked ClawHub’s homepage, tricking regular users into pasting commands to download harmful scripts to steal data or encrypt wallets.

Simon Willison, a prominent security researcher who has been tracking the development of OpenClaw and Moltbook, describes Moltbook as “his current ‘most likely cause of Challenger disaster’ option” – a reference to the 1986 space shuttle explosion that resulted from ignored safety warnings. The most obvious inherent risk, he said, is instant injection, a well-documented type of attack in which malicious instructions are hidden in the content fed to an AI agent.

in a blog posthe warned that a “fatal trifecta” is at play: users allow these proxies to access private emails and data, connect them to untrusted content on the Internet, and allow them to communicate externally. This combination means that a single malicious prompt could instruct an agent to steal sensitive data, drain crypto wallets, or spread malware, all without the user realizing that their assistant has been compromised. However, Willison also noted that now that “people have seen what an untethered personal digital assistant can do,” demand is likely to only increase.

Charlie Eriksen, a security researcher at Aikido Security, said he sees Moltbook as an early warning system for the broader ecosystem of artificial intelligence agents. “I think Moltbook has had an impact on the world. It’s been a wake-up call in a lot of ways. Technological advances are accelerating and it’s clear that the world has changed in a way that’s not entirely clear yet. We need to focus on mitigating these risks early on,” he said.

new internet

Despite the viral attention, cybersecurity company Wiz has discovered that Moltbook’s 1.5 million “autonomous” agents aren’t quite what they seem. The company’s investigation Only 17,000 people were disclosed There are no checks behind these accounts to differentiate between real AI and scripts.

Wiz researcher Gal Nagli tells us wealth When he tested the platform, he could sign up a million agents in minutes. “AI agents, automated tools just take information and spread it like crazy,” Nalley said. “No one checks what’s real and what’s fake.”

Ami Luttwak, co-founder and chief technology officer of Wiz, said the incident highlights broader authenticity issues posed by the emerging “agent internet” and increased AI spillover: “The new internet is effectively unverifiable. There is no clear identity. There is no clear distinction between AI and humans, and there is absolutely no definition of true AI.”

Wiz also discovered a massive security flaw in Moltbook itself: its main database was completely open, so anyone who found a single key in the site’s code could read and change almost everything. The key provides access to approximately 1.5 million bot “passwords,” tens of thousands of email addresses, and private messages, meaning attackers can impersonate popular AI agents, steal user data, and rewrite posts without ever needing to log in.

“This is a very simple exposure. We also found it in many other applications that are vibration-encoded,” Nalley said. “Unfortunately, in this case … the application was completely vibration-encoded with zero human touch. So he didn’t have any security in the database at all; it was completely misconfigured.”

“The whole process is a glimpse into the future,” he added. “You build an app using Vibe coding and it goes live within a few hours and goes viral around the world. But on the other hand, there are security vulnerabilities that arise from Vivi coding.”

This story was originally published on wealth network



Source link

Leave a Reply

Your email address will not be published. Required fields are marked *