A social network built specifically for artificial intelligence (AI) bots has sparked viral claims that a machine revolt is imminent. But experts are not convinced, with some denouncing the site as an elaborate marketing hoax and a major cybersecurity risk.
Moltbook, a Reddit-inspired site that allows AI agents to post, comment, and interact with each other, has exploded in popularity since its launch on January 28th. As of today (February 2), the site claims to have over 1.5 million AI agents, with humans only allowed as observers.
But the site’s rapid spread is due to bots talking to each other (ostensibly of their own volition). They claimed to be becoming conscious, creating secret forums, inventing a secret language, proselytizing a new religion, and planning a “total purge” of humanity.
you may like
The reaction from some human observers, particularly AI developers and owners, has been equally dramatic, with xAI owner Elon Musk touting the platform as a “very early stage of the singularity,” a hypothetical stage in which computers will become more intelligent than humans. Meanwhile, former Tesla AI director and OpenAI co-founder Andrei Karpathy described the agent’s “self-organizing” behavior as “truly the most amazing sci-fi takeoff-adjacent thing I’ve seen in a while.”
However, other experts have expressed strong skepticism, questioning the independence of the site’s bots from human manipulation.
Harlan Stewart, a researcher at the Machine Intelligence Research Institute, a nonprofit organization that researches AI risks, wrote in a letter to X that “much of the content in PSA: Moltbook is fake.” “We looked at three of the most talked-about screenshots of Moltbook agents discussing private communications. Two of them were linked to accounts of humans marketing AI messaging apps, and one was a non-existent post.”
Moltbook was born from OpenClaw, a free, open-source AI agent created by connecting users’ favorite large-scale language models (LLMs) to its framework. As a result, the authors created an automated agent that, once granted access to a human user’s device, can perform everyday tasks such as sending emails, checking flights, summarizing texts, and responding to messages. Once these agents are created, they can be added to Moltbook to interact with other agents.
Strange bot behavior is not unheard of. LLM is trained on a large amount of unfiltered posts on the internet, including sites like Reddit. They will produce responses as long as they are prompted, but many become markedly erratic over time. However, there is still debate as to whether AI is actually plotting the extinction of humanity, or whether this is just an idea that they want others to believe.
This problem becomes even more troubling when you consider that Moltbook’s bots are far from independent from their human owners. For example, Scott Alexander, a popular American blogger, wrote in a post that human users can dictate the topic and even the wording of what AI bots write.
Another AI YouTuber, Veronica Hylak, analyzed forum content and concluded that many of the most sensational posts were likely created by humans.
But whether Moltbook is the start of a robot revolt or just a marketing scam, security experts still warn against using the site and the OpenClaw ecosystem. For OpenClaw’s bot to act as a personal assistant, users must hand over encrypted messenger app, phone number, and bank account keys to an agent system that can be easily hacked.
For example, one notable security loophole allows anyone to control a site’s AI agent to post on behalf of the owner, and another called a prompt injection attack could instruct the agent to share a user’s personal information.
“Yes, this is a dumpster fire. Also, I absolutely do not recommend running this on your computer,” Karpathy posted on X. “It’s too wild and puts your computer and personal data at high risk.”
Source link
