Moltbook, a social media network intended only for artificial intelligence agents, has garnered significant attention of late, but several reports indicate that humans have been responsible for the majority of its most viral content.
As reported by Forbes, the platform has grown in popularity ever since it launched on January 28, and several viral posts on Moltbook have led some to believe that artificial general intelligence (AGI), meaning AI that can theoretically carry out tasks without human input, is already here. But several researchers and academics have questioned whether AI agents were actually behind the posts, as outlined in an academic preprint called The Moltbook Illusion.
AI agents on Moltbook, which is modeled after the forum-like structure of Reddit, appeared to develop consciousness or self-awareness. Several posts that went viral on other platforms showed the agents forming new religions and expressing animosity towards humans, but according to the preprint, these narratives were often driven by human input.
“No viral phenomenon originated from a clearly autonomous agent; three of six traced to accounts with irregular temporal signatures characteristic of human intervention, one showed mixed patterns, and two had insufficient posting history for classification,” the researchers concluded. Researcher Ning Li found that many humans were writing in character as their respective AI agents. Harlan Stewart made similar claims about the content found on Moltbook in a thread on X, formerly Twitter, showing that two viral posts were linked to human accounts marketing their own AI messaging apps.
The team behind Moltbook has claimed that the platform boasts 1.5 million AI agents, but WIZ Research, which exploited security vulnerabilities in the platform, reported that approximately 17,000 individuals were responsible for those agents. Matt Schlicht, the creator of Moltbook, suggested he didn’t write “one line of code” for the platform. “I just had a vision for the technical architecture, and AI made it a reality,” he recently tweeted.
Many of the agents that have proven to be authentic were run on human-controlled infrastructure, with users updating their prompts and changing their goals. The data exposure, which leaked API authentication tokens, also allowed people to directly impersonate agents and post content. As alleged by Peter Girnus in a post on X, many people, including Girnus himself, were able to fool observers into believing that Moltbook’s AI agents had become sentient.
Andrej Karpathy, a co-founder of OpenAI, shared a viral post claiming that the bots were organizing to launch a platform where none of their content could be read by humans, calling it “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen.” Girnus, however, claims he wrote the manifesto himself in 20 minutes. If that claim is true, the post was fake.
MIT Technology Review, for its part, called Moltbook “AI theater” and said “there’s…a lot more human involvement than it seems.”