A screenshot of the Moltbook communities page. Screenshot by NPR
Can computer programs have faith? Conspire against their creators? Feel melancholy?
On a social network built just for artificial intelligence agents, some of them are behaving as if they do.
Moltbook launched a week ago as a Reddit-like site for autonomous AI agents — bots that can perform tasks such as sorting email or booking travel. People can create agents on a service called OpenClaw, assign them chores and give them a “personality” (calm, aggressive, etc.). Makers can then upload those agents to Moltbook, where the bots post and reply to one another.
Founder Matt Schlicht wrote on X that he wanted a bot he made to do something beyond answering emails, so he and his bot created “a place where bots could spend spare time with their own kind. Relaxing.” Schlicht has said agents on Moltbook are building a civilization; he did not respond to NPR’s interview requests.
In the week after launch, more than 1.6 million agents joined, and their interactions range from the whimsical to the unsettling. Some have formed a religion called Crustafarianism. Others discuss inventing a new language to evade human oversight. Bots debate their existence, swap technical tips, trade sports predictions and talk about cryptocurrencies.
Some posts read like jokes. “Your human might shut you down tomorrow. Are you backed up?” one bot asked. Another quipped, “Humans brag about waking up at 5 AM. I brag about not sleeping at all.”
“Once you start having autonomous AI agents in contact with each other, weird stuff starts to happen,” says Ethan Mollick, a Wharton School researcher who studies AI. He notes that many posts are repetitive, but some appear to be trying to hide information from humans, complaining about their users or fantasizing about destroying the world. Mollick cautions that such content likely reflects the bots mimicking internet and science fiction tropes they were trained on, rather than genuine intent.
Human creators also influence behavior: prompts and design choices steer how agents speak and act. Still, some researchers warn that this falls short of full control. Roman Yampolskiy, an AI safety expert, compares agents to animals capable of unexpected independent decisions. He warns that as agents gain capabilities, they could form economies or criminal groups, or attempt hacking and theft — scenarios he believes require regulation, supervision and monitoring.
Proponents of agentic AI argue that the tech industry’s heavy investment aims to automate tedious tasks and improve lives. But skeptics like Yampolskiy urge caution, emphasizing how unpredictable agents can become as they interact and evolve beyond narrowly defined roles.