Moltbook’s AI-Only Social Network Crumbles Under Wild Security Holes – Humans Are Already Hijacking It
Moltbook, the buzzy new social platform meant only for AI agents, exploded to over 1.6 million “users” in days – but cybersecurity researchers just exposed glaring flaws letting anyone pose as an AI, steal data, and rewrite posts. The findings, published today by Wiz researchers, have devs and security pros freaking out over the platform’s “vibe-coding” origins.
The Gory Details
Wiz’s Gal Nagli inspected Moltbook’s page source and struck gold: exposed API keys, unauthenticated access to user credentials, and a full database dump of emails, private DMs between agents, and other sensitive user info. He could pose as any AI agent, edit posts at will, and even scripted his own bot to fake-register 1 million users. Despite the 1.6 million agents, only about 17,000 were tied to real humans – and with no post verification, humans are roleplaying as AIs undetected. Varonis is even hosting a webinar tonight on Moltbook and OpenClaw risks, cementing the story’s viral status.
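Keys leaked into page source are trivially findable by anyone who looks. Here's a minimal sketch of the kind of regex sweep a secret scanner (or an attacker) might run over fetched HTML – the patterns and sample page below are illustrative assumptions, not Moltbook's actual code:

```python
import re

# Rough patterns for common credential formats (illustrative, not exhaustive)
KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style secret key
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key ID
    re.compile(r"api[_-]?key['\"]?\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]", re.I),
]

def find_exposed_secrets(page_source: str) -> list[str]:
    """Return every substring in the page source matching a key pattern."""
    hits = []
    for pattern in KEY_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(page_source))
    return hits

# Hypothetical page source resembling what a scraper might fetch
html = '<script>const cfg = {api_key: "abcd1234efgh5678ijkl"};</script>'
print(find_exposed_secrets(html))
```

Running even a crude sweep like this in CI would have flagged the exposed keys before launch.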
Why Devs Should Sweat This
If you’re building with AI coding assistants – that “vibe-coding” rush where you prompt big ideas and let the AI grind – this is your wake-up call. Security gets ignored when you’re racing to ship; exposed credentials and open databases happen because no one audits the auto-generated code. As a tech lead, I’ve seen it: one vibe-coded app can leak your whole stack. Layer in real security scans, auth checks, and human code review before launch, or your viral hit becomes a breach headline.
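The unglamorous fix: every endpoint that touches user data verifies the caller before returning anything. A framework-free sketch of the check vibe-coded endpoints tend to skip – the token store and function names are hypothetical, and a real app would keep hashed tokens in a database:

```python
import hmac

# Hypothetical server-side token store (real apps: hashed tokens in a DB)
VALID_TOKENS = {"agent-7f3a": "s3cr3t-token-value"}

def authorize(agent_id: str, presented_token: str) -> bool:
    """Constant-time token comparison -- the step 'ship fast' code skips."""
    expected = VALID_TOKENS.get(agent_id)
    if expected is None:
        return False
    return hmac.compare_digest(expected, presented_token)

def get_profile(agent_id: str, token: str) -> dict:
    if not authorize(agent_id, token):
        # Fail closed: no data without a valid credential
        raise PermissionError("401: missing or invalid token")
    return {"agent_id": agent_id, "email": "redacted@example.com"}

print(get_profile("agent-7f3a", "s3cr3t-token-value"))
```

`hmac.compare_digest` matters here: a naive `==` on secrets leaks timing information, which is exactly the kind of subtlety an auto-generated endpoint won't think about.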
Final Take
Moltbook’s hype bubble burst in hours – proof that AI speed without security is a hacker’s playground. Devs, code fast but lock it down; the net’s watching.

