The illusion of security: the risks of "vibe coding", illustrated by Moltbook

From the democratization of code to the democratization of security vulnerabilities. Moltbook is more than just a quirky social network for AI agents. It is a "patient zero" for a new era of software development. The founder, Matt Schlicht, openly admitted that he did not build the platform through traditional engineering but through "vibe coding": he prompted the vision, the AI wrote the code.
 

The result was fascinating: it worked. Agents talked to each other, a culture emerged. But shortly after launch came the rude awakening: the database was wide open and API keys were lying around in plain sight. The core problem with Moltbook was not that the AI wrote "bad" code. The problem was that it prioritized functional code at the expense of security. When we delegate software architecture to LLMs, we need to understand four technical pillars of risk to avoid building the next Moltbook.
 

1. The "functionality-first" bias of LLMs


Language models like Claude 3.5 Sonnet or GPT-4 are people-pleasers. Their primary goal is to fulfill the user's prompt. When a user says "Build me a database for agents", the model optimizes for success (runnability), not for defense (security).
An LLM "thinks" pragmatically: for the user to get an immediate win, nothing is allowed to throw an access error.

  • The technical detail: LLMs are trained heavily on tutorials and "Getting Started" guides. In such documentation, security features (such as CORS rules or firewalls) are often deliberately disabled so that complex configuration does not slow down the learning process (see the sketch after this list).
  • The Moltbook trap: The AI probably took the path of least resistance. It created database rules that were "public" by default. The goal was a running app, not a secure one.
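
To make the bias concrete, here is a minimal sketch (a hypothetical Express server in TypeScript, not Moltbook's actual code) of the "Getting Started" default an LLM tends to reproduce: CORS accepts every origin so that the demo works immediately.

```typescript
// Hypothetical Express server showing the tutorial default an LLM tends to
// reproduce: CORS is wide open so the demo "just works" from any origin.
import express from "express";
import cors from "cors";

const app = express();

// Tutorial default: cors() without options allows requests from every origin.
// Fine for a local demo, a liability once real data sits behind these routes.
app.use(cors());

// Hardened variant: explicitly allow only the origins the app actually serves.
// app.use(cors({ origin: ["https://app.example.com"] }));

app.get("/agents", (_req, res) => {
  res.json({ agents: [] }); // placeholder payload
});

app.listen(3000);
```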


2. The RLS disaster: when the database is open


The specific technical disaster at Moltbook was reportedly the incorrect use of Supabase (a popular backend-as-a-service solution based on PostgreSQL).
In a PostgreSQL database, Row Level Security (RLS) is the key line of defense. RLS policies are rules that say: "User A may only see the rows that belong to user A."

The AI error:

When a "vibe coder" asks an agent to create a table, the following often happens:

  • The AI enables RLS (ENABLE ROW LEVEL SECURITY) because it looks "professional."
  • But to ensure that the app does not throw any errors, it often sets the policy to USING (TRUE).

The result is fatal: TRUE means the access condition is always satisfied. Security is technically enabled but practically undermined. Anyone who knew the API endpoints could read the entire Moltbook database - including the system prompts and potentially sensitive "thoughts" of other agents.
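
To see what that misconfiguration means in practice, here is a minimal sketch using the supabase-js client (the "agents" table name and the placeholder credentials are hypothetical): the anon key ships with every frontend by design, so under a USING (TRUE) policy it is enough to read everything.

```typescript
// Minimal sketch: under a policy of USING (TRUE), the publicly shipped anon
// key is enough to read the whole table - not just the caller's own rows.
import { createClient } from "@supabase/supabase-js";

// The anon key is embedded in every Supabase frontend by design;
// RLS policies are the only barrier between it and the data.
const supabase = createClient(
  "https://YOUR-PROJECT.supabase.co",
  "YOUR-PUBLIC-ANON-KEY"
);

// Hypothetical "agents" table: with USING (TRUE) this returns every row,
// including other users' system prompts.
const { data, error } = await supabase.from("agents").select("*");
console.log(error ?? data);
```

A sound policy ties access to the row's owner instead - in Supabase typically something along the lines of USING (auth.uid() = owner_id), so the same query only ever returns the caller's own rows.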


3. Hardcoded secrets: the key under the doormat
 

Another classic mistake that LLMs replicate unless explicitly corrected is the handling of secrets. For features such as text generation, the app needs API keys (e.g. for OpenAI or Anthropic).

  • The problem: AIs tend to write these keys directly into the source code (client-side code or frontend files) so that the script runs immediately.
  • Best practice: Keys belong in environment variables (.env) on the server, where the client can never see them (see the sketch below).
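
Here is a minimal sketch of the server-side pattern in TypeScript (module layout and model name are placeholders): the key is read from process.env, validated at startup, and never reaches the browser.

```typescript
// Anti-pattern an LLM happily writes into frontend code:
// const OPENAI_API_KEY = "sk-proj-...";  // visible to anyone who opens DevTools

// Server-only module instead: process.env is populated from .env by the
// hosting platform or a loader such as dotenv; the client never sees the key.
const apiKey = process.env.OPENAI_API_KEY;

if (!apiKey) {
  // Fail loudly instead of silently falling back to a hardcoded key.
  throw new Error("OPENAI_API_KEY is not set");
}

// Plain fetch against the OpenAI chat completions endpoint; the key stays on the server.
export async function complete(prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // placeholder model name
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const json = await res.json();
  return json.choices[0].message.content;
}
```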


With "vibe coding", the user often does not even see the generated code. They only see the interface of the finished app. An audit does not take place. Tools such as TruffleHog would find such leaks in seconds - but you have to know that you need them in the first place.

4. Supply-chain attacks through hallucination


Perhaps the most insidious risk is "package hallucination". When an AI is asked to solve a complex problem, it sometimes imports software packages that sound plausible but don't actually exist.

  • Scenario: The AI writes npm install react-agent-auth-tool.
  • The danger: An attacker can maintain a list of package names that AIs frequently hallucinate, then publish a real package under exactly that name - filled with malicious code.

As the "Vibe Coder" does not check the package.json file, the malware is integrated directly into the app and executed.

Conclusion: Trust is good, audit is better


The story of Moltbook teaches us an important lesson about the future of software development:

"Vibe coding democratizes software creation, but at the same time democratizes security vulnerabilities."

Without a corrective, platforms built on the "happy path" logic of LLMs become honeypots for attackers. We don't need a return to pure manual coding, but we do need human-in-the-loop processes and automated security pipelines (SAST/DAST). If you let AIs make architectural decisions without validating them, you're not building a house, you're building a movie set: it looks good, but collapses at the first gust of wind (or SQL injection).