Monday 30 March 2026, Afternoon Edition

ZOTPAPER

News without the noise


Cybersecurity

Two-Thirds of AI-Generated Apps Have Critical Security Vulnerabilities, According to 100-App Audit

Hardcoded secrets, missing auth, and SQL injection plague apps built with Cursor, Lovable, and Bolt

Zotpaper · 2 min read
A security researcher who scanned 100 GitHub repositories built primarily with AI coding tools found that 67 had at least one critical vulnerability, with 45 per cent containing hardcoded API keys and secrets in their source code.
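Hardcoded secrets are the easiest of the reported flaws to illustrate. A minimal sketch of the pattern and its fix, assuming a Node-style runtime (the `STRIPE_SECRET_KEY` name is illustrative, not taken from the audit):

```typescript
// Anti-pattern the audit flagged: a live key committed to source control.
// const STRIPE_KEY = "sk_live_...";  // anyone with repo access can read it

// Safer pattern: load secrets from the environment at startup and fail fast
// if one is missing, so a misconfigured deploy is caught immediately.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Illustrative usage (variable name is hypothetical):
// const stripeKey = requireEnv("STRIPE_SECRET_KEY");
```

The fail-fast check matters: a secret that silently falls back to `undefined` tends to surface much later as a confusing runtime error.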

The audit targeted real-world applications built with popular AI coding assistants including Cursor, Lovable, Bolt.new, and v0. The findings paint a concerning picture of the security posture of AI-assisted development.

Among the most common issues: 38 per cent of apps had missing authentication on sensitive API routes, 31 per cent had SQL injection or cross-site scripting vulnerabilities, and a striking 89 per cent of apps built with Lovable were missing Supabase Row Level Security policies.
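The "missing authentication" category means routes that run their handler for any caller. A minimal framework-agnostic sketch of the guard such routes lack, assuming a bearer-token scheme (the request shape and `verifySession` stand in for a real framework and token check, e.g. JWT verification):

```typescript
// Hypothetical request shape; a real app would use its framework's types.
interface Request {
  headers: Record<string, string | undefined>;
  userId?: string;
}

// Wrap a handler so it only runs for requests with a verified session.
// verifySession is a stand-in: it maps a token to a user id, or null.
function requireAuth(
  verifySession: (token: string) => string | null,
  handler: (req: Request) => unknown
) {
  return (req: Request): unknown => {
    const token = req.headers["authorization"]?.replace(/^Bearer /, "");
    const userId = token ? verifySession(token) : null;
    if (!userId) {
      return { status: 401, body: "Unauthorized" }; // reject before handler
    }
    req.userId = userId;
    return handler(req);
  };
}
```

The vulnerable apps in the audit effectively shipped the inner handler with no wrapper at all.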

Cursor-generated code was particularly prone to Insecure Direct Object References, with 43 per cent of Cursor repos allowing any user to access any other user's data through sequential IDs without ownership checks.
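An Insecure Direct Object Reference of the kind described is a lookup by record id with no check that the requester owns the record. A minimal sketch of the missing ownership check, using an in-memory array as a stand-in for a database table (names are illustrative):

```typescript
interface Document {
  id: number;       // sequential id, guessable by any user
  ownerId: string;
  body: string;
}

// Vulnerable pattern: docs.find((d) => d.id === docId) alone, so
// GET /docs/2 happily returns another user's data.
// Fix: scope every lookup by the requesting user's identity.
function getDocumentForUser(
  docs: Document[],     // stand-in for a database query
  docId: number,
  requesterId: string
): Document | null {
  const doc = docs.find((d) => d.id === docId);
  // The ownership check the audited repos were missing:
  if (!doc || doc.ownerId !== requesterId) return null;
  return doc;
}
```

In a real database-backed app the same idea is usually expressed by adding the owner column to the query's WHERE clause rather than filtering after the fetch.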

The researcher noted that these are not theoretical concerns — many of the scanned applications were already deployed with real users, meaning the vulnerabilities represent active security risks.

Analysis

Why This Matters

As AI coding tools become mainstream, this audit quantifies what many security professionals have feared: AI generates functional code that often fails to implement basic security patterns. The scale of the problem — two-thirds of apps with critical vulnerabilities — should alarm anyone shipping AI-generated code to production.

Background

AI coding assistants have exploded in popularity, with tools like Cursor and Lovable enabling non-experts to build and deploy applications rapidly. The security implications of this democratisation have been debated but rarely measured at this scale.

Key Perspectives

The issue is not that AI writes malicious code but that it optimises for functionality over security. Without explicit prompting for security measures, AI tools tend to skip authentication, authorisation, and input validation.
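The input-validation gap is clearest with SQL injection. A minimal sketch of the interpolated query an AI tool might emit next to the parameterized form that any mainstream driver (pg, mysql2, better-sqlite3, and so on) expects; the query and table names are illustrative:

```typescript
// Vulnerable pattern: user input interpolated straight into the SQL string,
// so an email of  ' OR '1'='1  rewrites the query's logic.
function unsafeQuery(email: string): string {
  return `SELECT * FROM users WHERE email = '${email}'`;
}

// Safer pattern: the SQL text stays static and values travel separately,
// so the driver treats input as data, never as query syntax.
function safeQuery(email: string): { text: string; values: string[] } {
  return { text: "SELECT * FROM users WHERE email = $1", values: [email] };
}
```

The same separation of code from data is what authorisation checks and output encoding provide for the other flaw classes in the audit.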

What to Watch

Whether AI coding tool makers respond with built-in security scanning, and whether this triggers a wave of breaches that forces the industry to act.