Lovable? More Like Hackable.

I Found 16 Security Vulnerabilities in a Lovable-Showcased EdTech App With 18,000+ Users — Including Students From Top US Universities


In December 2025, Lovable closed a $330M Series B at a $6.6 billion valuation. The Swedish AI startup had crossed $200M ARR in just twelve months. Collins Dictionary named "vibe coding" the word of the year. 25 million projects. 500 million app visits. The message was clear: anyone can build software now.

And that's exactly the problem.

I spent a few hours poking at one of Lovable's own showcased applications — an AI-powered EdTech platform featured on lovable.dev with 100,000+ views and nearly 400 upvotes. The kind of app Lovable points to and says: look what you can build with us.

What I found was a masterclass in everything that can go wrong when AI writes your backend.


The App

I'm not naming it — responsible disclosure is still in progress. But here's what you need to know:

It's an AI-powered exam creation and grading platform. Teachers upload course material, the AI generates exams, students submit answers, the AI grades them. The founder built the product to solve a real problem he observed at a top US university.

The app is hosted on Lovable's own infrastructure (*.lovable.app), featured on their official showcase, and runs on the standard Lovable stack: React frontend, Supabase backend.

It has real users. A lot of them.

Roughly 18,000 of them — that's how many user records are exposed through an unauthenticated API endpoint that returns complete user data in paginated batches. No login required.

Teachers, students, university administrators. Email addresses, full names, user roles, account metadata. Users from UC Berkeley. UC Davis. Schools in Sweden, Spain, Belgium, Nigeria, Malaysia, the Philippines. K-12 institutions with minors likely on the platform.

All accessible to anyone with a browser and cURL.


The Findings: 16 Vulnerabilities, 6 Critical

Here's what a few hours of security testing uncovered:

The Core Bug: A SQL Logic Error That AI Would Never Catch

The platform uses Supabase RPC functions (server-side PostgreSQL functions) for sensitive operations. The access control logic — which might have slipped through AI code generation without proper review — was implemented like this:

IF auth.role() = 'authenticated' THEN
  RAISE EXCEPTION 'Access denied';
END IF;

Read that carefully. The logic says: if you're a logged-in user, deny access. The intent was to block non-admins. But here's what actually happens:

  • Authenticated users (role = 'authenticated'): Blocked. ✅
  • Anonymous users (role = 'anon'): Not 'authenticated', so... access granted. ❌

This is backwards. The guard blocks the people it should allow and allows the people it should block. A classic logic inversion that a human security reviewer would catch in seconds — and that an AI code generator, optimizing for "code that works," produced and shipped to production.
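
For reference, here is roughly what the guard presumably intended, sketched under the assumption that the function was meant to be admin-only (is_admin() is a hypothetical helper, not the app's real code):

-- Deny anonymous callers outright, then deny authenticated non-admins.
-- is_admin() is a hypothetical stand-in for the app's actual admin check.
IF auth.role() <> 'authenticated' OR NOT is_admin(auth.uid()) THEN
  RAISE EXCEPTION 'Access denied';
END IF;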

This single pattern was replicated across multiple critical functions.

What an Unauthenticated Attacker Could Do

Delete any user account — including every teacher, every student, every administrator. One API call per account. No authentication required. CVSS 9.8.

curl -X POST 'https://[app-id].supabase.co/rest/v1/rpc/[delete_function]' \
  -H 'apikey: [public_anon_key]' \
  -H 'Content-Type: application/json' \
  -d '{ "target_email": "victim@university.edu" }'
# Response: 200 OK, account deleted, no authentication required

Grant themselves unlimited premium credits — the platform's monetization runs on a credit system. Any anonymous user could set any account's credits to any value. Or drain them to zero. CVSS 9.5.

Access every user record — a pagination RPC function exposes complete user data in batches. Name, email, role, sign-up date, credit balance. 18,697 records accessible. No authentication required. CVSS 9.1.

Send bulk emails through the platform's email infrastructure — the send-bulk-emails endpoint had zero authentication. A proof of concept confirmed that an unauthenticated attacker could send emails through the platform's infrastructure to any address. CVSS 8.6.

Grade real student submissions — the schedule-easy-grading endpoint accepts unauthenticated requests. A proof of concept confirmed that any anonymous user could trigger grading on real student submissions. CVSS 8.5.

Access all enterprise organizations — 14 enterprise clients exposed, including their admin emails, member lists, and organizational structure. Universities, international schools, and business academies across multiple countries.

Access all courses with active invite tokens — 481 courses exposed with valid join codes, meaning anyone could enroll themselves in any course.

And more. Exposed recording data. A publicly listable storage bucket. An OpenAPI schema that mapped every database table and function. System email templates readable by anyone.


The User Data

Let me be specific about the scale of the breach potential:

  • Total user records exposed: 18,697
  • Unique email addresses: 14,928
  • Professor accounts (all with emails): 4,538
  • Student accounts (all with emails): 10,505
  • Enterprise users (with full PII): 870

The top email domain? Gmail, with 12,547 accounts (67%). Then institutional domains from universities and schools across three continents.

And here's a detail that captures the vibe coding ethos perfectly: the app's landing page claims "35,000+ users." The endpoint exposes 18,697 records. The marketing number appears to be hardcoded in the frontend — a vanity metric that was never connected to real data. A nice-looking number that nobody verified.


The Regulatory Exposure

This isn't hypothetical legal risk. These are real regulatory frameworks that apply to real users on this platform:

FERPA (US) — Student education records from UC-system universities were exposed. FERPA violations can result in the complete loss of federal funding for the institutions involved. The universities likely have no idea their students' data is accessible to anyone on the internet.

GDPR (EU) — Users from Sweden, Spain, Belgium, and other EU countries. Full PII exposure. Under GDPR, this could trigger fines of up to €20 million or 4% of global revenue.

COPPA (US) — While the platform doesn't explicitly target children, K-12 school domains are present in the user base. When a platform handles data from school environments where children under 13 may be present, COPPA obligations around verifiable parental consent and data protection come into question.

NDPR (Nigeria), PDPA (Malaysia), DPA (Philippines) — Users from all three jurisdictions, each with its own data protection requirements.

A single vibe-coded app, hosted on Lovable's infrastructure, potentially violating data protection laws across six or more jurisdictions. And it's being showcased as a success story.


Why This Matters Beyond One App

I'm not writing this to shame one developer. The founder identified a real problem and built a product that teachers actually use. That deserves respect.

But this is just one example.

A security researcher scanned 1,645 apps built with Lovable and found that 170 of them had critical flaws: personal data, API keys, and payment records accessible to anyone, with the underlying misconfiguration rated 9.3 out of 10 for severity. These tools produce code that works. They do not produce code that is secure. Hardcoded secrets, missing input validation, client-side authentication — it's all there if you look.

If you're building something that handles real customer data, AI-generated code is not a shortcut. It's a risk. There is no replacement for developers who understand the code they are shipping.

I'm writing this because this app is not an outlier. It's the expected outcome of the current vibe coding paradigm.

The Numbers

Research from December 2025 found 69 security vulnerabilities across just five vibe coding tools, including half a dozen critical flaws. The Veracode 2025 GenAI Code Security Report found that nearly 45% of AI-generated code contains security flaws. Palo Alto's Unit 42 has documented real-world breaches of vibe-coded applications. And Invicti's research shows that authentication and authorization failures are recurring and systemic in AI-generated code — not edge cases.

The Structural Problem

Lovable generates React + Supabase apps. Supabase provides Row Level Security (RLS) and role-based access — but these are features you have to consciously implement. An AI generating code from natural language prompts optimizes for "it works" — it creates functions that execute successfully, return values that look correct, and UIs that feel polished.
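
Supabase's defaults make the "consciously implement" part concrete: a table in the public schema with RLS disabled is generally wide open through the auto-generated API. A minimal sketch of the opt-in protection, assuming a hypothetical profiles table keyed by the user's auth ID:

-- Deny by default: with RLS enabled and no policies, nobody can read anything
ALTER TABLE public.profiles ENABLE ROW LEVEL SECURITY;

-- Then explicitly allow each authenticated user to read only their own row
CREATE POLICY "read own profile" ON public.profiles
  FOR SELECT TO authenticated
  USING (auth.uid() = id);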

What it doesn't do is think adversarially. It doesn't ask: what happens if someone calls this function without logging in? It doesn't understand that auth.role() = 'authenticated' in a denial clause creates the opposite of the intended security boundary. It doesn't know that a pagination function without rate limiting is a full database dump waiting to happen.

The AI produces code that passes the vibe check. It does not produce code that passes a security audit.

The Showcase Problem

This is where Lovable's responsibility comes into play. When you feature an application on your official showcase — when you give it a product page on lovable.dev, when 100,000 people view it, when you hold it up as an example of what your platform can do — you are implicitly endorsing it.

Lovable's landing page currently says:

"Lovable generates complete web applications with databases, user authentication, and hosting included."

Authentication was included. It just didn't work.

I reported the issue to Lovable's support team. The ticket was closed without further follow-up.

If Lovable is going to market itself as a platform that generates production-ready apps with authentication "included," it bears some responsibility for the security posture of the apps it generates and promotes. You can't showcase an app to 100,000 people, host it on your own infrastructure, and then close the ticket when someone tells you it's leaking user data. At minimum, a basic security scan of showcased applications would have caught every critical finding in this report.


The Bigger Picture: From Vibe Coding to Vibe Hacking

The term "vibe hacking" is starting to appear in security circles, and it captures the dynamic perfectly. If non-technical people can now build full-stack apps by describing what they want, then non-technical people can also find and exploit vulnerabilities in those apps — because the vulnerabilities are predictable, systematic, and follow patterns that are inherent to how AI generates code.

You don't need to be a hacker to exploit a vibe-coded app. You need to understand one thing: AI-generated code defaults to functionality over security. If you know that, you know where to look.

Every Lovable app uses Supabase. Every Supabase project has a public anon key (it's in the frontend JavaScript). Every RPC function that doesn't explicitly deny anon access... allows it. The attack surface isn't hidden. It's documented.
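
And the remedy for that default is equally mechanical. Postgres grants EXECUTE on functions to PUBLIC by default, which is exactly why "doesn't explicitly deny" means "allows." A sketch, with my_sensitive_function standing in for any real RPC name:

-- Revoke the default grant, then re-grant only to roles that should call it
REVOKE EXECUTE ON FUNCTION public.my_sensitive_function FROM PUBLIC, anon;
GRANT EXECUTE ON FUNCTION public.my_sensitive_function TO authenticated;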

This is not a sophisticated nation-state attack vector. This is "open your browser's developer tools and read the network requests." And that's what makes it dangerous — because the apps being built this way are handling real data, from real users, in regulated industries like education and healthcare.


What Needs to Change

For Lovable and similar platforms:

  • Implement automated security scanning for any app featured in your showcase
  • Add security guardrails to the code generation process — if the AI creates an RPC function, it should default to denying anonymous access
  • Provide security-focused templates and prompts, not just functional ones
  • Be transparent about the security limitations of AI-generated code

For vibe coders building production apps:

  • If your app handles user data, you need a security review. Period.
  • "It works" is not the same as "it's secure"
  • Supabase RLS is not optional. It's the minimum
  • The anon key is public. Every function callable with that key is your attack surface
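
On that last point, you can enumerate your own attack surface. A sketch of an audit query, runnable in the Supabase SQL editor, that lists every public function the anon role is allowed to execute:

-- Every function in the public schema that the anon role may call
SELECT p.proname AS function_name
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname = 'public'
  AND has_function_privilege('anon', p.oid, 'EXECUTE');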

For the industry:

  • We need "vibe security" tools as urgently as we need vibe coding tools
  • Regulatory bodies need to understand that AI-generated software is in production, handling sensitive data, today
  • The conversation about vibe coding can't just be about speed and accessibility. It has to include security and accountability


Disclosure

I've initiated responsible disclosure with the application's developer. This article intentionally omits the application's name, specific URLs, function names, and API endpoints to prevent exploitation while disclosure is in progress.

The full report documents all 16 findings with proof-of-concept demonstrations, CVSS scores, and a prioritized remediation roadmap. Most critical vulnerabilities can be fixed in under 5 minutes with a single SQL command.


About the Author

Taimur @ Volods

Volods is a product development and security consultancy working with startups and growth-stage companies in education, healthcare, and fintech. We build products, audit security, and help teams scale without leaving the doors unlocked.

If your app was built with AI and you've never had a security review, we should talk.

📧 taimur@volods.com | 🌐 volods.com


The irony of vibe coding is that it makes building software feel effortless — and that feeling of effortlessness is exactly what makes it dangerous. When you forget the code exists, you also forget the vulnerabilities exist. And they don't forget about you.
