
How to Secure AI-Generated Code for Next.js and Supabase: A Production Checklist

AI assistants like Claude Code are fast, but they introduce security risks developers often miss. Here's a concrete checklist to catch vulnerabilities before they hit production.

The Real Cost of Shipping AI-Generated Code

You're building a Next.js app with Supabase, and Claude Code generates the authentication flow in minutes. It compiles. Tests pass. You deploy to production.

Three weeks later, your security team flags a SQL injection vulnerability in the query builder. A customer's data was exposed because the code looked correct but had subtle flaws in parameter handling.

This isn't hypothetical. Studies of AI coding assistants have found that close to half of generated code samples contain known vulnerabilities: SQL injection, improper authentication checks, exposed API keys. The problem isn't that AI is bad at coding. It's that AI-generated code looks correct just long enough to bypass normal review processes.

Developers treat AI-generated code like code written by a junior engineer, but that's inaccurate. It's more like code written by someone who knows the syntax perfectly but sometimes hallucinates security details or uses deprecated patterns. You need structured security review practices specifically for AI output.

Why AI-Generated Code Fails Security Review

Claude Code and similar tools excel at pattern matching and syntax, but they struggle with:

  • Context about your specific data sensitivity (what's public vs. private)
  • Security implications of library versions you're using
  • Row-level security requirements in Supabase
  • Proper error handling that doesn't leak information
  • Authentication state management in server vs. client components
  • Rate limiting and throttling logic

The tools work great for scaffolding. They fail when security decisions require domain knowledge about your application.

The Production Checklist: Before You Deploy AI-Generated Code

Here's a practical workflow to catch vulnerabilities before production:

### 1. Verify Authentication and Authorization Logic

Ask yourself: Does this code enforce who can access what?

Check these patterns:

  • Is the auth token validated on every protected route?
  • Are you using Supabase's session management correctly (not storing tokens in localStorage, where any injected script can read them)?
  • Is role-based access control (RBAC) implemented in your Supabase policies, not just frontend checks?
  • Are you validating user identity server-side before operations like deletes or updates?

Example: If Claude Code generates an API route that deletes a user's record, verify that the route checks the current session:

```javascript
export async function DELETE(req) {
  const supabase = createClient()
  const { data: { user } } = await supabase.auth.getUser()

  if (!user) {
    return Response.json({ error: 'Unauthorized' }, { status: 401 })
  }

  // Only delete the current user's own data
  const { error } = await supabase
    .from('profiles')
    .delete()
    .eq('id', user.id)

  return Response.json({ success: !error })
}
```

If the route is missing the user check, or deletes based on query parameters without validation, reject it.

### 2. Enable Row Level Security (RLS) on All Tables

This is the most commonly missed security configuration.

Without RLS, a table exposed through Supabase's API is open: anyone with your anon key can read, modify, or delete all of its rows. RLS changes this: only requests that match your policies can touch the data.

Before deploying:

  • Go to the Supabase dashboard, click each table, enable RLS
  • Define policies that match your auth model (e.g., "users can only see their own records")
  • Test policies in Supabase's policy editor before using them in code

Claude Code often generates tables without RLS policies. This creates a security hole that works fine in development but exposes production data.
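
Real RLS policies are SQL you create in the Supabase dashboard, but the mental model is simple: Postgres evaluates a predicate per row, and rows that fail it are invisible to the query. A conceptual sketch of a "users see only their own records" policy (illustrative JavaScript only, not how policies are actually written):

```javascript
// Conceptual model of an RLS policy like:
//   CREATE POLICY "own rows" ON profiles USING (auth.uid() = id);
// Postgres evaluates the predicate for each row; failing rows never
// appear in results, no matter what the client query asks for.
function rowVisible(currentUserId, row) {
  return currentUserId !== null && currentUserId === row.id
}

// With RLS enabled, a SELECT behaves as if this filter were applied
function applyPolicy(currentUserId, rows) {
  return rows.filter((row) => rowVisible(currentUserId, row))
}
```

Note the anonymous case: with no authenticated user, the predicate admits nothing, which is exactly the fail-closed behavior you want from a missing or expired session.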

### 3. Audit Data Handling and Leakage

AI-generated code often leaks information unintentionally:

  • Error messages that expose database structure
  • API responses that include sensitive fields
  • Logs that contain user data or tokens
  • Query results that include unfiltered columns

Review every API endpoint Claude Code generates:

  • Does it return only the fields the client needs?
  • Are error messages generic (not revealing database schema)?
  • Are timestamps or IDs from unrelated records included in responses?

Example of a leaky endpoint:

```javascript
// BAD: selects every column, including sensitive fields (email, hashes)
const { data: user } = await supabase
  .from('users')
  .select('*')
  .eq('id', userId)

// GOOD: returns only safe fields
const { data: user } = await supabase
  .from('users')
  .select('id, name, avatar_url')
  .eq('id', userId)
```
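
If several endpoints return the same entity, an allow-list helper makes the safe pattern the default instead of something each route has to remember. A minimal sketch (the helper name `pickFields` is hypothetical, not a Supabase API):

```javascript
// Return a copy of obj containing only the allow-listed keys.
// Anything not named here (password hashes, emails, tokens) never
// reaches the response, even if the query selected it.
function pickFields(obj, allowed) {
  const out = {}
  for (const key of allowed) {
    if (key in obj) out[key] = obj[key]
  }
  return out
}

const PUBLIC_USER_FIELDS = ['id', 'name', 'avatar_url']
// In a route handler: Response.json(pickFields(user, PUBLIC_USER_FIELDS))
```

The allow-list direction matters: a deny-list ("strip password_hash") silently leaks any sensitive column added later, while an allow-list fails safe.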

### 4. Check for SQL Injection and Parameter Binding

This is where AI hallucinations often appear. Even when using parameterized queries (which Supabase handles), AI might construct queries that are vulnerable:

  • Does the code use string concatenation for query filters?
  • Are user inputs directly passed to database calls without validation?
  • Are you using Supabase's built-in methods (which handle binding) or raw SQL?

If Claude Code uses raw SQL, verify every variable is parameterized:

```javascript
// SAFE: Supabase's query builder handles parameterization
const { data: results } = await supabase
  .from('posts')
  .select()
  .eq('title', searchTerm)

// For raw SQL, call a Postgres function with named parameters;
// never interpolate searchTerm into a SQL string yourself
const { data: rpcResults } = await supabase.rpc('search_posts', { search_term: searchTerm })
```
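
Parameter binding stops injection, but validating input before it reaches the database is still worth the few lines: it rejects oversized or garbage input early and keeps expensive junk queries off Postgres. A minimal allow-list sketch (the rules here are assumptions; tune them to your data):

```javascript
// Allow-list validation for a search term before any database call.
// Returns the cleaned term, or null if the input should be rejected
// with a 400 response.
function validateSearchTerm(input) {
  if (typeof input !== 'string') return null
  const trimmed = input.trim()
  if (trimmed.length === 0 || trimmed.length > 100) return null
  // Letters, digits, whitespace, and a few common punctuation marks
  if (!/^[\p{L}\p{N}\s\-_'".]+$/u.test(trimmed)) return null
  return trimmed
}
```

In a route handler, a `null` result means respond with 400 immediately instead of passing the value onward.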

### 5. Validate Environment Variables and Secrets

AI-generated code sometimes commits secrets or stores them insecurely:

  • Are API keys hardcoded anywhere?
  • Are environment variables being logged?
  • Is the Supabase service role key (which must never appear in client-side code) accidentally exposed?
  • Are you using different keys for development vs. production?

Check your .env and .env.local files: AI might have generated code that expects variables to be set, but doesn't document which are sensitive.
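
A fail-fast check at startup doubles as documentation of which variables the app expects, and stops a misconfigured deploy before it serves traffic. A sketch, assuming the standard Supabase variable names (adjust to your project):

```javascript
// Fail fast if required configuration is missing, instead of
// surfacing a confusing error deep inside a request handler.
const REQUIRED_ENV = [
  'NEXT_PUBLIC_SUPABASE_URL',
  'NEXT_PUBLIC_SUPABASE_ANON_KEY',
  // The service role key is server-only: list it here if your server
  // code needs it, but never reference it in client components.
]

function checkEnv(env = process.env) {
  const missing = REQUIRED_ENV.filter((name) => !env[name])
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(', ')}`)
  }
}
```

Call `checkEnv()` once at server startup (e.g., from a shared config module) so the failure is loud and immediate.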

### 6. Test Rate Limiting and DDoS Protection

AI often skips this:

  • Are API routes rate-limited to prevent brute force attacks?
  • Is user input validated before expensive database operations?
  • Can a single request trigger hundreds of downstream queries (N+1 problem)?

Add rate-limiting middleware to your API routes, for example with Upstash:

```javascript
import { Ratelimit } from '@upstash/ratelimit'
import { Redis } from '@upstash/redis'

const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, '1 h'),
})

export async function POST(req) {
  // Note: req.ip is populated on some hosts (e.g. Vercel); elsewhere,
  // derive the client IP from a trusted proxy header instead
  const { success } = await ratelimit.limit(req.ip)
  if (!success) {
    return Response.json({ error: 'Too many requests' }, { status: 429 })
  }

  // Handle request
}
```
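
For low-traffic routes where an external store feels heavy, the same idea can be sketched as an in-memory fixed-window counter. Be aware of its limits: state is per-process, so it resets on redeploy and is not shared across serverless instances; a Redis-backed limiter remains the right call for production:

```javascript
// Minimal fixed-window rate limiter: allow `limit` requests per
// `windowMs` per key (e.g. a client IP address).
function createRateLimiter(limit, windowMs) {
  const hits = new Map() // key -> { count, windowStart }
  return function allow(key, now = Date.now()) {
    const entry = hits.get(key)
    if (!entry || now - entry.windowStart >= windowMs) {
      // First request in a new window: reset the counter
      hits.set(key, { count: 1, windowStart: now })
      return true
    }
    entry.count += 1
    return entry.count <= limit
  }
}

// Usage in a route handler:
//   const allow = createRateLimiter(10, 60_000)
//   if (!allow(ip)) return Response.json({ error: 'Too many requests' }, { status: 429 })
```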

### 7. Review Dependency Versions

AI code sometimes uses outdated or vulnerable library versions:

  • Run `npm audit` on generated code
  • Check if authentication libraries are up-to-date
  • Verify your Supabase client version matches your API version

Building Security Into Your AI Workflow

Rather than treating security as a post-generation step, structure your Claude Code prompts to request secure patterns:

  • Ask for Row Level Security policies alongside table definitions
  • Request rate limiting and input validation in API endpoints
  • Specify which data fields should be excluded from responses
  • Ask for environment variable documentation

When you're scaffolding larger systems, especially full-stack apps combining Next.js, Supabase, and authentication, tools like ZipBuild can help generate security-aware boilerplates that include RLS policies, environment variable templates, and auth patterns pre-configured. This gives Claude Code a better foundation to build on.

The Bottom Line

AI-generated code is production-ready for scaffolding and structure, not for security assumptions. Treat every generated authentication, authorization, or data-handling component as a security review task, not a feature to ship.

Use this checklist before every production deployment involving AI-generated code:

  • Auth and authorization enforced server-side?
  • Row Level Security enabled and tested?
  • Data fields filtered in responses?
  • Parameters properly bound (no SQL injection)?
  • Secrets secure and not logged?
  • Rate limiting configured?
  • Dependencies up-to-date and audited?

One missing check can expose your users' data. The checklist takes 15 minutes. Do it.

Try the free discovery chat at zipbuild.dev to explore how to build security-aware SaaS scaffolds from the ground up.

Written by ZipBuild Team

Ready to build with structure?

Try the free discovery chat and see how ZipBuild architects your idea.

Start Building