Security Review Checklist for AI Code

Module 08: Security & Code Quality | Expansion Guide


The Problem

AI generates a login endpoint. Looks great. Runs perfectly. Ships to production. Three days later: your database is leaked. The SQL injection was obvious in hindsight, but AI treated user input like trusted data.

The core issue: AI learns from code examples that include security vulnerabilities.

Stack Overflow, GitHub, tutorials - many contain insecure patterns. AI statistically reproduces these patterns. It doesn't understand threat models; it generates "typical" code. Unfortunately, typical code is often vulnerable code.

The Core Insight

Security review must be systematic, not intuitive.

You can't just "look for issues" - you'll miss things. You need a checklist that forces you to examine every attack surface. The goal isn't perfection, it's reducing the attack surface to acceptable risk levels.

Think like a penetration tester: assume every input is malicious, every database query is a potential injection point, every API call is a chance for exploitation.

The Security Review Checklist

Category 1: Input Validation & Sanitization

| Check | What to Verify | Risk if Missing |
| --- | --- | --- |
| User Input Validation | All inputs validated against schema/regex | XSS, SQL injection, command injection |
| Type Checking | Inputs coerced to expected types | Type confusion attacks |
| Length Limits | Maximum lengths enforced | DoS, buffer overflow |
| Whitelist vs Blacklist | Uses a whitelist of allowed values | Bypass via unexpected inputs |
| File Upload Validation | File type, size, and content verified | Malicious file execution |

Code Review Questions:

AI's Validation Blindspot

AI often validates "happy path" inputs (email format) but misses adversarial inputs (SQL keywords, XSS payloads). Always test with OWASP attack strings.
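The blindspot above is easiest to avoid with explicit whitelist validation. A minimal sketch - the field names, email regex, and role whitelist are illustrative, not a complete schema:

```javascript
// Whitelist validation sketch (assumption: a signup handler; field names,
// the email regex, and the role set below are illustrative).
const EMAIL_RE = /^[\w.+-]+@[\w-]+\.[\w.]+$/;
const ALLOWED_ROLES = new Set(['user', 'editor']); // whitelist: 'admin' can never arrive via input

function validateSignup(input) {
    const errors = [];
    if (typeof input.email !== 'string' || !EMAIL_RE.test(input.email)) {
        errors.push('invalid email'); // rejects adversarial strings like "admin'--@x.com"
    }
    if (typeof input.displayName !== 'string' || input.displayName.length > 50) {
        errors.push('invalid displayName'); // length limit mitigates DoS
    }
    if (!ALLOWED_ROLES.has(input.role)) {
        errors.push('invalid role'); // whitelist beats blacklist
    }
    return errors;
}

console.log(validateSignup({ email: "admin'--@x.com", displayName: 'Eve', role: 'user' }));
// → [ 'invalid email' ]
```

Note that the SQL-injection string fails validation not because the code recognizes SQL, but because the whitelist only admits characters the field legitimately needs.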

Category 2: Authentication & Authorization

Authentication Checks:

Authorization Checks:

// Bad: AI often generates
app.get('/user/:id', async (req, res) => {
    const user = await User.findById(req.params.id);
    res.json(user); // Anyone can view any user!
});

// Good: Authorization check
app.get('/user/:id', authMiddleware, async (req, res) => {
    const requestedId = req.params.id;
    const currentUser = req.user;

    // Users can only view their own data unless admin
    if (requestedId !== currentUser.id && !currentUser.isAdmin) {
        return res.status(403).json({ error: 'Forbidden' });
    }

    const user = await User.findById(requestedId);
    res.json(user);
});

Category 3: Data Protection

Encryption Checks:

Data Exposure Checks:
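One concrete data-exposure check: never serialize a raw database row to the client. A sketch using an allowlist of public fields (field names are illustrative) - deleting known-bad fields is a blacklist that fails silently when a new sensitive column is added, while an allowlist hides new columns by default:

```javascript
// Allowlist serialization sketch (assumption: illustrative field names).
const PUBLIC_FIELDS = ['id', 'name', 'email'];

function toPublicUser(user) {
    const out = {};
    for (const field of PUBLIC_FIELDS) {
        if (user[field] !== undefined) out[field] = user[field];
    }
    return out;
}

const row = {
    id: 1, name: 'Ada', email: 'ada@example.com',
    passwordHash: '$2b$12$fakehashfakehashfakehash', resetToken: 'abc123',
};
console.log(toPublicUser(row)); // → { id: 1, name: 'Ada', email: 'ada@example.com' }
```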

Category 4: Injection Attacks

SQL Injection Prevention:

# Bad: String concatenation
query = f"SELECT * FROM users WHERE email = '{email}'"

# Good: Parameterized queries
query = "SELECT * FROM users WHERE email = ?"
db.execute(query, [email])

Other Injection Types:

Category 5: Cross-Site Scripting (XSS)

XSS Prevention Checklist:

// Bad: Direct injection
element.innerHTML = userInput;

// Good: Escaped rendering
element.textContent = userInput;

// If HTML needed: Sanitize first
import DOMPurify from 'dompurify';
element.innerHTML = DOMPurify.sanitize(userInput);

Category 6: API Security

API-Specific Checks:

The Mass Assignment Trap

// Bad: AI often generates
app.patch('/user/:id', async (req, res) => {
    await User.update(req.params.id, req.body);
    // User can update ANY field, including isAdmin!
});

// Good: Explicit field allowlist
app.patch('/user/:id', async (req, res) => {
    const allowedFields = ['name', 'email', 'bio'];
    const updates = {};
    for (const field of allowedFields) {
        if (req.body[field] !== undefined) {
            updates[field] = req.body[field];
        }
    }
    await User.update(req.params.id, updates);
    res.json({ updated: Object.keys(updates) });
});

Category 7: Dependency Security

Third-Party Code Checks:

The Walkthrough: Security Audit Process

Step 1: Automated Scanning (5 minutes)

# Run security linters
npm audit
npm run lint:security  # ESLint security plugin
snyk test

# Static analysis
semgrep --config=auto

# Dependency check
npm outdated

Step 2: Manual Code Review (15-30 minutes)

Go through the checklist above, focusing on:

  1. All user input entry points
  2. All database queries
  3. All authentication/authorization logic
  4. All API endpoints

Step 3: Test with Attack Vectors (10 minutes)

# SQL Injection test (localhost URL is illustrative)
curl -X POST http://localhost:3000/login -d "email=admin'--&password=x"

# XSS test
curl -X POST http://localhost:3000/comment -d "text=<script>alert(1)</script>"

# IDOR test
# Login as user A, try to access user B's resources

# Rate limit test
# Send 100 requests in 1 second, should be blocked

Failure Patterns

1. Trust AI Without Verification

Symptom: AI said "secure," so you ship it.

Fix: Run automated scanners. AI doesn't know your threat model.

2. Fixing Symptoms, Not Root Cause

Symptom: You escape one XSS payload, but miss the pattern.

Fix: If one XSS sink exists, audit ALL rendering paths; if one SQL injection exists, audit ALL queries. Patterns repeat.

3. Security Theater

Symptom: You validate email format but allow SQL keywords.

Fix: Think like an attacker. What's the actual risk?

Quick Reference

30-Second Security Scan:

Critical Security Rules:

  1. Never trust user input (validate everything)
  2. Use parameterized queries (never string concatenation)
  3. Check authorization on every endpoint
  4. Hash passwords (bcrypt with 12+ rounds)
  5. Escape output (prevent XSS)
  6. Rate limit authentication endpoints
  7. Keep dependencies updated
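Rule 6 (rate limiting) can be sketched as a minimal fixed-window limiter. This is single-process only; production systems typically use Redis-backed counters or a library such as `express-rate-limit`:

```javascript
// Fixed-window rate limiter sketch (assumption: single-process app;
// state lives in memory and is lost on restart).
function createRateLimiter(maxRequests, windowMs) {
    const hits = new Map(); // key -> { count, windowStart }
    return function isAllowed(key, now = Date.now()) {
        const entry = hits.get(key);
        if (!entry || now - entry.windowStart >= windowMs) {
            hits.set(key, { count: 1, windowStart: now }); // new window
            return true;
        }
        entry.count += 1;
        return entry.count <= maxRequests;
    };
}

const allow = createRateLimiter(5, 60_000); // 5 attempts per minute per key
for (let i = 1; i <= 6; i++) {
    console.log(i, allow('10.0.0.1', 0)); // first 5 true, 6th false
}
```

Keying on IP is the usual starting point for login endpoints; for credential stuffing you may also want a per-account counter.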

AI Security Audit Prompt:

"Review this code for security vulnerabilities:
1. SQL injection via string concatenation
2. XSS via unsanitized rendering
3. Missing authentication/authorization checks
4. Secrets hardcoded in source
5. Unsafe deserialization
6. Command injection risks
7. Timing attacks in authentication

List every vulnerability found with severity."

Fast Fail Criteria (Reject Immediately):