title:: JavaScript SEO: Rendering, Hydration, and Crawlability for React, Next.js, and Vue description:: A developer's guide to JavaScript SEO covering server-side rendering, hydration pitfalls, crawlability for SPAs, and framework-specific solutions. focus_keyword:: JavaScript SEO guide category:: developers author:: Victor Valentine Romo date:: 2026.03.20
JavaScript SEO: Rendering, Hydration, and Crawlability for React, Next.js, and Vue
Quick Summary
- What this covers: JavaScript SEO — rendering strategies, hydration pitfalls, and crawlability for React, Next.js, Vue, and Angular
- Who it's for: developers and technical SEOs working on JavaScript-heavy sites
- Key takeaway: Read the first section for the core framework, then use the specific tactics that match your situation.
JavaScript SEO is the practice of ensuring that search engine crawlers can discover, render, and index content generated by JavaScript frameworks. Googlebot uses a headless Chromium instance to render JavaScript, but the process introduces delays, resource constraints, and failure modes that don't exist with server-rendered HTML. If your site relies on client-side rendering for content that needs to rank, you have an SEO engineering problem.
The core challenge is render timing. Googlebot processes pages in two phases — crawl (fetches HTML) and render (executes JavaScript). The gap between these phases can range from seconds to days, and content that only exists after JavaScript execution is invisible during the crawl phase.
How Googlebot Processes JavaScript
The Two-Phase Indexing Pipeline
Phase one: Googlebot fetches the URL and receives the initial HTML response. This HTML is parsed immediately. Links discovered in this HTML are added to the crawl queue. Content present in this HTML is available for indexing.
Phase two: The fetched page enters the render queue. Googlebot's Web Rendering Service (WRS) executes JavaScript using a headless Chromium instance, producing the final DOM. Content that only appears after JavaScript execution becomes available for indexing at this point.
The problem: phase two is resource-constrained. Google maintains a rendering budget, and pages compete for rendering resources globally. High-profile sites get rendered quickly. Smaller sites may wait hours or days. During that gap, JavaScript-dependent content is invisible to search.
What Googlebot Can and Cannot Render
Googlebot renders most modern JavaScript, including React, Vue, Angular, and vanilla JS applications. It supports ES6+, async/await, Promises, the Fetch API, and modern CSS. It does not support WebSocket connections, service workers, IndexedDB in the rendering context, or content that requires user interaction (clicks, scrolls, or hovers) to appear.
Content behind authentication walls, lazy-loaded content that requires scroll events, infinite scroll implementations without pagination fallbacks, and content rendered only after user interaction will not be indexed.
Rendering Budget and Crawl Efficiency
Every page Googlebot renders consumes computational resources. Sites with thousands of pages face a practical constraint: not every page will be rendered promptly. Crawl budget — the number of URLs Googlebot will crawl per session — is separate from rendering budget, but both limit how much of your site gets processed.
Reducing JavaScript execution time, minimizing render-blocking resources, and serving complete HTML without JavaScript dependency directly improve how efficiently Googlebot processes your site.
Server-Side Rendering: The SEO Default
Why SSR Solves Most JavaScript SEO Problems
Server-side rendering generates complete HTML on the server before sending it to the client. When Googlebot fetches the page, the HTML already contains all content, links, and metadata. No rendering queue. No JavaScript execution delay. No content visibility gap.
Next.js (React), Nuxt.js (Vue), and Angular Universal all provide SSR capabilities. For SEO-critical pages — landing pages, blog posts, product pages, category pages — SSR eliminates the primary technical risk.SSR Implementation in Next.js
Next.js provides three rendering strategies relevant to SEO:
getServerSideProps — renders on every request. Use for pages with frequently changing content where freshness matters for search.
getStaticProps — renders at build time, optionally with Incremental Static Regeneration (ISR). Use for content that changes infrequently — blog posts, documentation, product descriptions. ISR lets you set a revalidation interval without rebuilding the entire site.
generateStaticParams (App Router) — the App Router equivalent for generating static pages from dynamic routes.
For SEO purposes, static generation with ISR is the optimal strategy for most content: fast server response times, complete HTML on first request, and automated content freshness.
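A minimal sketch of a page data loader using static generation with ISR — the fetchPost helper, the slug route, and the one-hour revalidation interval are illustrative assumptions, not from a specific codebase:

```javascript
// pages/blog/[slug].js — ISR sketch. fetchPost is a hypothetical data loader;
// in a real Next.js project, getStaticProps would be exported from the page module.
async function fetchPost(slug) {
  // Stand-in for a CMS or database call.
  return { slug, title: `Post: ${slug}` };
}

async function getStaticProps({ params }) {
  const post = await fetchPost(params.slug);
  return {
    props: { post },  // passed to the page component at build/revalidation time
    revalidate: 3600, // ISR: regenerate this page at most once per hour
  };
}
```

Because the HTML is generated ahead of time, Googlebot's first fetch receives complete content; the revalidate interval handles freshness without a full rebuild.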
SSR Implementation in Nuxt.js
Nuxt 3 defaults to universal rendering — SSR on first request with client-side navigation afterward. For SEO-critical pages, ensure the ssr: true configuration is active (it's the default). Use the useAsyncData or useFetch composables to load data server-side.
Static site generation via nuxt generate produces fully pre-rendered HTML files. This approach works for content sites but requires rebuilds when content changes.
SSR Implementation in Vue with Vite
For Vue applications not using Nuxt, Vite's SSR support requires manual configuration. Create an entry point for server rendering that produces HTML strings, and a separate client entry for hydration. The complexity is significant compared to Nuxt's built-in SSR — use Nuxt unless you have specific reasons not to.
Hydration Pitfalls That Break SEO
What Hydration Does
Hydration attaches JavaScript event handlers to server-rendered HTML, making static markup interactive. The server sends complete HTML. The browser renders it immediately (fast first paint). Then JavaScript loads and "hydrates" the existing DOM rather than rebuilding it.
Hydration Mismatch Errors
When the server-rendered HTML doesn't match what the client-side JavaScript produces, React and Vue throw hydration mismatch errors. The framework may discard the server-rendered DOM and re-render from scratch, producing a flash of incorrect content or layout shift.
For SEO, hydration mismatches create two problems: Googlebot may index the server-rendered content that differs from the hydrated content, and layout shift during hydration degrades Core Web Vitals scores.
Common causes: date/time formatting that differs between server and client timezones, browser-specific APIs called during server render, conditional rendering based on window or document objects, and random ID generation that produces different values on server and client.
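One of these causes — timezone-dependent date formatting — can be reproduced outside any framework. A sketch (the locale and timezones are illustrative):

```javascript
// Sketch: the same timestamp serializes differently depending on timezone,
// so server-rendered HTML and client-rendered HTML disagree — a hydration mismatch.
function renderTimestamp(ms, timeZone) {
  return new Intl.DateTimeFormat('en-US', {
    timeZone,
    dateStyle: 'short',
    timeStyle: 'short',
  }).format(new Date(ms));
}

const t = Date.UTC(2026, 2, 20, 23, 30); // 2026-03-20 23:30 UTC
const serverHtml = renderTimestamp(t, 'UTC');                 // what a UTC server renders
const clientHtml = renderTimestamp(t, 'America/Los_Angeles'); // what a Pacific-time client renders
// serverHtml !== clientHtml → the framework reports a hydration mismatch.
```

The fix is to make the server render deterministic: format in a fixed timezone on both sides, or defer the localized value until after mount.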
Fixing Hydration Issues
Wrap browser-only code in client checks. In React, use useEffect for browser-only logic. In Vue, use onMounted; in Nuxt, wrap browser-only markup in the <ClientOnly> component. In Next.js, use dynamic imports with ssr: false for components that cannot render on the server.
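The underlying pattern is framework-agnostic: keep the server render deterministic and compute browser-only values after mount. A sketch (viewportLabel is a hypothetical helper, not a framework API):

```javascript
// Sketch: guard browser-only APIs so the server render is deterministic.
// On the server, typeof window === 'undefined', so we return a stable placeholder;
// the real value is computed client-side (e.g. inside useEffect or onMounted),
// after hydration has already succeeded against the placeholder.
function viewportLabel() {
  if (typeof window === 'undefined') return 'unknown'; // server render path
  return window.innerWidth >= 768 ? 'desktop' : 'mobile'; // client-only path
}
```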
Test hydration by comparing the server-rendered HTML (view-source:) against the rendered DOM (DevTools Elements panel). Any differences indicate hydration mismatches that need resolution.
Client-Side Rendering: When Rankings Suffer
The CSR Problem for SEO
Single-page applications that render entirely on the client send a minimal HTML shell — typically an empty root <div> — with a JavaScript bundle that builds the DOM after loading. Googlebot must render this JavaScript to see any content.
For small sites with high authority, Googlebot renders CSR pages quickly enough that the delay is manageable. For large sites, new sites, or sites with complex JavaScript, CSR creates indexation gaps that range from annoying to catastrophic.
Diagnosing CSR Indexation Problems
In Google Search Console, use the URL Inspection tool to view the HTML Google obtained for the page, and compare it against the raw HTML response (view-source: in your browser). If the raw HTML contains no meaningful content but the rendered HTML does, your content depends entirely on JavaScript rendering.
Test with JavaScript disabled in your browser. If the page displays nothing — no headlines, no body text, no navigation — search engines see the same nothing during their initial crawl pass.
Migrating from CSR to SSR
The migration path depends on your framework:
React SPA to Next.js: Extract route-level data fetching into getServerSideProps or getStaticProps. Replace client-side routing with Next.js file-based routing. The migration is incremental — start with your highest-traffic pages.
Vue SPA to Nuxt: Move route components into the pages/ directory. Replace Vuex/Pinia client-side data loading with useAsyncData. Nuxt's migration guide covers the most common patterns.
Angular SPA to Angular Universal: Add @nguniversal/express-engine. Configure server-side module alongside client module. The architectural changes are more invasive than React or Vue equivalents.
Dynamic Rendering: The Compromise
What Dynamic Rendering Is
Dynamic rendering serves pre-rendered HTML to search engine crawlers while serving the standard JavaScript application to users. A server-side component detects the user agent, and if it matches known crawler patterns (Googlebot, Bingbot, etc.), returns a pre-rendered HTML version.
Google has historically acknowledged dynamic rendering as a valid approach, but now describes it as a workaround rather than a long-term solution and recommends SSR or static rendering instead. Dynamic rendering adds infrastructure complexity (a rendering service like Rendertron or Puppeteer) and introduces a maintenance burden: two versions of every page must produce identical content.
When Dynamic Rendering Makes Sense
Large SPAs where SSR migration is prohibitively expensive. Applications with complex client-side state that's difficult to reproduce server-side. Hybrid applications where some routes need SSR and others don't justify the investment.
Implementation with Puppeteer or Rendertron
Deploy Rendertron (Google's open-source rendering service) or a custom Puppeteer instance. Configure your web server or CDN to route crawler requests to the rendering service. Cache rendered HTML to avoid re-rendering on every crawl request.
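The user-agent routing step might look like this — the crawler pattern list and asset-extension filter are illustrative, not exhaustive:

```javascript
// Sketch: decide whether a request should be routed to the pre-rendering service.
const CRAWLER_UA = /googlebot|bingbot|yandexbot|duckduckbot|baiduspider/i;
const STATIC_ASSET = /\.(js|css|png|jpe?g|webp|svg|ico|json|xml|txt)$/i;

function shouldPrerender(userAgent, path) {
  if (STATIC_ASSET.test(path)) return false; // only pre-render HTML routes
  return CRAWLER_UA.test(userAgent || '');   // known crawler → serve rendered HTML
}
```

Regular users fall through to the standard JavaScript application; matched crawlers get the cached, pre-rendered HTML.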
The caching strategy matters: stale cache serves outdated content to crawlers, which creates indexation drift. Set cache TTLs that match your content freshness requirements — 24 hours for dynamic content, 7 days for evergreen content.
Structured Data in JavaScript Applications
Injecting JSON-LD Dynamically
Structured data should appear in the initial HTML response, not be injected by client-side JavaScript. When using SSR, include <script type="application/ld+json"> tags in the server-rendered <head>.
In Next.js, use the Head component or the App Router's metadata API. In Nuxt, use useHead composable. Both inject structured data into the server-rendered HTML before it reaches the client.
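Whichever framework injects it, the JSON-LD payload is just a serialized object built server-side. A sketch (the field set is a minimal illustration, not a complete Article schema):

```javascript
// Sketch: build an Article JSON-LD string to embed in the server-rendered <head>
// inside a <script type="application/ld+json"> tag.
function articleJsonLd({ headline, datePublished, authorName }) {
  return JSON.stringify({
    '@context': 'https://schema.org',
    '@type': 'Article',
    headline,
    datePublished,
    author: { '@type': 'Person', name: authorName },
  });
}
```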
Validating Schema After Hydration
Test structured data using Google's Rich Results Test, which renders JavaScript before analyzing. Also test with JavaScript disabled to verify the structured data exists in the raw HTML response.
Internal Linking in SPAs
The Problem with Client-Side Navigation
SPAs use client-side routing — history.pushState or hash-based routing — instead of full page loads through standard HTML links. Googlebot can follow many client-side routes, but discovery is less reliable than with standard HTML links.
The Fix: Semantic HTML Links with Client-Side Enhancement
Use standard <a> tags with valid href attributes for all internal links. Let the framework intercept these clicks for client-side navigation, but ensure the underlying HTML contains crawlable links. Next.js's <Link>, Nuxt's <NuxtLink>, and React Router's <Link> all render semantic <a> elements by default.
Verify by disabling JavaScript and clicking links. If navigation still works (because the links are real HTML), crawlers can follow them.
Lazy Loading and SEO
Images and Media
Lazy loading images improves page performance by deferring off-screen image downloads. Modern browsers support native lazy loading through the loading="lazy" attribute. For the LCP image (hero/above-the-fold), never use lazy loading — the LCP element must load immediately.
<!-- LCP image — load eagerly -->
<img src="/hero.webp" alt="description" width="1200" height="630" fetchpriority="high">
<!-- Below-fold images — lazy load -->
<img src="/photo.webp" alt="description" width="800" height="450" loading="lazy">
Googlebot does not scroll pages. Content rendered through scroll-triggered lazy loading (Intersection Observer without SSR fallback) is invisible to crawlers. For text content that must be indexed, ensure it exists in the initial HTML response. Lazy loading is appropriate for images and media, not for body content that search engines need to read.
Infinite Scroll and Pagination
Infinite scroll implementations that load content only on scroll events prevent Googlebot from discovering content below the initial viewport. The solution: implement paginated URLs alongside infinite scroll.
Each batch of content should correspond to a distinct URL (/blog/page/2, /blog/page/3). The infinite scroll experience loads these pages dynamically for users, while Googlebot discovers each page through standard pagination links.
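The mapping from scroll batches to crawlable URLs can be a single shared helper, so the client fetches exactly the pages that the HTML pagination links expose (the URL scheme is illustrative):

```javascript
// Sketch: each infinite-scroll batch corresponds to a real, crawlable page URL.
// The client fetches pageUrl(base, n) as the user scrolls; pagination links in
// the HTML point at the same URLs so Googlebot can discover every batch.
function pageUrl(basePath, page) {
  return page <= 1 ? basePath : `${basePath}/page/${page}`;
}
```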
You can include rel="next" and rel="prev" link elements in the <head> to describe the pagination relationship, though note that Google announced in 2019 that it no longer uses them as an indexing signal — crawlable pagination links in the body are what drive discovery:
<link rel="prev" href="/blog/page/1">
<link rel="next" href="/blog/page/3">
Route-Based Code Splitting
Modern frameworks split JavaScript bundles by route. This improves performance (smaller initial downloads) but can introduce SEO complications if route chunks fail to load during Googlebot's render phase.
Verify that route code splitting doesn't break SSR: each route should render completely with server-side generated HTML, and the client-side code split chunk enhances interactivity rather than providing content.
Test by viewing page source for each route and confirming that the full content appears in the HTML response, not just a loading skeleton that client JavaScript fills.
Common JavaScript SEO Mistakes
Mistake 1: Rendering Critical Content in useEffect
React's useEffect runs only on the client. Content generated inside useEffect doesn't exist in the server-rendered HTML. Googlebot may see it after client-side rendering, but with a delay and lower reliability than server-rendered content.
Move all SEO-critical content (headings, body text, structured data) out of useEffect and into the component render function or getServerSideProps/getStaticProps.
Mistake 2: Blocking Googlebot's JavaScript Files via robots.txt
If your robots.txt blocks CSS or JavaScript files that Googlebot needs to render the page, the rendered page will look broken to the crawler. Never block framework JavaScript bundles, CSS files, or critical third-party scripts in robots.txt.
Verify by using Google Search Console's URL Inspection tool and checking the "Rendered page" screenshot. If the screenshot looks broken, resource blocking is likely the cause.
Mistake 3: Using Hash-Based Routing
Hash fragment URLs (example.com/#/page) are not sent to the server — the browser handles them client-side. Googlebot treats everything after the hash as a fragment identifier and ignores it. A site using hash-based routing has effectively one URL for all pages in Google's view.
Migrate to history.pushState routing (React Router with BrowserRouter, Vue Router with history mode). This creates real URL paths that servers can respond to and Googlebot can crawl.
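You can verify the problem directly with the WHATWG URL API available in both Node and browsers — everything after the # never reaches the server:

```javascript
// Sketch: the server sees only the path; the fragment stays client-side.
const hashRouted = new URL('https://example.com/#/products/42');
const historyRouted = new URL('https://example.com/products/42');

// hashRouted.pathname is '/' — one URL for every "page" from the server's view,
// while hashRouted.hash ('#/products/42') is handled entirely in the browser.
// historyRouted.pathname is '/products/42' — a real, crawlable path.
```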
Mistake 4: Client-Side Redirect Chains
JavaScript-based redirects (window.location.href = '/new-page') don't transfer SEO signals as reliably as server-side 301 redirects. If your SPA handles redirects client-side during route changes, Googlebot may not follow the redirect chain or may experience delayed processing.
Implement redirects at the server/CDN level for all SEO-relevant URL changes. Reserve client-side redirects for in-app navigation that doesn't need search engine discovery.
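A server-level redirect can be as simple as a lookup consulted before any application code runs — the route map here is hypothetical:

```javascript
// Sketch: resolve legacy paths to 301 redirects at the server/CDN layer,
// so crawlers receive the status code directly instead of executing JavaScript.
const REDIRECTS = { '/old-page': '/new-page', '/2024-pricing': '/pricing' };

function resolveRedirect(path) {
  const location = REDIRECTS[path];
  return location ? { status: 301, location } : null; // null → serve the page normally
}
```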
Testing JavaScript SEO
Automated Testing in CI/CD
Add SEO checks to your CI/CD pipeline. For every page route, verify:
- Server-rendered HTML contains the <title> tag, meta description, and <h1>
- JSON-LD structured data is present in the raw HTML response
- Internal links use <a> tags with valid href attributes
- No hydration mismatch warnings in the console
- Core Web Vitals scores are within budget
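A check against the raw (unrendered) HTML response string — what Googlebot's first pass sees — can be sketched with simple regex heuristics; a production check would use a real HTML parser:

```javascript
// Sketch: assert SEO-critical elements exist in the raw HTML response,
// before any JavaScript runs. Regexes are heuristics, not a full parser.
function checkRawHtml(html) {
  return {
    hasTitle: /<title>[^<]+<\/title>/i.test(html),
    hasMetaDescription: /<meta[^>]+name=["']description["']/i.test(html),
    hasJsonLd: /<script[^>]+type=["']application\/ld\+json["']/i.test(html),
    hasH1: /<h1[\s>]/i.test(html),
  };
}
```

Wire a function like this into CI against a fetch of each route's raw HTML, and fail the build if any flag is false.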
Manual Testing Checklist
- View page source (not DevTools Elements) — does the HTML contain visible content?
- Disable JavaScript — does the page display meaningful content?
- Google Search Console URL Inspection — does the rendered page match expectations?
- Google Rich Results Test — is structured data valid?
- Mobile device testing — does content render properly on constrained devices?
Frequently Asked Questions
Does Google fully render JavaScript now?
Google's WRS renders JavaScript using a recent Chromium version, supporting modern ES6+ syntax, Promises, async/await, and the Fetch API. However, rendering is delayed (seconds to days) compared to immediate HTML parsing. Content in the initial HTML response is always indexed faster and more reliably than JavaScript-rendered content.
Should I use SSR or SSG for SEO?
Use Static Site Generation (SSG) for content that changes infrequently — blog posts, documentation, product pages. Use Server-Side Rendering (SSR) for content that changes frequently — search results pages, user-generated content listings, real-time data. SSG produces faster response times; SSR produces fresher content.
Is React worse for SEO than Vue or Angular?
No framework is inherently worse for SEO. The rendering strategy matters, not the framework. A React application with proper SSR via Next.js performs identically to a Vue application with Nuxt SSR or an Angular application with Universal. The failure mode is client-side-only rendering, regardless of framework.
How do I handle lazy-loaded content for SEO?
Content that loads on scroll events is invisible to Googlebot because the crawler doesn't scroll. Use intersection observer with an SSR fallback: render the content in the initial HTML (visible to crawlers) and lazy-load the interactive version for users. Alternatively, ensure all critical content loads without user interaction.
Do I need dynamic rendering if I already have SSR?
No. Dynamic rendering is a workaround for applications that cannot implement SSR. If your application already serves complete HTML via SSR, dynamic rendering adds unnecessary complexity. The two approaches solve the same problem — making content visible without client-side JavaScript execution.
When This Approach Isn't Right
This guidance may not fit if:
- You're brand new to SEO. Some frameworks here assume working knowledge of crawling, indexing, and ranking fundamentals. Start with the basics first — this article builds on them.
- Your site has fewer than 50 indexed pages. Heavier investments described here (like SSR migration or dynamic rendering infrastructure) need a meaningful content base to pay off. Focus on content creation before re-architecting rendering.
- You're working on a site with active penalties. Manual actions require a different playbook. Resolve the penalty first, then apply these optimization frameworks.