
Robots.txt in Next.js and React: Setup Guide

Last updated: April 2026

Table of Contents

  1. Next.js App Router (robots.ts)
  2. Next.js Pages Router (static file)
  3. Pure React (CRA / Vite)
  4. Environment-aware robots.txt
  5. Validate after deploy
  6. Frequently Asked Questions

React and Next.js handle robots.txt differently than traditional sites. There's no static file you just drop in a folder and call it done — or rather, there is, but it's not always the right approach. Here's how to set it up properly depending on which framework and routing model you're using.

Next.js App Router: Use robots.ts for Dynamic Generation

In Next.js 13+ with the App Router, the cleanest approach is creating a robots.ts (or robots.js) file in your app/ directory. Next.js automatically serves it at /robots.txt:

// app/robots.ts
import { MetadataRoute } from 'next'

export default function robots(): MetadataRoute.Robots {
  return {
    rules: {
      userAgent: '*',
      allow: '/',
      disallow: ['/admin/', '/api/', '/account/'],
    },
    sitemap: 'https://yoursite.com/sitemap.xml',
  }
}

This outputs a valid robots.txt file at yoursite.com/robots.txt. The advantage is that you can pull environment variables into it — useful if you want to disallow everything on staging environments and allow everything on production.
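For reference, the robots.ts above should produce output roughly equivalent to this (the exact header casing may differ slightly between Next.js versions):

```text
User-agent: *
Allow: /
Disallow: /admin/
Disallow: /api/
Disallow: /account/

Sitemap: https://yoursite.com/sitemap.xml
```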

Next.js Pages Router: Static File in /public

If you're using the Pages Router (pre-App Router), create a static file at public/robots.txt. Next.js serves everything in the public folder at the root URL, so public/robots.txt becomes yoursite.com/robots.txt automatically.

User-agent: *
Disallow: /admin/
Disallow: /api/
Allow: /api/public/

Sitemap: https://yoursite.com/sitemap.xml

This is simple and reliable. The downside is it's static — same content in every environment unless you have a build step that modifies it. For most production sites that's fine.


Create React App and Vite: Put It in the Public Folder

For non-Next.js React apps (Create React App, Vite, etc.), the approach is the same as Pages Router: drop a static robots.txt file in the public/ folder. The build process copies everything in public to the output directory, and your web server (or CDN) serves it at /robots.txt.

One caveat: React SPAs handle all routing client-side, so a crawler's initial request returns a single HTML shell with no per-page content. For SEO on React SPAs, robots.txt configuration matters less than ensuring your pages are server-side rendered or statically generated; robots.txt can't fix the underlying crawlability problem of a JavaScript-only app.

If your React app has meaningful SEO requirements, move to Next.js or use static site generation. A correct robots.txt won't make content indexable if that content only appears after JavaScript runs.

Block Staging Environments with Environment-Aware Robots.txt

A common request: block all crawlers on staging, allow on production. In Next.js App Router, you can do this cleanly:

// app/robots.ts
export default function robots() {
  const isProduction = process.env.NODE_ENV === 'production'
  return {
    rules: {
      userAgent: '*',
      disallow: isProduction ? ['/admin/', '/account/'] : ['/'],
    },
    sitemap: isProduction
      ? 'https://yoursite.com/sitemap.xml'
      : undefined,
  }
}

On staging, this outputs Disallow: /, blocking everything. On production, it outputs your real rules. One caveat: next build sets NODE_ENV to 'production' regardless of where you deploy, so if your staging environment runs a production build, key off a deployment-specific variable instead (for example VERCEL_ENV on Vercel, or a custom variable you set per environment).

For static robots.txt files, use a build-time script or an environment-specific file swap (robots.staging.txt, robots.production.txt) in your CI/CD pipeline instead.
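A minimal sketch of the file-swap approach (the robots.staging.txt / robots.production.txt filenames and the DEPLOY_ENV variable are illustrative, not a framework convention):

```shell
# Build-time swap for static robots.txt files: copy the file for the
# target environment into public/ before the build runs.
swap_robots() {
  env="${1:?usage: swap_robots staging|production}"
  cp "robots.${env}.txt" public/robots.txt
  echo "Installed robots.${env}.txt as public/robots.txt"
}

# In CI you might call: swap_robots "$DEPLOY_ENV" before the build step.
```

Run it as the first step of your CI build so the correct file is in public/ when the bundler copies that folder to the output directory.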

Always Validate After Deploying

After any robots.txt change in a Next.js or React app, fetch the live /robots.txt URL and confirm the output matches what you expect before moving on.

A common mistake is updating the robots.ts file but deploying to an environment where an old static file still exists, or vice versa. Check the live URL directly; don't assume the code change made it through correctly.


Frequently Asked Questions

Does Next.js create a robots.txt automatically?

Only if you add a robots.ts file in the app/ directory (App Router) or a robots.txt file in the public/ directory (Pages Router). There is no auto-generated default.

Where exactly does robots.txt go in a Next.js project?

App Router: app/robots.ts. Pages Router: public/robots.txt. Both result in the file being served at /robots.txt.

Can I use middleware to generate robots.txt in Next.js?

Yes, but robots.ts is simpler and is the recommended approach; reach for middleware only if you need request-specific logic.
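If you do need request-level control, a minimal sketch using Next.js middleware might look like this (middleware.ts at the project root; the hostname check and rules are illustrative):

```typescript
// middleware.ts — intercept requests to /robots.txt and answer directly.
import { NextResponse } from 'next/server'
import type { NextRequest } from 'next/server'

export function middleware(request: NextRequest) {
  // Example: serve a restrictive file on a staging hostname (illustrative).
  const body =
    request.nextUrl.hostname === 'staging.yoursite.com'
      ? 'User-agent: *\nDisallow: /\n'
      : 'User-agent: *\nDisallow: /admin/\n'
  return new NextResponse(body, {
    headers: { 'Content-Type': 'text/plain' },
  })
}

// Only run this middleware for the robots.txt path.
export const config = { matcher: '/robots.txt' }
```

Unlike robots.ts, which is generated at build time, this responds per request, so it can vary by hostname, header, or any other request property.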

Does a React SPA need a robots.txt?

Yes, even if your SEO is limited. It establishes baseline crawl rules and includes your sitemap reference. But robots.txt won't fix SEO problems caused by client-side-only rendering.

How do I test my Next.js robots.txt before deploying?

Run the dev server and visit localhost:3000/robots.txt; the output should match what you'd see in production. For environment-specific rules, set the relevant environment variable locally and test again.
