
Crawl-Delay in Robots.txt: What It Actually Does

Last updated: April 2026

Table of Contents

  1. How crawl-delay works
  2. Why Google ignores crawl-delay
  3. Which crawlers honor crawl-delay
  4. When crawl-delay is worth using
  5. Crawl-delay vs. crawl budget
  6. Frequently asked questions

Crawl-delay is a robots.txt directive that tells web crawlers to wait a set number of seconds between requests. It's supposed to reduce server load from aggressive crawlers. There's a major catch: Google doesn't honor it. Here's what crawl-delay actually does, which bots respect it, and when it's worth adding to your robots.txt.

How Crawl-Delay Works

The crawl-delay directive specifies a minimum number of seconds a crawler should wait between fetching pages from your site:

User-agent: *
Crawl-delay: 10

This tells crawlers to wait at least 10 seconds between each request. A value of 1 means one request per second. A value of 30 means one request every 30 seconds. It's a throttle designed to prevent aggressive bots from overloading your server.

The directive goes inside a User-agent block. You can set different delays for different bots, or use the wildcard (*) to apply a default to all crawlers:

User-agent: Bingbot
Crawl-delay: 5

User-agent: *
Crawl-delay: 10
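You can check how a compliant crawler would read these rules with Python's standard-library robots.txt parser, which exposes crawl-delay directly. A minimal sketch using the snippet above (the bot names are just examples):

```python
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: Bingbot
Crawl-delay: 5

User-agent: *
Crawl-delay: 10
"""

rp = RobotFileParser()
rp.modified()  # mark the rules as freshly fetched so crawl_delay() answers
rp.parse(rules.splitlines())

# Bingbot matches its own block; any other agent falls back to the wildcard.
print(rp.crawl_delay("Bingbot"))     # 5
print(rp.crawl_delay("ExampleBot"))  # 10
```

This is also a quick way to sanity-check a robots.txt file before deploying it: a typo in the directive name silently yields `None` instead of a delay.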

Why Google Ignores Crawl-Delay

Google has explicitly stated that Googlebot does not support the crawl-delay directive. There is also no longer a manual override: the "Crawl rate" limiter in Google Search Console was retired in January 2024. Googlebot now sets its own crawl rate automatically based on how your site responds, and the supported way to slow it down is to return 500, 503, or 429 status codes when your server is overloaded.

This is a significant limitation. Google is the highest-volume crawler on most sites, so if your concern is server load from Google's crawler specifically, a robots.txt crawl-delay does nothing.

If you need to slow Googlebot down, serving 503 or 429 responses works: Googlebot reduces its crawl rate within about a day of seeing them. It is a blunt instrument, though. If those status codes persist for more than a few days, Google may start dropping the affected URLs from its index, so only send them during genuine overload, and keep in mind that any crawl-rate reduction slows how quickly new content gets indexed.
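At the application level, you can signal overload yourself, since Googlebot backs off when it receives 500, 503, or 429 responses. Below is a hypothetical WSGI middleware sketch; the `is_overloaded` callable and the 120-second Retry-After are assumed placeholders you would wire to your own load monitoring:

```python
def throttle_crawlers(app, is_overloaded):
    """Wrap a WSGI app; answer Googlebot with 503 while the server is overloaded.

    `is_overloaded` is a hypothetical zero-argument callable backed by your
    own monitoring (load average, queue depth, and so on).
    """
    def middleware(environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")
        if "Googlebot" in ua and is_overloaded():
            start_response(
                "503 Service Unavailable",
                [("Retry-After", "120"),  # arbitrary example back-off hint
                 ("Content-Type", "text/plain")],
            )
            return [b"Temporarily overloaded; please retry later.\n"]
        # Everyone else (and Googlebot under normal load) passes through.
        return app(environ, start_response)
    return middleware
```

Because sustained 503s can push URLs out of the index, a sketch like this should only fire during real capacity problems, not as a permanent throttle.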


Which Crawlers Honor Crawl-Delay

Crawl-delay support varies by crawler:

| Crawler | Respects crawl-delay? |
| --- | --- |
| Googlebot | No |
| Bingbot | Yes |
| Yandex | Yes |
| DuckDuckBot | Yes |
| Baidu Spider | Partial |
| SEO crawlers (Ahrefs, Semrush) | Most do |
| Rogue scrapers | No |

The crawlers that respect it tend to be the well-behaved, legitimate ones. The problematic aggressive scrapers usually ignore it. This limits the practical value of crawl-delay for protecting server resources from bad actors.

When Crawl-Delay Is Actually Worth Using

Crawl-delay has real utility in a few specific scenarios:

Bing traffic on shared hosting: If your site runs on shared hosting with limited server resources and Bingbot crawls aggressively, setting Crawl-delay: 5 for Bingbot can reduce server strain without blocking it entirely.

Legitimate SEO crawlers: Tools like Ahrefs and Semrush honor crawl-delay. If you're running a site where you've noticed SEO tool crawlers causing load spikes, crawl-delay helps.

Staging environments: On staging sites, you might add a high crawl-delay to slow down any crawler that ignores your Disallow: / rule (legitimate crawlers will honor either the delay or the Disallow).
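A staging robots.txt combining both defenses might look like this (the 30-second delay is an arbitrary high value):

User-agent: *
Disallow: /
Crawl-delay: 30

Crawlers that honor robots.txt stop at the Disallow; a crawler that only partially complies at least gets throttled.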

For production sites concerned about server load from Googlebot specifically, skip the crawl-delay directive; Googlebot ignores it. Googlebot throttles itself based on how quickly your server responds, and returning 503 or 429 during overload is the supported way to slow it down further.

Crawl-Delay vs. Crawl Budget — Not the Same Thing

Crawl budget is the total number of pages Google will crawl on your site in a given time period. It's determined by your site's quality, size, and server performance — not by crawl-delay.

Crawl-delay affects crawl frequency (how fast a crawler makes requests). Crawl budget affects crawl coverage (how many pages get crawled). These are separate concerns with separate solutions.

If you're worried about crawl budget — for example, Google isn't crawling your new pages fast enough — the solution is improving site speed, reducing duplicate content, and ensuring your important pages are well-linked. A crawl-delay can't help here: Googlebot ignores it entirely, and for the crawlers that do honor it, a delay only slows down how quickly your pages get visited.


Frequently Asked Questions

What value should I set for crawl-delay?

For Bingbot and cooperative crawlers on most hosting: 1-5 seconds. On shared hosting with limited resources: 5-10. Values over 30 will significantly slow indexing and are rarely justified.

Can crawl-delay improve my Google rankings?

No. Google ignores it, and it has no direct ranking effect. If anything, slower crawling delays new content from being indexed, which is a negative.

What's the maximum crawl-delay value?

There's no official maximum, but very high values (60+) are generally ignored or treated as "crawl as infrequently as possible" by supporting crawlers. Practical maximum is around 30 seconds.

Does crawl-delay stop aggressive scrapers?

No. Malicious scrapers don't follow robots.txt at all. Crawl-delay only affects crawlers that choose to respect it. For actual scraper protection, use rate limiting at the server or CDN level.
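Server-level limits are enforced whether or not a client ever reads robots.txt. As one illustration, nginx's limit_req module can cap each client IP at roughly one request per second; the zone name, sizes, and backend address below are arbitrary placeholders:

```nginx
# Shared state keyed by client IP: 10 MB of counters, 1 request/sec allowed.
limit_req_zone $binary_remote_addr zone=perip:10m rate=1r/s;

server {
    listen 80;

    location / {
        # Permit short bursts of 5 requests, then reject the excess with 429.
        limit_req zone=perip burst=5 nodelay;
        limit_req_status 429;
        proxy_pass http://127.0.0.1:8080;  # placeholder backend
    }
}
```

Unlike crawl-delay, this applies equally to well-behaved bots, scrapers, and regular visitors, so tune the rate to your real traffic before enabling it.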

Can I set different crawl-delays for different sections of my site?

No. Crawl-delay is set per User-agent, not per path. You can't say "crawl /api/ slowly but /blog/ fast" — it applies to the entire site for that user-agent.
