Gecko Robots.txt Generator

Build a valid robots.txt file with crawl rules, sitemaps, and common presets. Preview live, then copy or download.

Quick Presets

One-click presets cover the most common rules: Block /admin/, Block /api/, Block /private/, Block /tmp/, Block /cgi-bin/, Allow all, and Block all.


Create a robots.txt file for your website in seconds. Choose your user-agent, add allow/disallow rules with common presets, include sitemap URLs, and set a crawl-delay. Preview the output live, then copy or download the file. No signup, no server processing — everything runs in your browser.
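For example, a generated file that blocks an admin area for every crawler, carves out one exception, sets a crawl-delay, and lists a sitemap could look like this (the domain and paths are illustrative):

    User-agent: *
    Disallow: /admin/
    Allow: /admin/public/
    Crawl-delay: 10

    Sitemap: https://example.com/sitemap.xml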

What is a robots.txt file?

A robots.txt file tells search engine crawlers which pages or sections of your site they can or cannot request. It sits at the root of your domain (e.g., example.com/robots.txt) and is one of the first files crawlers check before indexing your site.
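For instance, this minimal file (path illustrative) lets every crawler fetch everything except one directory:

    User-agent: *
    Disallow: /private/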

Does robots.txt block pages from appearing in Google?

Not exactly. Robots.txt tells crawlers not to crawl a page, but if other sites link to that page, Google may still index the URL (without content). To fully prevent indexing, use a noindex meta tag or X-Robots-Tag header instead.
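For example, the meta tag goes in a page's HTML head, while the X-Robots-Tag header can be sent with any response, including non-HTML files such as PDFs:

    <meta name="robots" content="noindex">

    X-Robots-Tag: noindex

Either signal only works if crawlers can actually fetch the page, so don't also block that page in robots.txt.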

What is crawl-delay in robots.txt?

Crawl-delay tells bots to wait a specified number of seconds between requests. This can reduce server load from aggressive crawlers. Note: Googlebot ignores crawl-delay — use Google Search Console's crawl rate settings instead. Bing and Yandex respect it.
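For example, a group that asks Bing's crawler to pause ten seconds between requests (the delay value is illustrative):

    User-agent: Bingbot
    Crawl-delay: 10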

Should I add my sitemap to robots.txt?

Yes. Adding a Sitemap directive to your robots.txt helps search engines discover your XML sitemap automatically. You can list multiple sitemaps. This is especially useful for new sites or sites with complex structures.
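Sitemap lines take absolute URLs and can appear anywhere in the file, independent of any user-agent group; for example (URLs illustrative):

    Sitemap: https://example.com/sitemap.xml
    Sitemap: https://example.com/blog/sitemap.xml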
