Build a valid robots.txt file with crawl rules, sitemaps, and common presets. Preview live, then copy or download.
Create a robots.txt file for your website in seconds. Choose your user-agent, add allow/disallow rules with common presets, include sitemap URLs, and set a crawl-delay. Preview the output live, then copy or download the file. No signup, no server processing — everything runs in your browser.
A robots.txt file tells search engine crawlers which pages or sections of your site they may or may not request. It sits at the root of your domain (e.g., example.com/robots.txt) and is one of the first files a crawler checks before crawling your site.
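A minimal robots.txt might look like this (the paths and sitemap URL here are placeholders for illustration):

```text
# Applies to all crawlers
User-agent: *
Disallow: /admin/
Allow: /

Sitemap: https://example.com/sitemap.xml
```

Rules are grouped by User-agent, and each Disallow or Allow line applies to URL paths relative to the domain root.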
Not exactly. Robots.txt tells crawlers not to crawl a page, but if other sites link to that page, Google may still index the URL (without its content). To reliably prevent indexing, use a noindex meta tag or an X-Robots-Tag header instead, and make sure the page is not blocked in robots.txt, since crawlers must be able to fetch the page to see the noindex directive.
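For reference, the two noindex mechanisms look like this (one goes in the page's HTML, the other is sent as an HTTP response header):

```text
<!-- In the page's <head> -->
<meta name="robots" content="noindex">

# Or as an HTTP response header
X-Robots-Tag: noindex
```

The header form is useful for non-HTML resources such as PDFs, where a meta tag is not possible.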
Crawl-delay tells bots to wait a specified number of seconds between requests, which can reduce server load from aggressive crawlers. Note: Googlebot ignores crawl-delay and manages its crawl rate automatically, but Bing and Yandex respect the directive.
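A crawl-delay rule is set per user-agent. For example, to ask Bing's crawler to wait 10 seconds between requests:

```text
User-agent: bingbot
Crawl-delay: 10
```

The value is interpreted in seconds by the crawlers that honor it.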
Yes. Adding a Sitemap directive to your robots.txt helps search engines discover your XML sitemap automatically. You can list multiple sitemaps, and each must be a full absolute URL. This is especially useful for new sites or sites with complex structures.
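Multiple sitemaps are listed one per line, anywhere in the file (the filenames below are illustrative):

```text
Sitemap: https://example.com/sitemap-pages.xml
Sitemap: https://example.com/sitemap-posts.xml
```

Sitemap directives are not tied to any User-agent group, so they apply regardless of where they appear in the file.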