How to Set Up a Custom Robots.txt in Blogger
Blogger generates its own robots.txt automatically. The auto-generated version is functional but generic — it doesn't know about your specific site structure or which pages you want kept out of search. Blogger also provides a custom robots.txt option buried in settings that almost nobody knows about. Here's how to use it.
Where to Find Blogger's Custom Robots.txt Setting
In the Blogger dashboard, go to Settings (left sidebar). Scroll down to the "Crawlers and indexing" section. You'll see "Enable custom robots.txt" — toggle it on. A text box appears where you enter your robots.txt content.
When you enable this, Blogger replaces its auto-generated robots.txt entirely with whatever you enter. So you need to include all the rules yourself — you can't just add to the defaults, you're replacing them.
The setting is also sometimes found under Settings > Search preferences in older Blogger interfaces. Google has reorganized these settings a few times, so if you don't see it immediately, look under Search Preferences or Basic settings.
Blogger's Default Robots.txt (What You're Replacing)
Before enabling custom robots.txt, check what Blogger currently generates at yourblog.blogspot.com/robots.txt (or your custom domain). The default typically includes:
```
User-agent: Mediapartners-Google
Disallow:

User-agent: *
Disallow: /search
Allow: /

Sitemap: https://yourblog.blogspot.com/sitemap.xml
```
The Mediapartners-Google block addresses Google's AdSense crawler. Its empty Disallow line allows that crawler everywhere, so AdSense can crawl your content for ad targeting. Keep this block in your custom version if you run AdSense; removing it can hurt AdSense performance.
The Disallow: /search rule keeps crawlers out of Blogger's built-in search results pages, which are thin, duplicate-content pages. Keep this rule too.
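You can sanity-check how these default rules behave with Python's standard-library robots.txt parser. A minimal sketch (the blogspot domain and the SomeBot name are placeholders):

```python
from urllib.robotparser import RobotFileParser

# Blogger's default rules, fed to the parser as a list of lines
rules = """\
User-agent: Mediapartners-Google
Disallow:

User-agent: *
Disallow: /search
Allow: /
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

base = "https://yourblog.blogspot.com"  # placeholder domain
print(rp.can_fetch("SomeBot", base + "/search?q=test"))        # False: search pages blocked
print(rp.can_fetch("SomeBot", base + "/2024/01/post.html"))    # True: posts allowed
print(rp.can_fetch("Mediapartners-Google", base + "/search"))  # True: AdSense crawler exempt
```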
A Good Starting Custom Robots.txt for Blogger
Here's a solid baseline for most Blogger sites, adapted from the defaults:
```
User-agent: Mediapartners-Google
Disallow:

User-agent: *
Disallow: /search
Allow: /

Sitemap: https://yourblog.com/sitemap.xml
```
Replace yourblog.com with your actual domain. If you're still on blogspot.com, use your full blogspot URL for the Sitemap line.
From here you can add additional Disallow rules. Common additions for Blogger:
- Disallow: /search?q= — blocks search query URLs specifically (redundant if you keep Disallow: /search, since robots.txt rules match by URL prefix)
- Disallow: /p/ — blocks static pages from your custom page URLs if you don't want them indexed (less common)
Don't add rules you don't specifically need. More Disallow lines isn't better — it's just more things that can go wrong.
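Putting it together, a custom robots.txt with one of the optional additions might look like this. The domain and the Disallow: /p/ line are illustrative; keep only the rules your site actually needs:

```
User-agent: Mediapartners-Google
Disallow:

User-agent: *
Disallow: /search
Disallow: /p/
Allow: /

Sitemap: https://yourblog.com/sitemap.xml
```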
Blogger's Custom Robots Header Tags (Separate Feature)
Blogger also has "Custom robots header tags" in the same Settings section. This is different from the robots.txt file — these control the meta robots tags on individual page types.
You can set per-page-type instructions for Homepage, Archive pages, Search results pages, and Error pages. Common configuration:
- Homepage: all (no restrictions) — you want your homepage indexed
- Archive pages: noindex — monthly/yearly archives are thin duplicate content
- Search pages: noindex — robots.txt already blocks /search from being crawled, so crawlers may never fetch these pages to see the tag, but setting it is harmless
- Error page: noindex — don't index 404 pages
These header tags are separate from robots.txt. Both can be configured independently. The robots.txt controls crawling; the header tags control indexing.
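For reference, a page type set to noindex ends up with a robots meta tag in its HTML head, along these general lines (the exact markup Blogger emits may vary by template):

```html
<meta content='noindex' name='robots'/>
```

This is why the two features don't conflict: the tag lives in the page itself and is only seen by crawlers that are allowed to fetch the page.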
How to Check Your Blogger Robots.txt Is Working
After saving the custom robots.txt, check your live URL immediately: yourblog.com/robots.txt. It should show exactly what you entered. If it still shows the default, try clearing cache or waiting a few minutes for Blogger to process the change.
Then check that your Sitemap URL works: visit the URL you put in the Sitemap line and confirm it loads a valid XML sitemap. If the sitemap URL is broken, fix it before leaving the settings page.
Finally, check Google Search Console if your blog is verified. Submit the sitemap from your robots.txt there as well, and use URL Inspection to confirm your main posts are crawlable.
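If you want to script part of this check, here's a small Python sketch (the helper name and sample text are illustrative). Fetch your live /robots.txt with any HTTP client and pass the body through it to pull out the Sitemap URLs:

```python
import re

def extract_sitemaps(robots_txt: str) -> list[str]:
    """Return every URL listed on a 'Sitemap:' line (case-insensitive)."""
    return [m.group(1) for m in re.finditer(r"(?im)^sitemap:\s*(\S+)", robots_txt)]

sample = """\
User-agent: *
Disallow: /search
Allow: /

Sitemap: https://yourblog.com/sitemap.xml
"""
print(extract_sitemaps(sample))  # ['https://yourblog.com/sitemap.xml']
```

From there, fetching each extracted URL and confirming an HTTP 200 with an XML body tells you the Sitemap line points somewhere valid.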
Frequently Asked Questions
Does Blogger automatically create a robots.txt?
Yes. Every Blogger site has a robots.txt at /robots.txt with basic defaults. You can replace it with a custom version via Settings > Crawlers and indexing.
Should I enable custom robots.txt in Blogger?
Only if you need to add custom rules beyond Blogger's defaults. If the defaults look correct for your site, leaving custom robots.txt disabled is fine.
What happens if I leave the custom robots.txt blank?
If you enable it with blank content, your robots.txt file will be empty — which means no restrictions for any crawler. Make sure to add content before saving.
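You can confirm that behavior with Python's standard parser: with no rules at all, every crawler is allowed everywhere (the bot name and URL are placeholders):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([])  # an empty robots.txt: no rules at all

# With no entries parsed, access defaults to allowed for every crawler
print(rp.can_fetch("AnyBot", "https://yourblog.com/any/page"))  # True
```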
Will a custom robots.txt help my Blogger blog rank better?
Not directly. Robots.txt controls which pages get crawled, which indirectly helps by preventing duplicate thin pages from competing with your real content. The Disallow: /search rule is the most useful default.
Can I use wildcards in Blogger's custom robots.txt?
Yes. Blogger's custom robots.txt supports the same * and $ wildcards as standard robots.txt. The content you enter is treated as a standard robots.txt file.
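To see how that wildcard matching works, here's a rough sketch that converts a robots.txt pattern into a regex the way Google-style matchers interpret it: * matches any run of characters, and a trailing $ anchors the pattern to the end of the URL. The function name is just for illustration:

```python
import re

def pattern_to_regex(pattern: str) -> re.Pattern:
    """Translate a Google-style robots.txt path pattern into a regex."""
    anchored = pattern.endswith("$")           # trailing $ anchors to URL end
    core = pattern[:-1] if anchored else pattern
    body = "".join(".*" if ch == "*" else re.escape(ch) for ch in core)
    return re.compile("^" + body + ("$" if anchored else ""))

print(bool(pattern_to_regex("/search*").match("/search?q=seo")))      # True
print(bool(pattern_to_regex("/*.pdf$").match("/files/guide.pdf")))    # True
print(bool(pattern_to_regex("/*.pdf$").match("/files/guide.pdf?x")))  # False: $ blocks the query string
```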

