Robots.txt in Shopify: What You Can Actually Customize
Shopify generates a robots.txt file for every store automatically. The catch: for years, you couldn't touch it. That changed in 2021 — but the controls are limited and buried. Here's what you actually can and can't do, and what matters for your store's SEO.
What Shopify Puts in Your Robots.txt by Default
Every Shopify store ships with a robots.txt that blocks crawlers from internal paths you don't want indexed: checkout, cart, search results, account pages, admin. This protects your crawl budget from thin or duplicate pages that have no business ranking in Google.
The default file looks something like this:
```
User-agent: *
Disallow: /admin
Disallow: /cart
Disallow: /orders
Disallow: /checkouts
Disallow: /account
Disallow: /search
Disallow: /apple-pay-session

Sitemap: https://yourstore.myshopify.com/sitemap.xml
```
Your sitemap URL is added automatically. So the basics are handled. The problem is when you need to block something custom — a staging collection, a discount URL pattern, or an internal tool — and the default file doesn't cover it.
How to Edit robots.txt in Shopify (Liquid Method)
Starting with Online Store 2.0 themes (Dawn and later), you can override robots.txt with a Liquid template. In your Shopify admin, go to Online Store > Themes, click the three dots next to the theme, choose Edit code, then under the Templates folder click Add a new template and select robots. This creates a file named robots.txt.liquid.
Once that template exists, Shopify serves its rendered output instead of the auto-generated file. You get full control, but also full responsibility: if your template drops the default rules, you unblock paths that should stay blocked.
The safest approach is to keep the Liquid that Shopify pre-fills when you create the template — a loop over robots.default_groups that reproduces all the default rules — and append your custom rules inside it. Pasting a static copy of your live robots.txt (from yourstore.com/robots.txt) works too, but it freezes the sitemap URL and won't pick up defaults Shopify adds later. Either way, test by loading the URL after saving.
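As a sketch of what that template can look like: the version below renders every default group untouched, then appends one custom rule to the catch-all group. The objects used here (robots.default_groups, group.user_agent, group.rules, group.sitemap) are Shopify's documented Liquid objects for this template; the /pages/password rule is just an illustration.

```liquid
{%- comment -%}
  Render every default group (user-agent, rules, sitemap) so
  Shopify's built-in protections stay intact, then append a
  custom Disallow to the catch-all (*) group.
{%- endcomment -%}
{% for group in robots.default_groups %}
  {{- group.user_agent }}

  {%- for rule in group.rules %}
  {{ rule }}
  {%- endfor -%}

  {%- if group.user_agent.value == '*' %}
  {{ 'Disallow: /pages/password' }}
  {%- endif -%}

  {%- if group.sitemap != blank %}
  {{ group.sitemap }}
  {%- endif %}
{% endfor %}
```

Because the defaults come from robots.default_groups rather than hardcoded text, anything Shopify adds to its defaults later flows through automatically.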
Older themes (pre-2.0) can't use this method at all. If you're on an older theme, your only option is an app like "Robots.txt & Noindex Manager" or upgrading the theme.
What's Worth Blocking (and What Isn't)
Paths worth adding to your custom robots.txt beyond Shopify defaults:
- /collections/?sort_by= — Sort filter URLs create duplicate collection pages
- /products? — Faceted search parameters if you use a filter app
- /pages/password — Password pages shouldn't appear in search
- /discount/ — Discount code landing pages
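As a sketch, assuming your robots.txt.liquid keeps Shopify's default robots.default_groups loop, these rules can be appended inside the catch-all group. Note that Google matches robots.txt rules as URL prefixes with * wildcards, so the sort rule needs a wildcard to catch /collections/some-handle?sort_by=...:

```liquid
{%- comment -%}
  Place this inside the loop over robots.default_groups,
  after the loop that outputs group.rules.
{%- endcomment -%}
{%- if group.user_agent.value == '*' %}
  {{ 'Disallow: /collections/*sort_by=*' }}
  {{ 'Disallow: /products?' }}
  {{ 'Disallow: /pages/password' }}
  {{ 'Disallow: /discount/' }}
{%- endif %}
```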
What you should NOT block:
- Product pages — your core ranking assets
- Collection pages — important for category SEO
- Blog posts — content you want to rank
- Your sitemap — this should stay accessible
A common mistake is blocking /collections/ entirely to handle duplicates instead of blocking just the sort parameters. That kills your category rankings. Be surgical.
How to Test Before You Break Anything
Before saving changes live, paste your robots.txt content into a robots.txt generator or checker that does rule validation. It'll flag syntax errors — misplaced Disallow lines, missing User-agent headers, or broken wildcard patterns — before Google sees them.
After saving, check your live URL immediately: yourstore.com/robots.txt. The page should load the updated content within a minute or two. Then check the robots.txt report in Google Search Console (under Settings) to confirm Google is fetching the version you expect — the old standalone robots.txt Tester has been retired.
Changes to robots.txt take time to propagate — Googlebot won't see your update until its next crawl of that file. For urgent blocks (like removing a page from the index), a noindex meta tag on the page itself works faster than robots.txt.
Robots.txt vs. Noindex — Which to Use in Shopify
These do different things. Robots.txt tells crawlers not to visit a URL. Noindex lets them crawl the page but tells them not to include it in search results. The difference matters:
If you want to remove a page from Google's index, noindex wins. A disallowed URL can still rank if it has external links pointing to it — Google knows the page exists even if it can't crawl it.
If you want to save crawl budget (stop Googlebot from wasting time on thousands of filter URLs), robots.txt Disallow is the right tool. Noindex doesn't stop crawling.
For most Shopify SEO situations: use noindex for thin pages you've already indexed and want removed, use robots.txt to block entire URL patterns before they get crawled in the first place.
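For the noindex side, here's a minimal sketch assuming a standard theme where you can edit the head of layout/theme.liquid; the template check is a common pattern for targeting internal search pages, and you'd adjust the condition to whatever thin pages you're removing:

```liquid
{%- comment -%}
  In the <head> of layout/theme.liquid: ask engines not to index
  internal search results while still following links on the page.
{%- endcomment -%}
{%- if template contains 'search' -%}
  <meta name="robots" content="noindex, follow">
{%- endif -%}
```

Remember that noindex only works if the page stays crawlable — if the same URL is also disallowed in robots.txt, Googlebot never sees the tag.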
Frequently Asked Questions
Can I edit robots.txt on Shopify without a developer?
Yes, if you're on an Online Store 2.0 theme (Dawn or similar). Go to the theme code editor and create a robots.txt.liquid template. It's text-based — no coding knowledge needed, but you do need to be careful not to block important paths.
Does blocking /search in robots.txt hurt Shopify SEO?
No. Internal search result pages are thin, duplicate content. Blocking them is standard practice and protects your crawl budget for pages that actually matter.
What happens if my Shopify robots.txt has a syntax error?
Googlebot is fairly forgiving but may misinterpret your rules. Always validate your file after editing using a robots.txt checker before publishing.
Can robots.txt remove a Shopify page from Google?
Not reliably. Blocking a URL stops Googlebot from crawling it but won't remove it from the index if it's already there or if other sites link to it. Use noindex for removal.
How long does it take for Shopify robots.txt changes to take effect?
Your live robots.txt updates immediately after you save. But Googlebot only re-crawls the file periodically — changes may take days to weeks before affecting your crawl behavior.

