noindex and nofollow Meta Tags — What They Mean and When to Use Each
noindex and nofollow are two of the most commonly confused directives in SEO. They do completely different things, and using the wrong one — or combining them incorrectly — can cause search ranking problems that take months to diagnose.
This guide explains exactly what each directive does, the specific situations where each is appropriate, and how to add them to any page in under a minute.
What noindex Does — and When to Use It
A noindex directive tells search engines not to include a page in their search index. The page can still be crawled and linked to — crawlers will visit it, read the directive, and then remove the URL from (or prevent it from entering) the search index. Users with the direct URL can still access the page.
Add it as a meta tag in the head element: <meta name="robots" content="noindex">
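For context, here is a minimal page sketch showing where the tag goes (the title is a placeholder). The tag must sit inside the head element; robots meta tags placed in the body are ignored:

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Thank You for Your Order</title>
  <!-- Tells compliant crawlers to drop this URL from their index -->
  <meta name="robots" content="noindex">
</head>
<body>
  <!-- page content -->
</body>
</html>
```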
Use noindex on:
- Thank you pages — After form submissions or purchases. These pages have thin content and no search value.
- Login and account pages — /login, /account, /dashboard. Private pages that should not appear in search results.
- Duplicate content pages — Print versions, filtered category pages with little unique content (e.g., ?color=red&size=large).
- Staging environments — Noindex your staging site to prevent it from competing with production in search results.
- Search results pages — Your site's internal search pages (e.g., yoursite.com/?s=query) produce thin, duplicate content.
- Admin and utility pages — /wp-admin, /checkout, /cart. No search value.
What nofollow Does — and When to Use It
A nofollow directive on a page-level robots tag tells search engines not to follow any links on that page. This is different from the rel="nofollow" attribute on an individual link, which applies only to that one link.
Page-level nofollow: <meta name="robots" content="nofollow">
Link-level nofollow: <a href="/page" rel="nofollow">Link text</a>
Page-level nofollow is rarely the right tool. It prevents crawlers from following all links on the page, including navigation links, footer links, and links to your own content — which can prevent important pages from being discovered. Use link-level nofollow instead for specific links you don't want to pass value to.
Common use cases for link-level nofollow:
- User-generated content links (comment sections, forums)
- Paid or sponsored links (legally required for disclosure)
- Links to sites you don't fully vouch for
Google also recognizes rel="sponsored" for paid links and rel="ugc" for user-generated content. These are more specific signals than nofollow.
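Taken together, the three link-level attributes look like this (the URLs and anchor text are placeholders). Google treats all three as hints rather than strict commands, and values can be combined in one attribute, e.g. rel="nofollow ugc":

```html
<!-- A link posted by a user in a comment or forum thread -->
<a href="https://example.com/user-site" rel="ugc">commenter's site</a>

<!-- A disclosed paid or affiliate placement -->
<a href="https://example.com/partner" rel="sponsored">partner offer</a>

<!-- A link to a site you don't fully vouch for -->
<a href="https://example.com/unknown" rel="nofollow">external resource</a>
```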
When to Combine noindex and nofollow
You can combine both directives: <meta name="robots" content="noindex, nofollow">
Use the combination when you want to both hide the page from search results AND prevent link equity from passing through the page's outbound links. This is the most aggressive crawl restriction short of blocking via robots.txt.
Specific scenarios for noindex + nofollow:
- Gateway or bridge pages with no SEO value and external links
- Login pages that link to partner or affiliate sites
- Error pages that might link to other resources
For most pages, noindex alone is sufficient. The follow/nofollow directive on a noindexed page is somewhat moot — search engines use it as a signal, but the practical impact of crawlers not following links on a page they're not indexing is usually minimal.
What you generally should NOT do: noindex a page while forgetting it's also blocked in robots.txt. If a page is disallowed in robots.txt, crawlers can't read the noindex meta tag — so the disallow in robots.txt takes precedence. This is a common configuration mistake that causes pages to stay in the index unintentionally.
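As a sketch of the mistake (the path is hypothetical): if robots.txt contains a rule like the one below, crawlers never fetch /old-page at all, so a noindex tag on that page goes unread and the URL can linger in search results. The fix is to remove the Disallow rule and let the page be crawled so the noindex directive can be seen:

```text
# robots.txt — this blocks crawling, so a noindex meta tag
# on /old-page is never fetched or processed
User-agent: *
Disallow: /old-page
```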
noindex Meta Tag vs robots.txt — Which to Use
Both noindex and robots.txt can prevent pages from appearing in search results, but they work differently and should not be confused:
| | noindex meta tag | robots.txt disallow |
|---|---|---|
| What it prevents | Indexing (not crawling) | Crawling (not indexing) |
| Can read the page? | Yes, bots visit and read the noindex | No, bots are blocked from fetching |
| Can follow links? | Yes (unless nofollow is also set) | No |
| URL stays in index? | Removed over time | May stay (URL discovered via links) |
| Best for | Pages that exist and should not be in SERP | Reducing crawl budget on high-volume paths |
Use robots.txt for: large sections of a site you never want crawled (admin, API endpoints, duplicate parameter URLs). Use noindex for: individual pages that need to be reachable but not in the search index.
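A typical robots.txt applying that split might look like the following sketch (the paths and parameter pattern are illustrative; Google and Bing support the `*` wildcard shown here):

```text
User-agent: *
# Never crawl these high-volume, no-search-value paths
Disallow: /wp-admin/
Disallow: /api/
Disallow: /*?sort=
# Everything else stays crawlable; individual pages opt out
# of indexing with their own noindex meta tag
```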
How to Add noindex and nofollow to Any Page
The simplest implementation: paste the tag directly into your HTML head element.
noindex only: <meta name="robots" content="noindex">
nofollow only: <meta name="robots" content="nofollow">
Both: <meta name="robots" content="noindex, nofollow">
In WordPress with Yoast SEO: Edit the post or page, go to the SEO tab, click Advanced, and set "Allow search engines to show this Page in search results?" to No.
In WordPress with Rank Math: Edit the post, open Rank Math's Advanced tab, and set "Robots Meta" to noindex.
In Next.js: Set robots: { index: false, follow: false } in your metadata export.
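With the App Router's Metadata API, that export could look like the following minimal sketch (the route path and title are hypothetical):

```typescript
// app/thank-you/page.tsx — a hypothetical noindexed route
export const metadata = {
  title: "Thank You",
  robots: {
    index: false,  // rendered as noindex in the robots meta tag
    follow: false, // rendered as nofollow
  },
};
```

Next.js serializes this object into a `<meta name="robots" content="noindex, nofollow">` tag in the page head at render time.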
In Shopify: You can add noindex to specific pages through theme customization or a Shopify SEO app. By default, Shopify automatically adds noindex to cart, account, and checkout pages.
Generate the correct robots tag HTML using the Meta Tag Generator — select your index and follow settings from the dropdowns and copy the output.
Frequently Asked Questions
Does noindex remove a page from Google immediately?
No. Google needs to crawl the page again to see the noindex directive and then process the removal from its index. This typically takes days to weeks depending on how often Googlebot crawls your site. High-traffic, frequently-crawled sites see faster removal. There is no way to force immediate removal via noindex — use Google Search Console's URL Removal tool for urgent cases.
Should I use noindex or just delete the page?
It depends. If the page has external inbound links, deleting it outright creates 404 errors and wastes the link value pointing at it — set up a 301 redirect to the most relevant remaining page instead. (Combining noindex with a redirect is self-defeating: a redirected page is never rendered, so its noindex tag is never read.) If the page has no external links and no purpose, deleting it is clean, optionally with a 301 redirect to an appropriate URL. noindex is best when you want the page accessible to users but invisible to search engines.
What happens to a noindexed page after I remove the tag?
Once you remove the noindex tag, Google can index the page again on its next crawl. There is no permanent penalty for having been noindexed previously. The page will be treated as a new page to evaluate for indexing. If it has quality content and good signals, it will appear in search results within days to weeks of the noindex tag removal.

