Robots Meta Tag Directives — The Complete Reference
The robots meta tag is the most direct way to control how search engines index and display your pages. Most SEO guides cover only noindex and nofollow, but several additional directives control snippet length, image preview size, caching, and more.
This reference covers every documented robots meta tag directive, what it does, which search engines support it, and when to use it.
How the Robots Meta Tag Works
The robots meta tag lives in your HTML head element and provides instructions to search engine crawlers about how to handle the page:
<meta name="robots" content="noindex, nofollow">
The name attribute can be "robots" (applies to all crawlers) or a specific crawler name: "googlebot," "bingbot," "msnbot," etc. Crawler-specific rules override the generic "robots" rule for that crawler.
Multiple directives are comma-separated in the content attribute. Directives are additive — specifying both "noindex" and "nofollow" applies both instructions simultaneously.
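The override and merging rules above can be sketched in a few lines of Python. This is an illustrative model of how a crawler might resolve the tags, not any engine's actual implementation; the function and variable names are made up for the example.

```python
# Sketch: resolve robots meta tags for a given crawler. A crawler-specific
# tag (e.g. name="googlebot") overrides the generic name="robots" tag.
def resolve_directives(meta_tags, crawler):
    """meta_tags: list of (name, content) pairs from the page's <head>."""
    generic, specific = set(), set()
    for name, content in meta_tags:
        directives = {d.strip().lower() for d in content.split(",")}
        if name.lower() == "robots":
            generic |= directives
        elif name.lower() == crawler.lower():
            specific |= directives
    # Crawler-specific rules win when present; otherwise fall back to generic.
    return sorted(specific if specific else generic)

tags = [("robots", "index, follow"), ("googlebot", "noindex")]
print(resolve_directives(tags, "googlebot"))  # ['noindex']
print(resolve_directives(tags, "bingbot"))    # ['follow', 'index']
```

Note that within a single tag the directives are simply accumulated, which matches the additive behavior described above.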
Critical distinction from robots.txt: the robots meta tag is read only after a crawler fetches the page. If a page is blocked in robots.txt (Disallow), crawlers won't fetch it and therefore can't read the robots meta tag. Never put a noindex tag on a page that's also blocked in robots.txt — it won't work.
Unlike robots.txt, the robots meta tag applies at the page level. If you need to noindex an entire directory, you need to either add the tag to every page in that directory or use the X-Robots-Tag HTTP response header (which works the same way but is set server-side).
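For the server-side route, a minimal sketch of emitting an X-Robots-Tag header for every URL under certain directories might look like the following. The path prefixes and directive values are examples only, not a recommendation for any particular site:

```python
# Paths that should carry a noindex header (illustrative examples).
NOINDEX_PREFIXES = ("/internal-search/", "/staging/")

def robots_headers(path):
    """Return extra response headers for a given request path."""
    if path.startswith(NOINDEX_PREFIXES):
        # Same directives as the meta tag, but applied server-side, so it
        # also works for PDFs, images, and other non-HTML responses.
        return [("X-Robots-Tag", "noindex, nofollow")]
    return []

print(robots_headers("/internal-search/?q=shoes"))
# [('X-Robots-Tag', 'noindex, nofollow')]
```

Because the header is attached to the HTTP response rather than the HTML, this approach covers non-HTML file types that cannot carry a meta tag at all.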
noindex and nofollow — The Most Used Directives
noindex
Tells search engines not to include this page in their index. Crawlers will still visit the page to read the directive and will eventually remove the URL from the index. Supported by Google, Bing, and all major crawlers.
Use for: admin pages, thank-you pages, login pages, staging environments, duplicate content, internal search results.
nofollow (page-level)
Tells crawlers not to follow any links on this page. Note: this is the page-level directive. Link-level nofollow is applied with rel="nofollow" on individual anchor tags. The page-level directive is rarely the right tool — it blocks link equity flow through all links on the page, including your own navigation.
Use page-level nofollow for: pages that should not pass any link value — rare edge cases like gateway pages or utility pages with no outbound links worth following.
none
Equivalent to "noindex, nofollow" — shorthand for both directives simultaneously.
all
The default behavior. Equivalent to "index, follow." You never need to explicitly set this, but it can be useful as a reset if you're overriding a global noindex rule.
noarchive, nosnippet & nocache — Less Common but Useful
noarchive
Prevents search engines from showing a cached copy of the page in search results. Google historically showed a "Cached" link in SERPs for most pages (the link was retired in early 2024), and other engines such as Bing still expose cached versions — noarchive removes that link and the stored copy. The page still gets indexed and appears in search results.
Use for: pages with time-sensitive or regularly updated content (live pricing, real-time data, stock availability), pages with confidential content you don't want archived, or compliance situations where showing old versions of content could create legal issues.
nosnippet
Prevents Google from showing any snippet (the descriptive text) in search results — neither the meta description nor extracted text from the page. The page appears in results with just the title and URL. Also prevents Google from using the page for featured snippets and People Also Ask boxes.
Use for: pages where you want to control click-through tightly, or pages with licensing restrictions on content display. Rare — most sites don't want to prevent snippets.
nocache
An older directive equivalent to noarchive. Some search engines treat them identically. For modern implementations, prefer noarchive — it's more specifically documented.
noodp
Prevented search engines from using the Open Directory Project (DMOZ) description. DMOZ shut down in 2017. This directive is now completely obsolete.
max-image-preview and max-snippet — Control Display in Results
max-image-preview:[setting]
Controls the maximum size of image previews Google displays for this page in search results and Google Discover. Three settings:
- max-image-preview:none — No image thumbnails shown
- max-image-preview:standard — Standard (small) thumbnail size
- max-image-preview:large — Large image previews, including in Google Discover. This is what you want for most content — larger images in Discover get significantly more clicks.
The default (without specifying this tag) is typically "standard." Setting it to "large" explicitly opts in to the largest available display size across all Google surfaces.
<meta name="robots" content="max-image-preview:large">
max-snippet:[number]
Controls the maximum text snippet length in characters. Use "max-snippet:-1" for no limit (default in most cases). Use "max-snippet:0" to prevent snippets entirely (equivalent to nosnippet). Specific numbers (e.g., max-snippet:50) limit to that many characters.
max-video-preview:[number]
Same concept for video previews. max-video-preview:-1 allows unlimited video preview length. max-video-preview:0 prevents video previews.
How to Add Robots Meta Tags Without a Plugin
Generate robots meta tags using the Meta Tag Generator — select your index (index/noindex) and follow (follow/nofollow) settings from the dropdowns, and the generator outputs the correct HTML.
For directives not covered by the generator (noarchive, max-image-preview, etc.), add them manually to your HTML head:
<!-- Standard robots tag -->
<meta name="robots" content="index, follow">

<!-- With max-image-preview for Discover -->
<meta name="robots" content="index, follow, max-image-preview:large">

<!-- Noindex with cache prevention -->
<meta name="robots" content="noindex, noarchive">

<!-- Googlebot-specific rule that differs from other crawlers -->
<meta name="googlebot" content="noindex">
<meta name="robots" content="index, follow">
In WordPress: Yoast SEO's Advanced tab includes noindex, nofollow, and noarchive selectors per-page. Rank Math's Advanced tab has similar controls. For max-image-preview:large, add it via the theme's header.php or through a custom head code plugin.
In Next.js: Set robots: { index: true, follow: true, 'max-image-preview': 'large' } in your metadata export. The App Router handles the correct HTML output.
Frequently Asked Questions
What is the difference between robots.txt and the robots meta tag?
robots.txt controls which URLs crawlers are allowed to fetch. The robots meta tag controls what crawlers do with the page after fetching it (index it, follow links, cache it). A page blocked in robots.txt cannot be crawled at all — meaning crawlers cannot read its robots meta tag. A noindexed page (via meta tag) can be crawled but will not appear in search results. Use robots.txt for crawl budget control; use the meta tag for indexing control.
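The crawl-versus-index distinction can be demonstrated with Python's standard-library robots.txt parser. The rules and URLs below are made up for the demo:

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# Blocked by robots.txt: the crawler never fetches the page, so a noindex
# meta tag inside it would never be seen.
print(rp.can_fetch("*", "https://example.com/private/report"))  # False

# Allowed by robots.txt: the crawler fetches the page and can honor any
# robots meta tag it finds there.
print(rp.can_fetch("*", "https://example.com/blog/post"))  # True
```

This is exactly why a noindex tag on a robots.txt-blocked page has no effect: the fetch is refused before the HTML is ever read.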
Should I use max-image-preview:large on every page?
Yes, for most sites. Setting max-image-preview:large on your article and landing pages explicitly opts into Google's largest image display sizes in Google Discover and image-rich search results. This is especially valuable for content that relies on visual appeal — recipes, travel, design, product pages. There's no cost to setting it unless you specifically want to restrict image display size.
Can I use multiple robots meta tags on one page?
You can use multiple meta tags with the name="robots" attribute, but best practice is to combine all directives in one tag using comma separation: content="noindex, nofollow, noarchive". Multiple tags are valid HTML, but combining them in one tag is cleaner and avoids potential conflicts where different tags specify contradictory instructions.
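Consolidating several tags into one is mechanical: concatenate the content values and de-duplicate. A small sketch (the function name is an assumption for the example):

```python
# Sketch: collapse several robots content values into one combined value,
# de-duplicating directives while preserving first-seen order.
def combine_robots_tags(contents):
    seen, merged = set(), []
    for content in contents:
        for directive in content.split(","):
            d = directive.strip().lower()
            if d and d not in seen:
                seen.add(d)
                merged.append(d)
    return ", ".join(merged)

print(combine_robots_tags(["noindex, nofollow", "noarchive, noindex"]))
# noindex, nofollow, noarchive
```

Note this only removes exact duplicates; it does not resolve genuine contradictions (e.g. "index" in one tag and "noindex" in another), which is the conflict situation the answer above warns about.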

