Everything You Need To Know About The X-Robots-Tag HTTP Header

Search engine optimization, in its most basic sense, relies upon one thing above all others: search engine spiders crawling and indexing your site.

But almost every website is going to have pages that you don't want to include in this exploration.

For example, do you really want your privacy policy or internal search pages showing up in Google results?

In a best-case scenario, these pages do nothing to actively drive traffic to your site, and in a worst-case scenario, they could be diverting traffic from more important pages.

Luckily, Google allows webmasters to tell search engine bots what pages and content to crawl and what to ignore. There are several ways to do this, the most common being a robots.txt file or the meta robots tag.

We have an excellent and detailed explanation of the ins and outs of robots.txt, which you should definitely read.

But in high-level terms, it's a plain text file that lives in your website's root and follows the Robots Exclusion Protocol (REP).

Robots.txt provides crawlers with instructions about the site as a whole, while meta robots tags contain instructions for specific pages.
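For illustration, a minimal robots.txt, using hypothetical paths, might look like this:

# Hypothetical example: block all crawlers from internal search results
User-agent: *
Disallow: /search/
Sitemap: https://www.example.com/sitemap.xml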

Some meta robots tags you might employ include index, which tells search engines to add the page to their index; noindex, which tells them not to add a page to the index or include it in search results; follow, which instructs a search engine to follow the links on a page; nofollow, which tells it not to follow links; and a whole host of others.
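In HTML, these directives take the form of a meta tag in the page's <head>. For example, a page that should stay out of the index and whose links should not be followed would carry:

<!-- Tells search engines not to index this page or follow its links -->
<meta name="robots" content="noindex, nofollow">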

Both robots.txt and meta robots tags are useful tools to keep in your toolbox, but there's also another way to instruct search engine bots to noindex or nofollow: the X-Robots-Tag.

What Is The X-Robots-Tag?

The X-Robots-Tag is another way for you to control how your webpages are crawled and indexed by spiders. Sent as part of the HTTP header response for a URL, it controls indexing for an entire page, as well as for specific elements on that page.
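For example, a response serving a PDF with the tag might look like this (the values are illustrative placeholders):

HTTP/1.1 200 OK
Content-Type: application/pdf
X-Robots-Tag: noindex, nofollow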

And whereas using meta robots tags is fairly straightforward, the X-Robots-Tag is a bit more complicated.

But this, of course, raises the question:

When Should You Use The X-Robots-Tag?

According to Google, "Any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag."

While robots directives can be applied with both the meta robots tag and the X-Robots-Tag, there are certain situations where you would want to use the X-Robots-Tag, the two most common being when:

  • You want to control how your non-HTML files are being crawled and indexed.
  • You want to serve directives site-wide instead of on a page level.

For example, if you want to block a particular image or video from being crawled, the HTTP response method makes this easy.

The X-Robots-Tag header is also useful because it allows you to combine multiple tags within an HTTP response, or use a comma-separated list of directives to specify them.

Perhaps you don't want a certain page to be cached and also want it to be unavailable after a particular date. You can use a combination of the "noarchive" and "unavailable_after" tags to instruct search engine bots to follow these rules.
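As a sketch, assuming an Apache server with mod_headers enabled, that combination could be applied to a hypothetical page like this (the filename and date are placeholder examples):

# Hypothetical example: don't cache this page, and treat it as gone after the date below
<Files "old-promo.html">
  Header set X-Robots-Tag "noarchive, unavailable_after: 25 Jun 2023 15:00:00 PST"
</Files>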

Essentially, the power of the X-Robots-Tag is that it is much more flexible than the meta robots tag.

The advantage of using an X-Robots-Tag with HTTP responses is that it allows you to use regular expressions to apply crawl directives to non-HTML files, as well as apply directives on a larger, global level.
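As a minimal sketch, again assuming Apache with mod_headers enabled, a regular expression can noindex several non-HTML file types at once (the extensions are arbitrary examples):

# Hypothetical example: noindex common office-document formats site-wide
<FilesMatch "\.(docx?|xlsx?|pptx?)$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>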

To help you understand the difference between these directives, it's useful to categorize them by type. That is, are they crawler directives or indexer directives?

Here’s a handy cheat sheet to explain:

Crawler directives:

  • Robots.txt: uses the user-agent, allow, disallow, and sitemap directives to specify where on-site search engine bots are allowed and not allowed to crawl.

Indexer directives:

  • Meta robots tag: allows you to specify and prevent search engines from showing particular pages of a site in search results.
  • Nofollow: allows you to specify links that should not pass on authority or PageRank.
  • X-Robots-Tag: allows you to control how specified file types are indexed.

Where Do You Put The X-Robots-Tag?

Let's say you want to block specific file types. An ideal approach would be to add the X-Robots-Tag to an Apache configuration or a .htaccess file.

The X-Robots-Tag can be added to a site's HTTP responses in an Apache server configuration via the .htaccess file.

Real-World Examples And Uses Of The X-Robots-Tag

So that sounds great in theory, but what does it look like in the real world? Let's take a look.

Let's say we wanted search engines not to index .pdf file types. This configuration on Apache servers would look something like the below:

# Apply to all files ending in .pdf
<Files ~ "\.pdf$">
  Header set X-Robots-Tag "noindex, nofollow"
</Files>

In Nginx, it would look like the below:

location ~* \.pdf$ {
  add_header X-Robots-Tag "noindex, nofollow";
}

Now, let's look at a different scenario. Let's say we want to use the X-Robots-Tag to block image files, such as .jpg, .gif, .png, etc., from being indexed. You could do this with an X-Robots-Tag that would look like the below:

# Apply to .png, .jpg/.jpeg, and .gif images
<Files ~ "\.(png|jpe?g|gif)$">
  Header set X-Robots-Tag "noindex"
</Files>
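On Nginx, the equivalent sketch would look something like this:

# Match common image extensions, case-insensitively
location ~* \.(png|jpe?g|gif)$ {
  add_header X-Robots-Tag "noindex";
}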

Please note that understanding how these directives work and the impact they have on one another is crucial.

For example, what happens if crawler bots discover a URL where both an X-Robots-Tag and a meta robots tag are set?

If that URL is blocked by robots.txt, then any indexing and serving directives cannot be discovered and will not be followed.

If directives are to be followed, then the URLs containing them cannot be disallowed from crawling.

Check For An X-Robots-Tag

There are a few different methods that can be used to check for an X-Robots-Tag on a site.

The easiest way to check is to install a browser extension that will show you X-Robots-Tag information about a URL.

Screenshot of Robots Exclusion Checker, December 2022

Another plugin you can use to determine whether an X-Robots-Tag is being used is the Web Developer plugin.

By clicking the plugin in your browser and navigating to "View Response Headers," you can see the various HTTP headers being used.
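You can also inspect response headers from the command line with curl; here is a sketch using a placeholder URL:

curl -sI https://www.example.com/sample.pdf | grep -i x-robots-tag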

Another method that can be used to scale in order to identify issues on sites with millions of pages is Screaming Frog.

After running a site through Screaming Frog, you can navigate to the "X-Robots-Tag" column.

This will show you which sections of the site are using the tag, along with which specific directives.

Screenshot of Screaming Frog Report, X-Robots-Tag, December 2022

Using X-Robots-Tags On Your Site

Understanding and controlling how search engines interact with your website is the cornerstone of search engine optimization. And the X-Robots-Tag is a powerful tool you can use to do just that.

Just be aware: it's not without its risks. It is very easy to make a mistake and deindex your entire site.

That said, if you're reading this piece, you're probably not an SEO beginner. So long as you use it wisely, take your time, and check your work, you'll find the X-Robots-Tag to be a useful addition to your arsenal.