Saturday, April 6, 2024

The Story of Blocking 2 High-Ranking Pages With Robots.txt


I blocked two of our ranking pages using robots.txt. We lost a position here or there and all of the featured snippets for those pages. I expected a lot more impact, but the world didn't end.


I don't recommend doing this, and it's entirely possible that your results may differ from ours.

I was trying to see the impact that removing content would have on rankings and traffic. My theory was that if we blocked the pages from being crawled, Google would have to rely on the link signals alone to rank the content.

However, I don't think what I saw was actually the impact of removing the content. Maybe it was, but I can't say that with 100% certainty, as the impact feels too small. I'll be running another test to verify this. My new plan is to delete the content from the page and see what happens.

My working theory is that Google may still be using the content it used to see on the page to rank it. Google Search Advocate John Mueller has confirmed this behavior in the past.

So far, the test has been running for nearly five months. At this point, it doesn't look like Google will stop ranking the page. I suspect that after a while it will likely stop trusting that the content that was on the page is still there, but I haven't seen evidence of that happening.

Keep reading to see the test setup and impact. The main takeaway is that accidentally blocking pages (that Google already ranks) from being crawled using robots.txt probably isn't going to have much impact on your rankings, and they will likely still show in the search results.

I chose the same pages used in the "impact of links" study, except for the article on SEO pricing, because Joshua Hardwick had just updated it. I had seen the impact of removing the links to these articles and wanted to test the impact of removing the content. As I said in the intro, I'm not sure that's actually what happened.

I blocked these two pages on January 30, 2023:

These lines were added to our robots.txt file:

  • Disallow: /blog/top-bing-searches/
  • Disallow: /blog/top-youtube-searches/
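If you want to check how a crawler interprets rules like these before (or after) deploying them, you can test a robots.txt snippet locally. This is a minimal sketch using Python's standard-library `urllib.robotparser`; the file contents below contain only the two Disallow rules, not our full robots.txt, and the URLs are shown for illustration:

```python
from urllib.robotparser import RobotFileParser

# Only the two rules added for this test (the full live file is omitted).
rules = """\
User-agent: *
Disallow: /blog/top-bing-searches/
Disallow: /blog/top-youtube-searches/
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# Crawling of the two blocked pages is disallowed for any user agent...
print(parser.can_fetch("Googlebot", "https://ahrefs.com/blog/top-bing-searches/"))     # False
print(parser.can_fetch("Googlebot", "https://ahrefs.com/blog/top-youtube-searches/"))  # False

# ...while the rest of the blog remains crawlable.
print(parser.can_fetch("Googlebot", "https://ahrefs.com/blog/"))  # True
```

Note that this only tells you whether fetching is allowed; as the results below show, a disallow rule doesn't remove an already-indexed page from the search results.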

As you can see in the charts below, both pages lost some traffic. But it didn't lead to much change in our traffic estimates like I was expecting.

Organic traffic chart for the "Top YouTube Searches" article showing a bit of a drop
Traffic for the "Top YouTube Searches" article.
Organic traffic chart for the "Top Bing Searches" article showing a bit of a drop
Traffic for the "Top Bing Searches" article.

Looking at the individual keywords, you can see that some keywords lost a position or two and others actually gained ranking positions while the page was blocked from crawling.

The most interesting thing I noticed is that both pages lost all their featured snippets. I assume that having the pages blocked from crawling made them ineligible for featured snippets. When I later removed the block, the article on Bing searches quickly regained some snippets.

"Top Bing Searches" keywords were down one or two positions and lost featured snippets
Organic keywords for the "Top Bing Searches" article.
"Top YouTube Searches" keywords had mixed results (some up and some down) and also lost featured snippets
Organic keywords for the "Top YouTube Searches" article.

The most noticeable impact on the pages was in the SERP. The pages lost their custom titles and displayed a message saying that no information was available in place of the meta description.

SERP listing for "Top YouTube Searches" when blocked
SERP listing for "Top Bing Searches" when blocked

This was expected. It happens when a page is blocked by robots.txt. Additionally, you'll see the "Indexed, though blocked by robots.txt" status in Google Search Console if you inspect the URL.
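Part of why this status appears is that a robots.txt block stops Google from fetching the page at all, so it can't even see a `noindex` directive that would actually remove the URL from the index. As a quick illustration, a `noindex` only works if the crawler can read the HTML. Here's a sketch (using hypothetical HTML, not the actual pages from this test) that extracts the meta robots directive the way a crawler would, built on Python's standard-library `html.parser`:

```python
from html.parser import HTMLParser

class MetaRobotsParser(HTMLParser):
    """Collects the content of any <meta name="robots"> tag in a page."""

    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.append(attrs.get("content", ""))

# Hypothetical page HTML for illustration.
html = '<html><head><meta name="robots" content="noindex, follow"></head><body>...</body></html>'

parser = MetaRobotsParser()
parser.feed(html)
print(parser.directives)  # ['noindex, follow']
```

A robots.txt-blocked page never gets this far: the HTML is never fetched, so any `noindex` on it goes unseen, and the URL can stay indexed on link signals alone.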

"Indexed, though blocked by robots.txt" shown in the GSC Inspection Tool

I believe the message shown on the SERPs hurt clicks to the pages more than the ranking drops did. You can see some drop in impressions, but a larger drop in the number of clicks for the articles.

Traffic for the "Top YouTube Searches" article:

Traffic drop for the "Top YouTube Searches" article, via Google Search Console

Traffic for the "Top Bing Searches" article:

Traffic drop for the "Top Bing Searches" article, via Google Search Console

Final thoughts

I don't think any of you will be surprised by my takeaway here. Don't block pages you want indexed. It hurts. Not as badly as you might think it does, but it still hurts.
