What triggers this issue?

This issue reports all internal page URLs of a website that return an HTTP status code of 403 (Forbidden) while still receiving some organic search traffic (according to data from Ahrefs' Site Explorer).

Why is it important?

A 403 (Forbidden) HTTP response code indicates that the client is not permitted to access the resource for some reason other than authentication. Given that the page still receives organic traffic, its status may have changed to 403 only recently. Such pages will be removed from the search index only after search engines re-crawl them, which may take a while.

People landing on a 403 page from search results will not find what they were looking for and will most likely leave your website immediately. This creates a poor user experience and may send negative user-behaviour signals to Google.

Note that organic search traffic data in Ahrefs is not real-time, so search engines may have already deindexed the 403 URLs.

How to fix it?

URLs that return the 403 response code will be removed from Google's index upon a re-crawl.

You can use the URL Inspection tool in Google Search Console (the replacement for the retired Fetch as Google tool) to ask Google to re-crawl a URL. This will speed up the de-indexing process.

If the page is supposed to be indexable, make sure it returns the "200 OK" HTTP response code.
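To confirm what a URL actually returns, you can check its HTTP status code directly. Below is a minimal sketch in Python: it spins up a throwaway local server that answers 403 (a stand-in for a real page on your site, so the example is self-contained) and then checks the status the same way you would check any live URL.

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ForbiddenHandler(BaseHTTPRequestHandler):
    """Stand-in for a misconfigured page: always answers 403 Forbidden."""
    def do_GET(self):
        self.send_response(403)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging

# Start the throwaway server on a random free port.
server = HTTPServer(("127.0.0.1", 0), ForbiddenHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

def check_status(url):
    """Return the HTTP status code for a URL without raising on 4xx/5xx."""
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

url = f"http://127.0.0.1:{server.server_port}/some-page"
status = check_status(url)
print(status)  # prints 403 for this stand-in server
server.shutdown()
```

Point `check_status` at one of the reported URLs: an indexable page should return 200, not 403.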

Also note that a DoS-protection system or a firewall configuration can block access to the website from certain locations, returning the 403 code only to specific users. These systems may also be blocking our crawler specifically at the server level, while the page remains live for regular visitors.
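This kind of User-Agent-based blocking is easy to reproduce. The sketch below (again using a throwaway local server as a stand-in for a firewall, so it is self-contained) requests the same URL twice with different User-Agent headers: a browser-like one and a crawler-like one, and shows how the answers can differ.

```python
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class UAFilterHandler(BaseHTTPRequestHandler):
    """Stand-in for a firewall that blocks anything that looks like a bot."""
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        self.send_response(403 if "bot" in ua.lower() else 200)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), UAFilterHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/page"

def status_for(user_agent):
    """Fetch the URL with a given User-Agent and return the status code."""
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

browser_status = status_for("Mozilla/5.0")
crawler_status = status_for("AhrefsBot/7.0")
print(browser_status, crawler_status)  # prints: 200 403
server.shutdown()
```

If a real URL behaves like this, the page looks fine in a browser while crawlers see 403; allowlisting the crawler in the firewall or DoS-protection settings resolves the report.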
