How HTTP status codes affect Google's crawlers

This page describes how different HTTP status codes impact Google's ability to crawl your web content. We cover the top 20 status codes that Google encounters on the web. More exotic status codes, such as 418 (I'm a teapot), aren't covered.

HTTP status codes

HTTP status codes are generated by the server that's hosting the site when it responds to a request made by a client, for example a browser or a crawler. Every HTTP status code has a different meaning, but often the outcome of the request is the same. For example, there are multiple status codes that signal redirection, but their outcome is the same.
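To make the client/server exchange concrete, here's a minimal sketch using Python's standard library. It's an ordinary HTTP client, not Googlebot, and the hostname and user-agent string are placeholders; the point is that the status code is the first thing any client, crawler or browser, reads from the response.

```python
# Minimal illustration (not Googlebot's implementation): issue a request and
# read the status code the server returns.
import http.client

conn = http.client.HTTPSConnection("example.com")
conn.request("GET", "/", headers={"User-Agent": "my-example-crawler"})
response = conn.getresponse()

print(response.status, response.reason)  # e.g. "200 OK" or "404 Not Found"
conn.close()
```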

Search Console generates error messages for status codes in the 4xx–5xx range, and for failed redirections (3xx). If the server responded with a 2xx status code, the content received in the response may be considered for indexing.

The following table contains the HTTP status codes that Google encounters most often, with an explanation of how Google handles each status code.


2xx (success)

Google considers the content for processing (for example, in the case of Google Search, for indexing). If the content suggests an error, for example an empty page or an error message, Search Console will show a soft 404 error.

200 (success)

Google passes on whatever it received to the next processing step (which is product specific). For Google Search, the next system is the indexing pipeline. The indexing systems may index the content, but that's not guaranteed.

201 (created)
202 (accepted)

Google waits for the content for a limited time, then passes on whatever it received to the next processing step (which is product specific). The timeout is user agent dependent, for example Googlebot Smartphone may have a different timeout than Googlebot Image.

204 (no content)

Google wasn't able to receive any content and therefore can't process it.
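For illustration, the hypothetical Flask app below contrasts a proper 404, a 204, and a "soft 404" (a 200 response whose body is really an error message). The routes and messages are invented for this sketch.

```python
# Hypothetical server-side sketch of a soft 404 vs. a real 404 vs. a 204.
from flask import Flask

app = Flask(__name__)

@app.route("/soft-404")
def soft_404():
    # Served with 200: Google may flag this as a soft 404 in Search Console.
    return "Sorry, this page doesn't exist.", 200

@app.route("/real-404")
def real_404():
    # Served with 404: the crawler knows the content doesn't exist.
    return "Not found", 404

@app.route("/empty")
def empty():
    # 204: no content is returned, so there is nothing for Google to process.
    return "", 204
```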

3xx (redirection)

By default, Google's crawlers follow up to 10 redirect hops, though specific products' crawlers may have different limits. For example, Googlebot generally follows up to 10 redirect hops when crawling general web content, while Google Inspection Tools doesn't follow redirects.

Any content Google receives from the redirecting URL is ignored, and the final target URL's content is processed instead. For robots.txt files, learn how Google handles a robots.txt that returns a 3xx status code.
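As a rough illustration of what a hop limit means in practice, the following sketch follows redirects manually and gives up after 10 hops. It approximates the behavior described above but is not Google's crawler code; query strings and error handling are omitted.

```python
# Simplified redirect follower with a hop cap (illustration only).
import http.client
from urllib.parse import urljoin, urlsplit

MAX_HOPS = 10

def fetch_with_hop_limit(url):
    for _ in range(MAX_HOPS + 1):
        parts = urlsplit(url)
        if parts.scheme == "https":
            conn = http.client.HTTPSConnection(parts.netloc)
        else:
            conn = http.client.HTTPConnection(parts.netloc)
        conn.request("GET", parts.path or "/")
        response = conn.getresponse()
        if response.status in (301, 302, 303, 307, 308):
            # Resolve a possibly relative Location header and follow the hop.
            url = urljoin(url, response.getheader("Location"))
            conn.close()
            continue
        return response  # final, non-redirect response
    raise RuntimeError("gave up after %d redirect hops" % MAX_HOPS)
```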

301 (moved permanently)

Google follows the redirect, and Google systems use the redirect as a strong signal that the redirect target should be processed.

302 (found)
303 (see other)

By default, Google's crawlers follow the redirect, and Google systems use the redirect as a weak signal that the redirect target should be processed. Other products may handle the redirect differently.

304 (not modified)

Google's crawlers signal the next processing system that the content is the same as the last time it was crawled. In the case of Google Search, the indexing pipeline may recalculate signals for the URL, but otherwise the status code has no effect on indexing.

307 (temporary redirect)

Equivalent to 302.

308 (moved permanently)

Equivalent to 301.
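The 304 behavior above relies on conditional requests. The sketch below shows the mechanism from the client's side, with a placeholder hostname and ETag: the client sends the validator it saved from a previous fetch, and a server that supports validators can answer 304 with no body.

```python
# Conditional request sketch: the ETag value is a placeholder from an earlier
# fetch. A 304 answer means there is nothing new to download.
import http.client

conn = http.client.HTTPSConnection("www.example.com")
conn.request("GET", "/page.html",
             headers={"If-None-Match": '"etag-from-last-crawl"'})
response = conn.getresponse()

if response.status == 304:
    # Content unchanged since the last crawl; the cached copy is still valid.
    print("Content unchanged")
else:
    body = response.read()
    print("Fetched", len(body), "bytes with status", response.status)
conn.close()
```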

4xx (client errors)

Google doesn't use the content from URLs that return 4xx status codes. If a URL was previously used but is now returning a 4xx status code, Google systems will stop using the URL over time. In the case of Google Search, Google doesn't index URLs that return a 4xx status code, and URLs that are already indexed and return a 4xx status code are removed from the index.

Any content Google receives from URLs that return a 4xx status code is ignored.
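As a simple illustration of the two most common "content doesn't exist" responses, here's a hypothetical Flask handler; the catch-all route and the list of removed paths are invented for this sketch.

```python
# Hypothetical example: 410 for content removed on purpose, 404 otherwise.
from flask import Flask, abort

app = Flask(__name__)

REMOVED_FOREVER = {"/old-campaign"}

@app.route("/<path:page>")
def serve(page):
    path = "/" + page
    if path in REMOVED_FOREVER:
        abort(410)  # gone: the content was removed intentionally
    abort(404)      # not found: nothing is known about this URL
```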

400 (bad request)
401 (unauthorized)
403 (forbidden)
404 (not found)
410 (gone)
411 (length required)

All 4xx errors, except 429, are treated the same: Google's crawlers inform the next processing system that the content doesn't exist.

In the case of Google Search, the indexing pipeline removes the URL from the index if it was previously indexed. Newly encountered 404 pages aren't processed. The crawling frequency gradually decreases.

429 (too many requests)

Google's crawlers treat the 429 status code as a signal that the server is overloaded, and it's considered a server error.
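For example, a server under load might respond along these lines. This is a hypothetical Flask sketch; the load check and the Retry-After value are placeholders.

```python
# Hypothetical rate-limiting sketch: when overloaded, answer 429 and suggest a
# back-off period via the Retry-After header.
from flask import Flask

app = Flask(__name__)

def server_is_overloaded():
    # Placeholder for a real load check (queue depth, CPU, etc.).
    return False

@app.route("/")
def index():
    if server_is_overloaded():
        return "Too many requests", 429, {"Retry-After": "120"}
    return "Hello, crawler!"
```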

5xx (server errors)

5xx and 429 errors prompt Google's crawlers to temporarily slow down crawling. For Google Search, already indexed URLs are preserved in the index, but they're eventually dropped.

Any content Google receives from URLs that return a 5xx status code is ignored. For robots.txt files, learn how Google handles a robots.txt that returns a 5xx status code.

500 (internal server error)
502 (bad gateway)
503 (service unavailable)

Google decreases the crawl rate for the site. The decrease in crawl rate is proportionate to the number of individual URLs that are returning a server error. For Google Search, Google's indexing pipeline removes from the index URLs that persistently return a server error.
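To illustrate what "temporarily slow down crawling" can look like from a client's perspective, here's a rough exponential backoff sketch. It's a generic retry pattern, not Google's actual scheduling logic.

```python
# Back off exponentially while a site keeps returning 5xx or 429.
import time
import http.client

def polite_fetch(host, path, max_attempts=5):
    delay = 1.0
    for attempt in range(max_attempts):
        conn = http.client.HTTPSConnection(host)
        conn.request("GET", path)
        response = conn.getresponse()
        if response.status not in (429, 500, 502, 503):
            return response
        conn.close()
        # Server error or overload: wait longer before each retry.
        time.sleep(delay)
        delay *= 2
    raise RuntimeError("server kept returning errors; giving up for now")
```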