Crawled - Currently Not Indexed: How to Resolve It
The "Crawled - currently not indexed" status in Google Search Console means Googlebot visited and downloaded your page, but Google chose not to add it to its search index. This is distinct from pages that haven't been crawled at all: Google actively decided this page isn't worth indexing, so the fix is to identify and address why.
Why Google Chooses Not to Index a Crawled Page
Google's indexing decision is based on whether it believes the page adds unique value to its index. The most common reasons it opts out:
- Thin content: Pages with very little text, mostly navigation, or fewer than 300 words of substantive content are routinely skipped.
- Near-duplicate content: If your page is very similar to another page on your site or on the web, Google will typically index only the "best" version.
- Weak canonical signals: If canonical tags conflict, or if you don't have a canonical at all and the page appears on multiple URLs, Google may pick a different canonical than you intended.
- Orphan pages: Pages with no internal links pointing to them are harder for Google to assess authority and relevance — they get deprioritized.
- Low E-E-A-T signals: Pages lacking author information, references, or demonstrable expertise on their topic may be skipped, especially in YMYL (Your Money or Your Life) content areas.
- Crawl budget exhaustion: Large sites with many low-quality pages can cause Google to deprioritize crawling and indexing less important sections.
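Thin content is the easiest of these causes to check at scale. As a rough sketch (the 300-word cutoff mirrors the heuristic above, not an official Google threshold), the following uses Python's standard-library `html.parser` to estimate how many words of substantive text a page contains, ignoring navigation and script/style content:

```python
from html.parser import HTMLParser

# Tags whose contents are navigation chrome or non-content and should
# not count toward the substantive word total.
SKIP_TAGS = {"script", "style", "nav", "header", "footer", "aside"}

class WordCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.depth = 0   # nesting depth inside skipped tags
        self.words = 0

    def handle_starttag(self, tag, attrs):
        if tag in SKIP_TAGS:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in SKIP_TAGS and self.depth > 0:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0:
            self.words += len(data.split())

def substantive_word_count(html: str) -> int:
    parser = WordCounter()
    parser.feed(html)
    return parser.words

html = "<nav>Home About Contact</nav><main><p>Short page body.</p></main>"
print(substantive_word_count(html))  # nav links excluded: counts 3 words
```

A real audit would fetch each URL from your sitemap and flag pages that fall well below the threshold for expansion or consolidation.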
Step-by-Step Fix Sequence
1. Run the URL Inspection Tool in Search Console on the affected URL. Check whether Google's selected canonical matches your declared canonical; if they differ, that's the root problem.
2. Audit content quality. Read the page as a user would: does it answer a specific question fully, and does it contain at least 300 words of original, useful text? If not, expand it or merge it with a related page.
3. Add internal links from high-authority pages on your site pointing to the affected URL. A page with no inbound internal links looks like an orphan; even one or two strong internal links help significantly.
4. Fix canonical inconsistencies. Ensure the page has a self-referential canonical tag: `<link rel="canonical" href="https://yourdomain.com/page-url">`. Check that no other page canonicalises to a different version of this URL.
5. Submit only high-value URLs in your sitemap. Remove thin, near-duplicate, or low-value pages from sitemap.xml; padding the sitemap with weak pages signals to Google that you treat every page as equally important, diluting the signal for the pages that matter.
6. Make a meaningful content update, then request indexing via the URL Inspection Tool. Requesting indexing on an unchanged page rarely helps.
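The canonical check in steps 1 and 4 can be automated across a list of URLs. Here is a minimal sketch using only Python's standard library; the `yourdomain.com/page-url` address is the placeholder from the snippet above, and the check assumes canonicals are declared in HTML rather than via HTTP headers:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collect href values of <link rel="canonical"> tags."""
    def __init__(self):
        super().__init__()
        self.canonicals = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and (a.get("rel") or "").lower() == "canonical":
            self.canonicals.append(a.get("href"))

def check_canonical(html: str, page_url: str) -> str:
    """Classify a page's canonical setup relative to its own URL."""
    finder = CanonicalFinder()
    finder.feed(html)
    if not finder.canonicals:
        return "missing"        # no canonical declared at all
    if len(set(finder.canonicals)) > 1:
        return "conflicting"    # multiple, disagreeing canonical tags
    return "self" if finder.canonicals[0] == page_url else "cross"

html = '<head><link rel="canonical" href="https://yourdomain.com/page-url"></head>'
print(check_canonical(html, "https://yourdomain.com/page-url"))  # prints "self"
```

Pages reporting "missing", "conflicting", or an unintended "cross" result are the ones most likely to have Google pick a different canonical than you intended.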
How Long Does It Take After Fixing?
After making substantial improvements and requesting indexing, most pages on small sites are re-evaluated within 2–4 weeks. For larger sites or pages with historically weak signals, it can take 4–8 weeks. Consistent internal linking and inclusion in the sitemap speed up recrawling.
What to Avoid
- Repeatedly requesting indexing without making meaningful changes — Google throttles requests from sites that spam the tool.
- Programmatically generating large numbers of near-identical pages with minor variations (different city names, product sizes, etc.) without unique content for each.
- Blocking CSS, JavaScript, or image resources in robots.txt that are needed for rendering the page's main content — Googlebot needs to render the page to assess its quality.
- Using noindex accidentally — double-check that the affected page doesn't have a `<meta name="robots" content="noindex">` tag.
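The robots.txt pitfall above can be verified programmatically with Python's standard-library `urllib.robotparser`. The rules and asset paths below are illustrative; in practice you would load your live robots.txt and list the CSS/JS resources your page actually needs to render:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that blocks an asset directory. To check a
# live site, use set_url("https://yourdomain.com/robots.txt") + read().
robots_txt = """\
User-agent: *
Disallow: /assets/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Resources the page needs to render; paths are illustrative.
for url in ("https://yourdomain.com/assets/app.js",
            "https://yourdomain.com/page-url"):
    allowed = parser.can_fetch("Googlebot", url)
    print(url, "->", "ok" if allowed else "BLOCKED for Googlebot")
```

Any rendering-critical resource reported as blocked should be unblocked, since Googlebot assesses page quality from the rendered result, not the raw HTML.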
Related: Organic Rankings Dropped | Page Not Indexed in Google.