
Robots.txt Generator

Build custom robots.txt entries for crawler control.

How This Tool Works

The Robots.txt Generator creates a robots.txt file — a plain text file placed at the root of your website that tells search engine crawlers which pages they are and aren't allowed to access. Common uses: blocking admin pages (/admin/), staging environments, duplicate parameter URLs, and private internal tools. Important: robots.txt controls crawl access, not indexing. A page blocked by robots.txt can still appear in search results if another site links to it — use noindex meta tags to prevent indexing.
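A minimal file covering the common cases above (the paths are illustrative) might look like:

```text
# Applies to all crawlers
User-agent: *
# Block the admin area and a private internal tool
Disallow: /admin/
Disallow: /private/
```

Remember: this only discourages crawling; it does not by itself keep the URLs out of search results.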

How to Use

  1. Select the crawler to configure (use '*' for all crawlers, or specify Googlebot, Bingbot, etc.).
  2. Enter paths to disallow (e.g. /admin/, /staging/, /private/).
  3. Add your sitemap URL at the bottom.
  4. Place the generated file at yourdomain.com/robots.txt and test it in Google Search Console.
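Following the steps above, a generated file (with a placeholder domain) might look like this:

```text
User-agent: *
Disallow: /admin/
Disallow: /staging/
Disallow: /private/

Sitemap: https://yourdomain.com/sitemap.xml
```

Save it as plain text named exactly robots.txt at the site root — crawlers only look for it at that one location.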

Common Questions

Does blocking a URL in robots.txt remove it from Google?

No. Blocking crawling prevents Google from reading the page's content, but Google can still discover the URL from external links and list it in results without a snippet. To remove a page from search results, use a noindex meta tag AND allow crawling so Google can see the noindex instruction.
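As a sketch, the noindex instruction is a meta tag in the page's HTML head (an equivalent X-Robots-Tag HTTP response header also works for non-HTML files):

```html
<!-- In the page's <head>: tells crawlers not to index this page -->
<meta name="robots" content="noindex">
```

The page must stay crawlable — if robots.txt blocks it, Google never fetches the page and never sees this tag.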

Can robots.txt block JavaScript and CSS files?

You can, but you shouldn't. Google needs to render JavaScript and load CSS to understand your pages properly. Blocking Google from these resources makes your pages look broken to the crawler, potentially hurting rankings.
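If a broad Disallow rule accidentally covers asset files, you can carve them back out with Allow rules and the * wildcard, both of which Google's robots.txt implementation supports (the paths here are illustrative):

```text
User-agent: Googlebot
Disallow: /app/
# Re-allow script and style assets inside the blocked directory
Allow: /app/*.js
Allow: /app/*.css
```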

What is the difference between Disallow: / and Disallow: /admin/?

Disallow: / blocks the entire site. Disallow: /admin/ blocks only URLs whose paths start with /admin/. The trailing slash matters because Disallow rules are prefix matches — Disallow: /admin (no slash) is broader and would also block /administration and /admin-panel, while Disallow: /admin/ would not.
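To illustrate the prefix matching (the example URLs are hypothetical):

```text
Disallow: /admin/
# Blocks:         /admin/   /admin/users
# Does NOT block: /admin    /administration

Disallow: /admin
# Blocks: /admin   /admin/   /admin/users   /administration   /admin-panel
```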