Robots.txt Generator
Create a custom robots.txt file for your website. Control which bots can crawl your site, block AI scrapers, set crawl delays, and add your sitemap URL, all without writing any code.
What is a robots.txt File?
The robots.txt file tells web crawlers which pages or sections of your site they are allowed or not allowed to access. It's the first file bots check when visiting your domain. Properly configured, it improves crawl efficiency and protects private areas from indexing.
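For illustration, a minimal robots.txt might look like this (the `/private/` path is a placeholder for whatever you want to keep out of crawlers' reach):

```
# Apply to all crawlers
User-agent: *
# Keep bots out of this directory; everything else stays crawlable
Disallow: /private/
```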
What Can You Control?
- Block specific bots — Googlebot, Bingbot, AI scrapers and more
- Disallow paths — Admin pages, login, checkout, temp folders
- Crawl delay — Control crawl speed to protect server resources
- Sitemap directive — Point all bots to your XML sitemap
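A file using all four of these controls might look like the sketch below. GPTBot is a real crawler name; the paths and sitemap URL are placeholders you would replace with your own:

```
# Block one AI scraper entirely
User-agent: GPTBot
Disallow: /

# Rules for all other bots
User-agent: *
Disallow: /admin/
Disallow: /checkout/
Crawl-delay: 10

# Point compliant bots to the XML sitemap
Sitemap: https://example.com/sitemap.xml
```

Rules are grouped by `User-agent`: a bot follows the most specific group that matches it, so GPTBot above uses only its own block.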
Frequently Asked Questions
What is robots.txt?
Robots.txt is a plain text file placed at the root of your website (e.g. https://example.com/robots.txt) that instructs web crawlers which pages or sections of your site they may access. It's the first file well-behaved bots check when visiting your site.
Does Disallow remove a page from Google search results?
Disallowing a URL in robots.txt prevents Googlebot from crawling it, but does NOT prevent the URL from appearing in search results if external links point to it. To fully remove a page from Google, use the noindex meta tag, and make sure the page is not blocked in robots.txt, since Googlebot must be able to crawl the page to see the directive.
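To keep a page out of search results, the noindex directive goes in the page's HTML rather than in robots.txt:

```
<!-- In the <head> of the page you want removed from search results -->
<meta name="robots" content="noindex">
```

The same directive can also be sent as an `X-Robots-Tag: noindex` HTTP response header, which is useful for non-HTML files like PDFs.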
What does Crawl-delay do?
Crawl-delay tells search engine bots to wait a specified number of seconds between requests to your server. It is useful on slow shared hosting. Note: Google ignores Crawl-delay; Googlebot adjusts its crawl rate automatically based on how your server responds.
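For example, bots that honor the directive (such as Bingbot) interpret the value as seconds between requests:

```
# Ask Bingbot to wait 10 seconds between requests
User-agent: Bingbot
Crawl-delay: 10
```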
Should I block AI training bots?
Blocking AI training bots (GPTBot, ClaudeBot, CCBot) prevents your content from being used to train AI models. This does not affect SEO or Google indexing. Keep in mind that robots.txt is advisory: it only stops bots that choose to honor it. Adding these blocks is a personal or business decision based on your content policy.
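The three bots named above can each be opted out with their own rule group:

```
# Opt out of AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /
```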
What does the Sitemap directive do?
The Sitemap: directive in robots.txt tells all compliant bots where to find your XML sitemap. This is a quick way to ensure all search engines discover your sitemap without submitting it to each one manually.
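The directive is a single line and can appear anywhere in the file, independent of any `User-agent` group (the URL below is a placeholder; it must be the full absolute URL of your sitemap):

```
Sitemap: https://example.com/sitemap.xml
```

You can list multiple Sitemap: lines if your site has more than one sitemap.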