The robots exclusion standard, also known as the robots exclusion protocol or robots.txt protocol, is a convention that websites use to communicate with web crawlers and other web robots.
When search engine spiders crawl a website, they typically begin by requesting the robots.txt file at the root of the domain. If the file exists, the crawler reads its directives to learn which directories and files are blocked from crawling. These block rules can be created with a robots.txt generator; in some ways, they are the opposite of a website's sitemap, which lists the pages a search engine should include when it crawls the site.
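As a rough illustration, a minimal robots.txt file might look like the sketch below. The directory name and sitemap URL are placeholders, not values from any particular site:

```
# Allow all crawlers, but block the /private/ directory.
User-agent: *
Disallow: /private/

# Point crawlers to the sitemap, which lists pages to include.
Sitemap: https://www.example.com/sitemap.xml
```

Here `User-agent: *` applies the rules to every crawler, `Disallow` marks the blocked path, and the `Sitemap` line plays the complementary role described above.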
Use our Robots.txt Generator to generate your website's robots.txt file.
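If you want to check how a crawler would interpret a generated file, one option is Python's standard-library robots.txt parser. This is only a sketch; the domain and path used here are placeholders:

```python
from urllib.robotparser import RobotFileParser

# Fetch and parse the site's robots.txt (placeholder URL).
parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()

# Ask whether any crawler ("*") may fetch a given URL under the file's rules.
print(parser.can_fetch("*", "https://www.example.com/private/page.html"))
```

A `False` result means the URL falls under a Disallow rule, which is the behavior a well-behaved crawler is expected to respect.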