Which of the following are requirements in a robots.txt file?

  1. Disallow: [URL string not to be crawled]
  2. Allow: [URL string to be crawled]
  3. Sitemap: [sitemap URL]
  4. User-agent: [user-agent name]  (correct answer)

The robots.txt file, which implements the robots exclusion protocol (also called the robots exclusion standard), is a text file that tells web robots (most commonly search engine crawlers) which pages on your site they should not crawl. In other words, it tells crawlers which pages or files they may request from your site and which they may not. It is used mainly to keep a site from being overloaded with requests; it is not a tool for keeping a web page out of Google. Most websites do not need a robots.txt file, because Google will normally locate and index all of a site's relevant pages on its own, and will automatically exclude pages that are unimportant or duplicates of other pages from indexing.
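To illustrate the directives listed above, here is a minimal robots.txt sketch. The paths and the sitemap URL are hypothetical placeholders, not values from any real site:

    User-agent: *
    Disallow: /private/
    Allow: /private/public-page.html
    Sitemap: https://www.example.com/sitemap.xml

The User-agent line opens a group of rules and states which crawler they apply to (`*` matches all crawlers), which is why it is the required line: Disallow and Allow rules have no effect unless they follow a User-agent line, while Sitemap is an optional, standalone directive.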
