Today is the 20th anniversary of the robots.txt protocol being
available for webmasters to block search engines from crawling their
pages.
The robots.txt protocol was created by Martijn Koster in 1994, while he
was working at Nexor, after crawlers kept hitting his sites too hard.
All the major search engines of the day, including WebCrawler, Lycos
and AltaVista, quickly adopted it; and even 20 years later, all major
search engines continue to support and obey it.
Brian Ussery posted on his blog
about the 20-year anniversary and documented the most common robots.txt
mistakes he has seen over his SEO tenure. It is well worth scanning
through, because a robots.txt file, if implemented incorrectly, can be
severely detrimental to your rankings and search marketing success.
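To illustrate how small the margin for error is (this example is a generic sketch, not taken from Ussery's list), a single stray slash in a Disallow line can block an entire site instead of a single directory:

  # Mistake: blocks every crawler from the ENTIRE site
  User-agent: *
  Disallow: /

  # What was probably intended: block only one directory
  User-agent: *
  Disallow: /private/

Mistakes like this often linger after a staging site goes live, which is why auditing the robots.txt file is usually one of the first steps in any SEO review.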