Facebook Clarifies Policy On Site Scrapers As Robots.txt Gets Updated

Want to scrape Facebook’s site for content? You may want to reconsider how you do so, as Facebook has updated its robots.txt file to be a bit more restrictive. If you aren’t familiar with robots.txt, it’s a plain-text file at a site’s root that tells crawlers which parts of the site they may index. Previously, Facebook’s file simply blocked certain pages from being indexed by any crawler. Now Facebook has become more explicit within the file, effectively whitelisting named crawlers, including Baidu, Google, MSN, Naver, Slurp (Yahoo), Yandex, and a handful of other search engines, and shutting out everyone else.
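To see how a whitelist-style robots.txt behaves in practice, here is a minimal Python sketch using the standard library’s urllib.robotparser. The user agents, paths, and rules below are illustrative assumptions for demonstration, not the contents of Facebook’s actual file.

    import urllib.robotparser

    # Illustrative whitelist-style robots.txt, loosely modeled on the
    # approach described above (NOT Facebook's actual file): named
    # crawlers get specific rules, and a catch-all blocks everyone else.
    SAMPLE_ROBOTS_TXT = """\
    User-agent: Googlebot
    Disallow: /ajax/
    Disallow: /album.php

    User-agent: *
    Disallow: /
    """

    rp = urllib.robotparser.RobotFileParser()
    rp.parse(SAMPLE_ROBOTS_TXT.splitlines())

    # A whitelisted crawler may fetch most pages...
    print(rp.can_fetch("Googlebot", "http://www.facebook.com/somepage"))      # True
    # ...but any unlisted bot falls through to "User-agent: *" and is
    # blocked site-wide.
    print(rp.can_fetch("MyScraperBot", "http://www.facebook.com/somepage"))   # False

In other words, under a file structured like this, any scraper that identifies itself with an unlisted user agent is disallowed from the entire site, which is what makes the updated file more restrictive than a simple per-page block list.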
