Facebook Is Hiring 1,000 People to Review Ads and Monitor Targeting

10 million people in the U.S. saw Russia-linked ads

The social network plans to hire more people to monitor ads that might violate its policies.

Facebook is planning to hire 1,000 more employees dedicated to reviewing and removing malicious or fake ads on the platform, a company spokesperson confirmed this morning.

The news comes in light of Facebook’s discovery that a Russia-linked organization bought more than 3,000 ads on the platform, which the social network says it plans to hand over to Congress today for its investigation into whether foreign agents attempted to influence the 2016 presidential election.

While it was initially unclear how many users saw the ads, Facebook on Monday evening announced that around 10 million people in the U.S. saw at least one ad. Of the total ad impressions, 44 percent occurred before the election and 56 percent occurred after. The company said users never saw 25 percent of the Russia-linked ads because Facebook’s algorithm deemed them irrelevant.

According to Facebook, the additional employees will be added to the global ads review team. The company says it also plans to invest more in machine learning to better determine when to flag or remove ads, and to identify, before the ads run, targeting that might be used in ways that break Facebook’s policies. As part of these changes, ads that use certain types of targeting will be sent for manual review.

In addition to increasing staff and investment in machine learning, Facebook also plans to update its policies so that anyone buying ads related to U.S. federal elections will be required to provide “more thorough documentation” and to confirm the business or organization they represent. (The company also plans to work with industry and government leaders around the world to help share information about “bad actors.”)

Last month, Facebook revealed it had identified nearly 500 fake accounts that had spent more than $100,000 on ads focused on hot-button issues during the divisive campaign season.

The initiatives announced today in some ways echo the steps Facebook took this spring after the platform was accused of not doing enough to keep violent videos from spreading online. In April it announced plans to use artificial intelligence to prevent videos that violate the company’s policies from being seen. A few weeks later, it announced plans to hire another 3,000 people worldwide on its community operations team to review sensitive material such as violence, hate speech or child exploitation.

Facebook isn’t the only company working to address how Russian operations may have tried to influence voters. Last week, Twitter revealed it had found more than 200 Russia-linked accounts. However, members of the Senate and House Intelligence Committees briefed by Twitter execs last week said they were less than impressed by the thoroughness of the company’s internal investigation.

While last week’s meeting was closed to the public, the committee’s vice chair, U.S. Senator Mark Warner, D-Virginia, said last month that he would like the social networks to account for their findings in a more formal briefing. Other companies, such as Google, could also be asked to testify about their own operations.

UPDATE: This story has been updated to include how many people saw the ads on Facebook that were bought by the Russia-linked accounts.