Facebook has identified and removed more than two dozen accounts involved in “inauthentic behavior,” or what can be described as part of a coordinated misinformation campaign, the company said on Tuesday.
According to Facebook, the first of the 32 accounts—eight Facebook pages, 17 Facebook profiles and seven Instagram accounts—was found two weeks ago, with creation dates ranging from March 2017 to May 2018. More than 290,000 user accounts followed at least one of the pages, which carried monikers including “Aztlan Warriors,” “Black Elevation,” “Mindful Being” and “Resisters.” According to Facebook, so far none of the accounts identified has directly named a candidate or party.
“Security is an arms race, and it’s never done,” Facebook COO Sheryl Sandberg said during a call with reporters on Tuesday afternoon.
The company said the “bad actors” were more careful to hide their identities than the Russia-linked accounts that were active before and during the 2016 U.S. presidential election. However, U.S. Sen. Mark R. Warner, D-Va.—who serves as vice chairman of the Senate Intelligence Committee, and whose office released a white paper on Monday that offers regulation possibilities for Facebook and other social media platforms—issued a statement today blaming the Russian government.
“Today’s disclosure is further evidence that the Kremlin continues to exploit platforms like Facebook to sow division and spread disinformation, and I am glad that Facebook is taking some steps to pinpoint and address this activity,” he said in a statement. “I also expect Facebook, along with other platform companies, will continue to identify Russian troll activity and to work with Congress on updating our laws to better protect our democracy in the future.”
Today’s “bad actors,” according to Facebook, used VPNs to maintain secrecy while using third-party advertisers to buy ads on their behalf.
According to Facebook, $11,000 was spent on 150 ads purchased on Facebook and Instagram between April 2017 and June 2018. Meanwhile, the actors created more than 9,500 organic posts along with 30 events, each of which drew between 1,400 and 4,700 users interested in attending.
Not all of the administrators on the events were fake. According to Facebook, one event dubbed “No Unite the Right 2—DC” was scheduled for Aug. 10-12 in Washington, D.C. and had five real administrators. Facebook said it decided to reveal the accounts today partially because of that event. The company said it also plans to notify the event’s administrators, along with anyone who has interacted with content from the inauthentic accounts. (About 2,600 users had indicated interest in the event, organized by a page called Resisters, with another 600 indicating they’d attend.)
“Previous events we can assess on FB, but we can’t assess what happens in the real world—the external world,” said Nathaniel Gleicher, Facebook’s head of cybersecurity policy.
To identify cyber actors, Facebook created an attribution model focused on four categories, which included identifying political motivations and evidence of coordination, along with the types of tools and techniques used and other forensic information such as geographic location.
According to Facebook chief security officer Alex Stamos, Facebook has worked closely with U.S. government officials to share information. He said it’s up to the government, not Facebook, to attribute the bad actors to a specific group when the evidence is available.
“We don’t know for sure who is behind the activity we found, which is why we have not named a specific group or country,” he said on a press call on Tuesday.
Facebook, Twitter and Google have spent more than a year improving their own security measures for political advertising and organic content while also meeting with members of Congress to explain how Russian operatives and others might have manipulated voters during the 2016 campaign.
Theresa Payton, who was chief information officer for the George W. Bush administration and now serves as CEO of the cybersecurity firm Fortalice Solutions, said the social networks should consider creating a “see something, say something” tool that lets users report suspicious activity—much like the motto New York adopted after the Sept. 11 terrorist attacks to keep residents and visitors alert to threats in the subway system.
However, any sort of user-led tool would also run the risk of censoring real voices. Payton suggested taking an “automated, but human-curated” approach that would let the social networks create a tool for automated reporting that would then be monitored and guided by engineers and others to make sure that threats reported are indeed threats.
“If they are too aggressive with automated shutdowns of accounts, or they don’t have the right checks and balances, they could accidentally censor real users,” she said. “And that is also a problem.”
Payton said that members of Congress and the Trump administration need to take digital threats seriously. She recalled her time in the White House during the days of Myspace, when North Korea and Iran were exercising their own digital capabilities by attempting to hack into various government systems. (She said some people at the time would laugh off the threat posed by North Korea, even as members of the intelligence community suggested otherwise.)
“It’s going to require us basically to listen to the good guys when they say that red lights are blinking,” she said.