Facebook Provides Its Latest Update on Efforts to Safeguard the 2020 U.S. Presidential Election

The social network added a tool to protect the accounts of elected officials, candidates and their staff

Content rated false by third-party fact-checkers will be more prominently labeled on Facebook and Instagram

Facebook CEO Mark Zuckerberg said during a press call Monday, held to update the social network’s efforts to protect the 2020 U.S. presidential election from interference via its platform, that some 35,000 people are now working on security, with an overall budget in the billions of dollars.

“We have a long way to go before Election Day,” he said during the call. “We have a big responsibility to secure our platforms. Personally, this is one of my top priorities for the company. Elections have changed. Facebook has, too. After 2016, there’s just much broader awareness that this is an issue.”

Zuckerberg, vice president of integrity Guy Rosen, head of cybersecurity policy Nathaniel Gleicher, director of product management Rob Leathern and public policy director for global elections Katie Harbath detailed several product and policy updates during the call and in a Newsroom post.

The social network rolled out Facebook Protect as a way to further secure the accounts of elected officials, candidates, their staff and other people who may be frequent targets of hacking or other attacks by foreign adversaries.

Administrators of pages fitting that description can enroll and invite other members of their organizations to do so, as well.

Participants in Facebook Protect must enable two-factor authentication, and their accounts will be monitored for suspicious activity that could indicate hacking attempts, such as login attempts from unusual locations or unverified devices.
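Facebook did not detail the monitoring logic behind Facebook Protect, but the kind of check described, flagging logins from unverified devices or unusual locations, can be sketched in a few lines. The data model and rules below are hypothetical illustrations, not Facebook’s implementation:

```python
# Toy illustration of the login-anomaly check described above.
# Facebook has not published its detection logic; the profile fields
# and rules here (known_device_ids, usual_countries) are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AccountProfile:
    known_device_ids: set = field(default_factory=set)
    usual_countries: set = field(default_factory=set)

def is_suspicious_login(profile: AccountProfile, device_id: str, country: str) -> bool:
    """Flag a login from an unverified device or an unusual location."""
    return (device_id not in profile.known_device_ids
            or country not in profile.usual_countries)

# Example: a campaign staffer who normally signs in from the U.S. on a known laptop.
profile = AccountProfile(known_device_ids={"laptop-01"}, usual_countries={"US"})
print(is_suspicious_login(profile, "laptop-01", "US"))  # False: nothing unusual
print(is_suspicious_login(profile, "phone-99", "RU"))   # True: new device, new country
```

Presumably, signals like these feed a review process rather than trigger automatic lockouts, consistent with Facebook’s description of reviewing affiliated accounts after an attack is found.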

Facebook said that if an attack is discovered against one account, all other accounts affiliated with that organization will be reviewed and protected, as well.


More information is being added to provide transparency over the people or organizations behind pages.

A new tab, “Organizations That Manage This Page,” will list information such as the confirmed page owner’s legal name, verified city, phone number and website.

The tab will initially appear only on pages with large U.S. audiences that have gone through the social network’s business verification process, as well as pages that have completed the authorization process to run ads about social issues, elections or politics in the U.S. Confirmed page owners must be displayed on those pages starting in January.

Facebook said in its Newsroom post, “If we find that a page is concealing its ownership in order to mislead people, we will require it to successfully complete the verification process and show more information in order for the page to stay up.”

Starting in November, Facebook will label media outlets that are entirely or partially under the editorial control of their governments as state-controlled media, with those labels appearing both on the media outlets’ pages and in Facebook’s Ad Library.

The social network said it developed its own definition and standards for state-controlled media organizations with input from over 40 experts globally who specialize in media, governance, human rights and development, including: Reporters Without Borders; the Center for International Media Assistance; the European Journalism Centre; Oxford Internet Institute’s Project on Computational Propaganda; the Center for Media, Data and Society at the Central European University; the Council of Europe; and UNESCO.

The company also emphasized the difference between state-controlled media and public media, defining the latter as “any entity that is publicly financed, retains a public service mission and can demonstrate its independent editorial control.”

Facebook said it will update its list of state-controlled media on a rolling basis starting next month, with plans to expand its labeling to specific posts and to Instagram in 2020, adding, “We will hold these pages to a higher standard of transparency because they combine the opinion-making influence of a media organization with the strategic backing of a state.”

Over the next month, content rated false or partially false by third-party fact-checkers will be labeled more prominently on both Facebook and Instagram, enabling people to make better decisions about what to read, trust and share.

Labels will be placed atop photos and videos in those posts, as well as on top of Stories content on Instagram, and they will link to the fact-checker’s assessment.

A new pop-up is coming to Instagram, appearing when people attempt to share posts on the Facebook-owned photo- and video-sharing network that contain content debunked by third-party fact-checkers.


And when Facebook receives signals that a piece of content is false, that content’s distribution is reduced pending investigation by a third-party fact-checker.

Facebook implemented a policy banning paid advertising that suggests voting is useless or meaningless, or that implores people not to vote, and the social network said its machine learning systems have become more effective at proactively detecting and removing voter suppression content.

The social network said in its Newsroom post, “We are also continuing to expand and develop our partnerships to provide expertise on trends in voter suppression and intimidation, as well as early detection of violating content. This includes working directly with secretaries of state and election directors to address localized voter suppression that may only be occurring in a single state or district. This work will be supported by our Elections Operations Center during both the primary and general elections.”

The social network introduced steps to enable journalists, lawmakers and researchers to better analyze political ads on its platform.

Leathern said during the call, “We are announcing the ability to programmatically download all of the ad creative and making it easier for people analyzing this to have scripts to help them do so. There will be unique IDs for each ad in the Ad Library API (application-programming interface) and Ad Library, and we will be updating daily.”

Facebook said it was updating Ad Library, Ad Library Report and the Ad Library API and adding “useful” API filters to provide programmatic access to download ad creatives, as well as a repository of frequently used API scripts. The company added that in November, it will begin testing a new database that would enable researchers to download the entire Ad Library, pull daily snapshots and track day-to-day changes.
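For researchers planning to use that programmatic access, a request against the Ad Library’s Graph API might look like the sketch below. It assumes the ads_archive endpoint and field names documented around this period, plus an access token obtained through Facebook’s identity-verification process; treat the specifics as illustrative rather than a definitive reference:

```python
# Minimal sketch of pulling political ad creatives from the Ad Library API.
# Assumes the Graph API "ads_archive" endpoint and field names documented
# at the time; the access token placeholder must be replaced with a real
# token from a verified developer account.
import requests

AD_LIBRARY_URL = "https://graph.facebook.com/v5.0/ads_archive"
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

params = {
    "access_token": ACCESS_TOKEN,
    "ad_type": "POLITICAL_AND_ISSUE_ADS",  # social issues, elections or politics
    "ad_reached_countries": "US",
    "search_terms": "election",
    # Each ad carries a unique ID, so results can be tracked across daily updates.
    "fields": "id,page_name,ad_creative_body,ad_delivery_start_time,spend",
    "limit": 100,
}

def fetch_all_ads():
    """Page through results, following the cursor Facebook returns."""
    url, query = AD_LIBRARY_URL, params
    while url:
        data = requests.get(url, params=query).json()
        for ad in data.get("data", []):
            yield ad
        # The "next" link already embeds the query string and cursor.
        url, query = data.get("paging", {}).get("next"), None

for ad in fetch_all_ads():
    print(ad["id"], ad.get("page_name"), ad.get("spend"))
```

The unique ad IDs Leathern mentioned are what make day-to-day tracking practical: running the same query daily and comparing on id shows which ads started, stopped or changed.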

It will also be clearer whether an ad ran on Facebook, Instagram, Messenger or mobile ad network Facebook Audience Network.

The social network’s new U.S. presidential candidate spend tracker will enable people to see how much candidates have spent on ads on its platform, and additional spend details are being added at the state and regional levels.

In the wake of another round of removals for coordinated inauthentic behavior, this time originating in Iran and Russia, Gleicher outlined changes to the social network’s policies on inauthentic behavior in a separate Newsroom post.

He said during the call, “We are updating our inauthentic behavior policy to clarify how we enforce against the spectrum of deceptive practices we see on our platform, foreign or domestic, state or non-state.”

Gleicher wrote in the Newsroom post that Facebook will continue to search for groups of accounts and pages that work together to mislead people about who they are and what they are doing, and all accounts, both authentic and inauthentic, as well as pages and groups involved in this activity, will be removed.

He added that if a particular organization is discovered to have been organized primarily to conduct manipulation campaigns, that organization will be permanently removed from Facebook’s platforms across the board.

In cases where governments resort to coordinated inauthentic behavior to either target their own citizens or to manipulate public debate in other countries, “we will apply the broadest enforcement measures, including the removal of every on-platform property connected to the operation itself and the people and organizations behind it,” Gleicher wrote. “We will also announce the removal of this activity at the time of enforcement.”

As part of its ongoing efforts to prevent the spread of misinformation by better equipping people to spot it on their own, Facebook will invest $2 million to support projects along those lines.

Facebook said in its Newsroom post, “These projects range from training programs to help ensure that the largest Instagram accounts have the resources they need to reduce the spread of misinformation, to expanding a pilot program that brings together senior citizens and high school students to learn about online safety and media literacy, to public events in local venues like bookstores, community centers and libraries in cities across the country. We’re also supporting a series of training events focused on critical thinking among first-time voters.”

The social network is also adding a new series of media literacy lessons to its Digital Literacy Library.

Finally, the social network updated its policy to reflect other types of inauthentic behavior on its platform, including spam and fake engagement.

Zuckerberg mentioned during the call that one of the conclusions of a report issued by the Senate Intelligence Committee last week is that maintaining the integrity of elections will require continued cooperation between the public and private sectors.

He concluded, in response to a reporter’s question: “Overall, I’m confident that we’re a lot more prepared (than in 2016). I also know for a fact that more nation states are more sophisticated in their attacks, and they’re going to continue doing this. This isn’t an area where we can take our eye off the ball, or where you ever fully solve the problem.”


David Cohen (david.cohen@adweek.com) is editor of Adweek's Social Pro Daily.
{"taxonomy":"","sortby":"","label":"","shouldShow":""}