Facebook Is Making Its Biggest Play to Improve Brand Safety, but Is It Enough to Gain Marketers’ Trust?

New tools promise transparency and set guidelines for creators

For the past year, Facebook has been dogged by a stream of concerns from marketers about measurement and transparency issues plaguing the platform’s ad business, while other brand safety questions swirl. Meanwhile, two incidents last week—a report that Facebook claims to reach more Americans than U.S. census data shows and news that fake Russian accounts purchased $100,000 of ads between June 2015 and May 2017—did little to ease marketers’ lingering trust issues with the platform.

Now, Facebook is rolling out a few tools that it hopes will make advertisers more comfortable with, and clearer about, where their ads are running. As Adweek reported last week ahead of this week’s Dmexco conference in Cologne, Germany, the company has new brand safety tools and a set of guidelines detailing which users—namely publishers and creators—can make money off content posted to the platform as part of its revenue-sharing program. The moves come at a critical time, when advertisers are not only demanding more insight into how the so-called walled garden operates but also voicing broader concerns about controlling how digital ads are served and measured.

“This is an area where you’re going to see us make ongoing progress, and ultimately we care deeply about the health of the ecosystem on our platform—that includes publishers, our consumers that use our products and advertisers,” said Carolyn Everson, Facebook’s vp of global marketing solutions. “We want to ensure that advertisers feel confident in their investment on our platform, and brand safety and what content ads are running against has been an area of concern.”

Money-making posts

Brand safety has been top of mind since hundreds of advertisers yanked or froze their YouTube ads after discovering they were running next to objectionable content that promoted racism or terrorism. In response, YouTube began requiring that channels amass 10,000 views before they can be ad-supported. Facebook’s new guidelines are meant to address similar concerns about the context in which ads run, though they are not as tied to hard numbers as YouTube’s program.

Facebook’s guidelines address its revenue-sharing program, which pays creators in exchange for posting in-stream videos and fast-loading Instant Articles pages. Last month, the company launched its long-anticipated foray into video, Facebook Watch, and has also tested placing ad breaks within videos, giving creators 55 percent of ad revenue while Facebook keeps the remaining 45 percent.

“If you think about the way that YouTube did it with requiring a sizable community [for advertisers]—this brings it to that level of creating the credibility that I would think of similarly with how [Facebook] used to certify Pages,” said Jessica Richards, evp and managing director of Havas Media’s Socialyse. “It’s almost a certification for the types of content that’s being created but then applied with an additional measure for them to vet the content before it’s going up.”

The new guidelines detail which creators are eligible to participate and what content is appropriate to be supported by advertising. In addition to complying with the company’s terms and policies, “creators and publishers must have an authentic, established presence on Facebook—they are who they represent themselves to be and have had a profile or Page on Facebook over a sufficient period of time,” the new guidelines read. According to Facebook, a sufficient period of time means one month.

Facebook has tested pre-campaign tools, which show advertisers where their ads may appear, with a handful of agencies including WPP-owned GroupM. Facebook has provided GroupM with a list of 2,700 Pages deemed brand safe for in-stream video ads, including big-name publishers like BuzzFeed, Business Insider, Billboard magazine and MLB Advanced Media. From there, GroupM can choose to block ads from appearing on any of those Pages.

“Most of them you can tick down the list and say, ‘OK, usually we would buy content from folks like this,’” said Joe Barone, managing partner of brand safety Americas at GroupM. “The tough job relative to blacklisting is that it’s ongoing—nothing is ever static. That list of 2,700 content producers is certainly not intended to be static.”

On top of that list, GroupM maintains its own list of 250,000 domains that either pirate content from legitimate sources or run counterfeit websites, and the company is looking for a third-party tech company to find any overlap between the two lists to better evaluate brand safety.

“We have block lists that are mostly specific to piracy and that list is about 250,000 domains and [the] domains don’t match up directly to Facebook Pages, so we’re looking for a third-party partner to do the match,” Barone said. “You can imagine that’s a big spreadsheet to maintain.”
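The matching problem Barone describes is essentially a set intersection between two differently keyed lists: a blocklist of domains and a roster of Facebook Pages. Here is a minimal sketch of the idea in Python, using hypothetical inputs (neither GroupM’s list format nor a Page-to-website mapping is public, so the field names are invented for illustration):

```python
from urllib.parse import urlparse

def normalize_domain(url_or_domain: str) -> str:
    """Reduce a URL or bare domain to a comparable form."""
    host = urlparse(url_or_domain).netloc or url_or_domain
    host = host.lower().strip().rstrip(".")
    if host.startswith("www."):
        host = host[4:]
    return host

def flag_pages(blocklist_domains, page_websites):
    """Return IDs of Pages whose declared website appears on the blocklist.

    blocklist_domains: iterable of domains (e.g., pirate/counterfeit sites)
    page_websites: dict mapping a Page ID to the website URL on its profile
    (a hypothetical stand-in for whatever metadata a matching partner gets)
    """
    blocked = {normalize_domain(d) for d in blocklist_domains}
    return {
        page_id
        for page_id, site in page_websites.items()
        if normalize_domain(site) in blocked
    }

# Hypothetical usage:
blocklist = ["pirate-streams.example", "http://www.counterfeit-goods.example"]
pages = {
    "page:1001": "https://buzzfeed.com",
    "page:1002": "http://pirate-streams.example/live",
}
print(flag_pages(blocklist, pages))  # {'page:1002'}
```

The normalization step hints at why GroupM wants a partner for this: as Barone notes, domains don’t map directly to Facebook Pages, so doing the match reliably across 250,000 domains is its own project.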

In one example, to qualify for ad breaks, creators and publishers must have 2,000 or more followers (which represents a “significant follower base,” per Facebook’s wording) and a live video that recently reached 300 or more concurrent viewers, according to the social network. Facebook didn’t detail qualifications for other revenue-generating features but said that the qualifications “will likely vary.”
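Taken at face value, the ad-break bar is a pair of simple thresholds. A hypothetical sketch (Facebook exposes no such function; the signature is invented to make the stated criteria concrete):

```python
def eligible_for_ad_breaks(followers: int, peak_concurrent_viewers: int) -> bool:
    """Facebook's stated ad-break bar: a "significant follower base" of
    2,000+ followers and a recent live video that reached 300+ concurrent
    viewers. The numbers come from Facebook's guidelines; the check itself
    is an illustration, not a Facebook API."""
    return followers >= 2000 and peak_concurrent_viewers >= 300
```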

GroupM’s Barone noted that 2,000 followers is a minimum requirement and should only be one factor that brands use to determine whether a content creator is legitimate.

“In and of itself, a minimum threshold is not enough of a standard because you could have content that is either offensive to large groups of people or not appropriate to support a particular brand that has 2,000 followers,” he said. “It’s one element that starts to get to the seriousness of the effort that’s being put forward by the content producer but certainly it’s not something that we would rely on in a vacuum.”

Sensitive content

While brands had been running ads on YouTube for years, many were not aware of brand safety issues before specific examples started popping up in the press. Media buyers’ attention is now turning to Facebook for similar reasons.

“Brand safety [on Facebook] hasn’t been a problem because people don’t really know where their ads are running,” explained Eli Chapman, vp and managing director of connections planning at R/GA. “From what we saw with YouTube, as soon as [advertisers] become aware that there’s more risk, they’re going to react strongly.”

In response to the ad buyers’ concerns, Facebook is upping the number of categories of “sensitive” content and will automatically remove certain types of content from its ad-supported inventory.

In April, Facebook opened up five categories of content that advertisers could opt out of showing ads against: tragedy and conflict, mature, debatable social issues, gambling and dating.

Those categories have been expanded to nine types of content, and ads will now be automatically removed from them, meaning that advertisers no longer need to manually exclude them from ad buys. Content deemed ‘mature’ has been broken out into more specific categories including violent, adult, explicit, inappropriate, prohibited activity, drugs and alcohol, and misappropriation of children’s characters. The categories apply to in-stream videos, Instant Articles and Facebook Audience Network (or FAN), which places ads on websites and apps outside of the social network. In May, Facebook announced that it was hiring 3,000 people to review content after videos containing suicide and murder spread on the platform.
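Mechanically, the change turns these categories from an advertiser-managed exclusion list into a default-on filter applied before the buy. A rough sketch of that distinction, with category names paraphrased from the list above (Facebook has not published its internal taxonomy or tooling):

```python
# The nine content types named in the article, paraphrased into slugs;
# this is an illustration, not Facebook's actual taxonomy.
AUTO_EXCLUDED = {
    "tragedy_and_conflict", "debatable_social_issues", "gambling", "dating",
    "violent", "adult_or_explicit", "prohibited_activity",
    "drugs_and_alcohol", "misappropriated_childrens_characters",
}

def sellable_placements(placements, advertiser_opt_outs=frozenset()):
    """Keep placements whose content categories hit neither the
    platform-wide auto-exclusions nor the advertiser's own opt-outs."""
    blocked = AUTO_EXCLUDED | set(advertiser_opt_outs)
    return [p for p in placements if not (p["categories"] & blocked)]

# Before: advertisers had to opt out of categories like "gambling" themselves.
# Now the AUTO_EXCLUDED set applies whether or not they tick the boxes.
inventory = [
    {"id": "vid1", "categories": {"news"}},
    {"id": "vid2", "categories": {"gambling"}},
]
print(sellable_placements(inventory))  # [{'id': 'vid1', 'categories': {'news'}}]
```

The practical difference for buyers is the default: exclusion no longer depends on each advertiser remembering to configure it.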

“If there’s severe content in those nine standards, we know advertisers are not going to want to run in that, so we just want to take the precaution of opting them out automatically,” Everson said. “In general, it makes the ecosystem safer and healthier, it makes it more transparent and accountable, and those are really important things as we continue to try to lead the industry [on] how to make sure that the digital ecosystem is as safe and accountable as possible.”

Accounts that upload content violating the guidelines, post false news or misinformation or share clickbait can be declared ineligible to receive money for their content.

More granular stats

In June, Facebook began giving media buyers access to publisher lists that show which sites, apps, Instant Articles and videos their ads could appear in. From there, advertisers can create block lists to remove specific publishers from their ad buys. That program officially rolls out to all media buyers next week.

More interesting, though, is a new post-campaign tool that shows advertisers where their ads did in fact run, allowing them to see whether their campaigns actually ran alongside controversial content. Facebook itself will provide these stats in the coming months, meaning they will not be vetted or validated by a third-party company, which leaves some agencies uneasy.

“Facebook is so huge that advertisers feel the need to run on the platform, so any tool that can give a bit more feeling of control will be well-received,” explained Jessica McGlory, associate director and paid social lead at Engine Media. “It is still a Facebook product, though, which could make some advertisers wary.”

Or, as GroupM’s Barone described the post-campaign reporting tool: “It will be Facebook telling us that this is where your ads ran. It’s not like an ad server report, a Moat report or a DoubleVerify report, where we have independent third-party corroboration.”

That said, Evan Walker, GSD&M’s media supervisor of social, expects that in the coming months brands will be able to plug into third-party brand safety vendors, as they already do with third-party viewability partners.

“Facebook should be treated like a network buy, as surrounding content is never predictable,” he said. “Facebook has taken positive steps to integrate with third-party viewability partners, so I expect advertisers will soon be able to leverage brand-safety vendors as well.”

Lingering trust

While Facebook is forking over more data than advertisers have previously been privy to, such data has already been available through ad-tech and digital companies for several years, said R/GA’s Chapman. “It’s nice to see Facebook starting to catch up with where transparency was a few years ago,” he said.

Kyle Bunch, head of social strategy at R/GA, added that Facebook has a responsibility to provide more transparency than other digital players because of its massive clout and its missteps.

“They were really good at scale and audience verification and having so much data at their disposal,” he said. “But now I start to call into question scale because I don’t trust the numbers [and] I start to call into question the audience identification—are you showing me the right audience and setting me up for a meaningful engagement? They have a real reason to embrace a much greater level of transparency. I hope we’ll start to see more.”

Plus, Bunch said that underlying concerns about Facebook’s data will make the platform’s brand safety pitch tough.

“When you’re building from a place where you’ve had struggles as far as accuracy and reporting, how confident can you feel that the brand safety report that you’re getting is accurate?” Bunch said. “There’s trust to be rebuilt and the question is how far Facebook is willing to go or how big they see the problem and what they’ll do.”