Your favorite social networks, online games and apps, including those used by your children, are home to an alarming and growing number of demeaning, destructive and even life-threatening behaviors. While some networks are investing in human and technological resources to impede these behaviors, most aren’t doing nearly enough. Worse, brands partly subsidize this activity. But at what human, platform and brand cost?
Even though brands don’t intentionally set out to underwrite such behavior, their ads often appear alongside questionable and even dangerous content. Brand safety company Cheq found that ads appearing near negative content make consumers 2.8 times less willing to associate with the brand. In February 2019, for example, Disney and Nestle reportedly pulled advertising from YouTube after discovering that their ads were shown next to videos used to facilitate a “soft-core pedophilia ring.” According to an April 2019 AdColony report, nearly half of consumers said their view of an advertiser is negatively affected when its ad appears alongside undesirable content.
Embrace governance and infrastructure
In the physical world, event spaces, buildings, bars and restaurants, stadiums, parks and schools have boundaries, and the owners and operators of those spaces have an obligation to reasonably protect people on their premises. If someone is injured, they must prove the injury was a reasonably foreseeable result of the operator’s actions or inactions. Social networks should be held to similar standards.
In January 2019, the House of Commons’ Science and Technology Committee published a report arguing for legislation establishing that social media companies have a “duty of care” toward their users who are under 18: “If that person does not take care, and someone comes to a harm identified in the relevant regime as a result, there are legal consequences, primarily through a regulatory scheme but also with the option of personal legal redress.”
Become the hero
While governments and platforms explore solutions at a sluggish pace, the conversation surrounding duty should expand to include the brands that underwrite digital discourse and engagement.
At a 2020 CES Mega Trends event hosted by Brand Innovators, Mastercard CMO Raja Rajamannar shared how the company embraced new marketing roles in brand safety and risk management. Rajamannar, who famously froze Mastercard’s spending with YouTube in 2017 following a massive brand safety scandal, is aiming beyond just protecting the brand’s reputation against negative recall. He believes it’s about societal safety, and that brands must unite to pressure platforms to take action as well.
Rajamannar is also the newest president of the World Federation of Advertisers. His mandate for the WFA is “marketing for a better society, marketers for a more equal society and marketers for a better web.”
He also believes social media channels bear the accountability and responsibility for societal safety, and through his work at the WFA and Mastercard, he’s asking other marketing leaders to join him.
Design safely for a more lucrative platform
Carlos Figueiredo of Two Hat, a company whose AI-powered solution keeps online communities safe by identifying and removing harmful user-generated content in real time, is a big supporter of efforts like those Rajamannar is leading.
Figueiredo suggests brands and platforms become purveyors of a “safety by design” approach to change the economy of online communities. Safety by design changes the way network operators think about platform redesign and enhancements and also the way they build new products. With user and brand safety in mind, everyone wins.
Today, however, platforms are incentivized by growth, not by safety by design.
“Brands that wish to be at the vanguard of protecting children … should consider adopting the same principle when planning the user’s experience with their brand,” Figueiredo writes.
Perhaps brand leaders such as Mastercard, the WFA and others will finally drive the transformation needed to deliver happier, healthier and more productive user, platform and brand experiences.