The Global Alliance for Responsible Media revealed early Wednesday morning that Facebook, Twitter and YouTube agreed to adopt a common set of definitions covering hate speech and other harmful content, as well as to work together in monitoring industry efforts.
YouTube vice president of global solutions Debbie Weinstein said in an email, “Responsibility is our No. 1 priority, and we are committed to working with the industry to build a more sustainable and healthy digital ecosystem for everyone. We have been actively engaged with GARM since its inception to help develop industrywide standards for how to commonly address content that is not suitable for advertising. We’re excited to have reached this important milestone.”
The announcement Wednesday follows 15 months of “intensive talks” within GARM between major advertisers, agencies and key global platforms.
GARM identified four key areas for action:
- Common definitions for hateful content: GARM said these currently vary by platform, making it difficult for brand owners to make informed decisions on where their ads are placed and to hold platforms accountable. It has been working on crafting common definitions since last November and has added more depth and breadth on specific types of harmful content, such as hate speech, aggression and bullying.
- Reporting standards: Each platform has its own methodologies for measuring the occurrence of harmful content, and GARM said having a harmonized reporting framework was a key step toward ensuring the policies are enforced effectively. Metrics on issues such as advertiser safety, consumer safety and platform effectiveness in addressing harmful content have been agreed upon, and the system is slated to be rolled out in the second half of next year.
- Independent oversight: GARM said agencies, brands and platforms need an independent view on how harmful content is being categorized, eliminated and reported, and the goal is to have all major platforms at least in the auditing process, if not fully audited, by year-end.
- Advertising adjacency solutions: GARM stressed that brands must have the ability to see when their ads appear next to harmful or unsuitable content and to quickly take corrective measures when that occurs, adding that it expects platforms that don’t already have adjacency solutions in place to provide development roadmaps during the fourth quarter. Pinterest, Snap Inc. and TikTok joined Facebook, Twitter and YouTube in issuing firm commitments on this front.
WFA CEO Stephan Loerke said in a statement, “The issue of harmful content online has become one of the challenges of our generation. As funders of the online ecosystem, advertisers have a critical role to play in driving positive change, and we are pleased to have reached agreement with the platforms on an action plan and timeline in order to make the necessary improvements. A safer social media environment will provide huge benefits not just for advertisers and society, but also to the platforms themselves.”
Mastercard chief marketing officer and WFA president Raja Rajamannar added, “We are delighted that GARM has made such significant progress in such a short period of time. I know these discussions have not been easy, but these solutions, when implemented, will offer more choice and control for advertisers and their agencies by supporting content that aligns with their values.”
Unilever executive vice president of global media Luis Di Como said, “This is a significant milestone in the journey to rebuild trust online. Unilever has long championed a responsible and safe online environment through Unilever’s responsibility framework and, as founding members of GARM, we are encouraged by the acceleration and focus to come together as an industry and agree on these four key areas of action. The issues within the online ecosystem are complicated, and whilst change doesn’t happen overnight, today marks an important step in the right direction.”