A certain chicken sandwich touched off the Great Twitter War of summer 2019. Pair that energy with the criticism some influencers garner by posting sponsored content—or sponcon—to their social feeds, and it’s not difficult to understand why brands would want negative reactions and spam comments removed from their paid advertisements.
That’s the idea behind The Mod, a comment-management tool created by Colorado-based brand safety firm Respondology, and its next iteration, The Ad Mod, designed specifically for paid social advertising.
Horizon Media is the first partner to sign on to The Mod, which helped the agency manage the launch of the Popeyes chicken sandwich by monitoring comments across the YouTube, Instagram and Facebook accounts of the pro athletes hired to promote the new menu item.
For example, an Instagram post by Chicago White Sox shortstop Tim Anderson has plenty of comments about “bae” and heart-eye emojis, but no criticism of the post’s content: him posing with the sandwich.
“Realistically, brand safety is something that’s not something we will ever completely solve. We’re always looking for the next solution to improve it,” said Rafe Oakes, manager of social influence at Horizon Media. “With every single campaign we do, we have to make sure we’re creating content in an environment that is brand safe.”
The campaign, running with The Mod’s technology, saw a 10.8% engagement rate, according to Horizon Media.
The Ad Mod is currently in beta with Horizon. Once fully launched, brands will be able to access a dashboard and see which comments are being flagged by Respondology’s mix of AI and human monitoring. The tool operates across social media platforms using a proprietary API.
“These companies don’t want to be in the game of what’s censored on their platforms,” said Erik Swain, president of Respondology and COO of Boulder Heavy Industries. “They see us as a great benefit because we’re taking a problem for them [that] they don’t want to be the one to solve.”
Representatives for Google, which owns YouTube, and Facebook (which also owns Instagram), did not immediately return requests for comment.
The Ad Mod first screens comments with machine learning, which automatically removes remarks that include certain keywords predetermined by the brand, as well as racist, sexual and violent remarks, Swain said.
Comments that require a more nuanced read are then flagged for the company’s group of U.S.-based human reviewers, who screen them individually to determine whether they’re appropriate. These screeners, who usually work as part-time contractors, use an app that works much like Tinder, swiping left or right to mark a comment appropriate or not.
Once removed, the comments still appear in the feeds of the users who posted them, though they’re hidden from everyone else. That way, a user won’t notice their comment was removed and continue posting inappropriate comments or otherwise escalate the situation.
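Respondology hasn’t published implementation details, but the workflow described above—a machine pass that auto-removes comments matching brand-defined keywords, a human-review queue for ambiguous cases, and removed comments that stay visible only to their author—can be sketched roughly. Everything here (function names, keyword lists, the comment structure) is hypothetical and purely illustrative.

```python
import re

# Illustrative keyword lists; in practice these would be brand-defined
# and far larger, alongside trained classifiers for racist, sexual and
# violent language.
BLOCKED_KEYWORDS = {"spamword", "scamlink"}     # auto-remove on match
AMBIGUOUS_MARKERS = {"trash", "gross"}          # route to a human reviewer

def screen_comment(text: str) -> str:
    """First-pass machine screen: return 'remove', 'review' or 'keep'."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    if words & BLOCKED_KEYWORDS:
        return "remove"    # removed automatically, no human needed
    if words & AMBIGUOUS_MARKERS:
        return "review"    # flagged for a swipe-style human decision
    return "keep"

def visible_to(comment: dict, viewer_id: str) -> bool:
    """Shadow removal: a removed comment stays visible only to its author."""
    if not comment["removed"]:
        return True
    return viewer_id == comment["author_id"]
```

For example, `screen_comment("love this!")` returns `"keep"`, while a comment containing a blocked keyword returns `"remove"`; a removed comment then passes `visible_to` only for its own author, so the poster sees nothing change.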
“It’s technology for good,” Swain said. “It’s the natural next step.”