Brand Safety Concerns Come to Twitter as Ads Run on Profiles Selling Illegal Drugs

More than 20 unnamed brands were affected

The company has announced plans to reconsider its service following controversies over misleading or intentionally false content.

The brand safety issues that plagued YouTube and Facebook in recent years have now made their way to Twitter.

The 4A’s Advertiser Protection Bureau (APB), formed in April as an industrywide effort to address such issues, was alerted to an incident last week that saw sponsored tweets running on Twitter profiles created to promote the illegal sale of narcotics like Oxycodone. In some cases, paid tweets also appeared under search results for terms or hashtags related to such drugs.

Twitter has been placing ads within individual profiles since 2015.

One marketer with knowledge of the situation said more than 20 brands were affected but declined to name them, adding that the drugs referenced by the hashtags included “everything you could imagine.”


A Twitter spokeswoman confirmed that the incident occurred but said it was quickly fixed once the company was informed.

“We recently determined that ads were being served on profiles that were selling restricted products, amounting to 450 impressions and $1.34 in spend,” the representative said in a statement. “Once we identified the issue, we immediately suspended the accounts in question and updated our systems. As we observe new behaviors attempting to get around the safeguards we have in place, we will continue to refine our tools to make Twitter a safe place for advertisers.”

“They should not be running ads in search results for illegal drugs,” the anonymous marketer said, adding that sponsored tweets appeared on “multiple” profiles that were “clearly [created to] make illegal drug sales.”

Adweek could not locate any of the offending placements this week. A quick search found ads from home-improvement retailer Lowe’s and a children’s hospital under misspellings of Oxycodone, but they were not adjacent to “unsafe” profiles or links promoting the sale of such substances.

Louis Jones, executive vice president of the 4A’s media and data practice, told Adweek that an employee at GroupM was the first to be alerted. That member shared the information across the Bureau so each agency could take steps to determine whether its own clients might be affected. “The notion is, if you see something, say something,” Jones said.

At least one marketer reported briefly pausing their company’s ad spend after the alert first went out. It is unclear whether any GroupM clients were affected, and no sources identified the entity that notified the employee. Jones described it as “a monitoring service.”

“By the time most people saw it and dug into it, Twitter had already resolved the issue,” Jones said. “The longer story here is the APB is trying to figure out what are the processes to put in place so we can stay on top of it and help prevent these things from happening in the future. … Our objective is to get brands out of unsafe places.”


This incident comes as Twitter moves to balance free speech issues with the concerns of advertisers and everyday users. The platform recently followed Apple, YouTube and Facebook in censoring far-right conspiracy theorist and distributor of misinformation Alex Jones, but some activists have launched pressure campaigns targeting advertisers and pushing Twitter to ban Jones altogether. Earlier this week, CEO Jack Dorsey told The Washington Post that his company is considering additional steps to help users “make judgments for themselves.”

“All scaled platforms generally think macro but occasionally need to act micro,” said Marc Goldberg, CEO of ad tech company Trust Metrics, adding that the Jones controversy had “created a dialogue for platforms to think beyond just the algorithm. There will always be those who specifically work to reverse-engineer algorithms and find loopholes, so there’s an importance for human review and intervention.”
