Hedging Advertiser Risk on User-Generated Content

One of the most significant challenges facing advertisers on social networks is the lack of control over what content their advertisements are displayed next to. Yesterday, while attending the Digital Media Conference in Tysons Corner, VA, I listened to Lynda Clarizio, President of Platform-A, speak about systems the company is building to protect advertisers.

By leveraging AOL’s parental-controls technology, Platform-A will have a tool that automatically detects offensive content on a page and decides whether to display an advertisement there. I spoke with Lynda and the communications director at AOL, both of whom said this technology was mentioned in the press release about their Bebo acquisition.

Apparently I skimmed past that part of the release! This type of technology could prove extremely valuable; the only question is how effective it is. Ultimately, each advertiser has its own requirements for the content its advertisements must not appear next to, and the complexity of meeting those requirements has been a hurdle most social networks have yet to clear.
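To make the idea concrete, here is a minimal sketch of what per-advertiser filtering might look like. Nothing public describes how Platform-A's tool actually works, so the class name, term lists, and matching approach below are all invented for illustration:

```python
# Hypothetical sketch: decide whether to serve an ad next to a page of
# user-generated content, using a blocklist supplied by each advertiser.
# A real brand-safety system would be far more sophisticated than
# keyword matching; this only illustrates the per-advertiser aspect.
import re

class AdSafetyFilter:
    def __init__(self, blocked_terms):
        # Compile one pattern from this advertiser's blocked terms.
        self._pattern = re.compile(
            r"\b(" + "|".join(map(re.escape, blocked_terms)) + r")\b",
            re.IGNORECASE,
        )

    def is_safe(self, page_text):
        """Return True if none of the blocked terms appear on the page."""
        return self._pattern.search(page_text) is None

# Each advertiser brings its own list of unacceptable content.
family_brand = AdSafetyFilter(["gambling", "violence", "profanity"])

family_brand.is_safe("Photos from our weekend hiking trip")  # ad can run
family_brand.is_safe("Tips for online gambling sites")       # ad withheld
```

The point of the sketch is that "offensive" is not one global setting: two advertisers with different blocklists can reach different decisions about the same page, which is exactly why a single one-size-fits-all filter is hard for social networks to build.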

Last year Vodafone, Orange, and Virgin each pulled advertisements from Facebook. Google also stated that it cut ads from Orkut due to “complaints in Brazil about offensive content.” This is an industry-wide problem, and this filtering technology is the first I’ve heard of that moves in the right direction.

Whether or not the filtering technology works, social networks need to find ways to reduce the risks advertisers face when their advertisements are placed next to user-generated content. Have you heard of any other technologies that help protect advertisers?
