Moderating User Generated Content on Social Media is a Catch-22

Social networks seem to be taking an increasingly proactive role in their efforts to manage undesired content, but at what cost?

Moderating user behavior on social networks is a problem with no simple resolution. Users try pressure campaigns, executives lay down the law in user agreements, and politicians try to place blame and responsibility on the networks themselves. Despite the host of challenges, social networks are becoming much more proactive in their attempts at moderation, but at what cost?

Last week, reports emerged that government officials are planning to meet with the leaders of top social networks and tech companies to discuss moderation on their networks and services. According to CNN Money, the focus of the meetings is the companies' response to terrorism recruitment efforts conducted through their services.

Some networks are already actively moderating user-generated content. For instance, Twitter has banned thousands of accounts associated with ISIS. But it is unclear whether they are doing enough to remove these posts and enforce their terms of service.

Tension between social networks and the government still lingers in the wake of PRISM and other surveillance programs that operated without the networks' knowledge. The networks responded by implementing stronger encryption and locking the government out, leaving law enforcement complaining that surveillance has become more difficult.

According to CNN Money’s anonymous source, encryption and the national security threat posed by hosting terrorist-related content will form the core of the meetings.

One of the main challenges in trying to resolve these issues is the open nature of communication on social networks. Twitter, Facebook, and other networks generate revenue directly from hosting user content, and few simple technological solutions seem available to combat terrorist-associated content.

Screening posts, deploying automated content crawlers, or shadowbanning users before a final decision is made are all possibilities that might conflict directly with free speech. Simply blocking ISIS hashtags could result in genuinely newsworthy content being removed from sites. And inserting back doors into security protocols could undermine the very encryption these companies have been working toward in recent years.

Strong security and free expression will always be at odds, whether on the streets or on social networks. We’ve seen how controversial moderation attempts can be, and they’ll likely be even more controversial if and when the government gets involved.