The Burden of Content, Part 1: Are Social Networks Responsible for Content?

Opinion: This quandary isn’t just about free speech

Who is responsible for controlling the trolls?

How much attention and freedom should be given to the angry rants of internet trolls, to the ideas spread by conspiracy theorists and to the false or misleading news stories created by those who want to advance their own agenda? It all depends on whom you ask.

Some argue that “free speech” means anything goes online, and that the individuals and entities creating this content have just as much right to express their opinions and distribute those ideas as anyone else.

Others argue that covering unsavory topics and stories in some ways legitimizes them, helping to spread harmful or hateful ideas further, even if the intent is the opposite.

Then there are the platforms where some of these angry comments, controversial positions or outright lies appear. It’s one thing for a person or organization to buy a domain name and create content that may be considered hurtful and offensive, but do they have the right to post that content on a platform they don’t own, such as Twitter, YouTube or Facebook?

The rules for posting on social media

Social networks do have some rules and regulations when it comes to what people can and can’t post.

Facebook has a set of community standards, according to which it reserves the right to take down content that might be objectionable, illegal or otherwise problematic.

Twitter outlines the type of content that’s verboten on the platform, but it also points out that use of the platform is “at your own risk. We do not endorse, support, represent or guarantee the completeness, truthfulness, accuracy or reliability of any content or communications posted via the services or endorse any opinions expressed via the services.”

While disclaimers of this nature are designed to cover the platforms legally, there is much more than a content-related lawsuit at stake. Social networks rely on advertising to generate revenue, and online advertising dollars flow to brand-safe destinations where people gather. If users find that misinformation or trolls have ruined the social experience and begin to leave a platform, or if brands no longer believe their ad placements are in a safe place, the monetary implications for the social networks are real.

This quandary isn’t just about free speech on social platforms: It is about the long-term financial viability of these platforms if they can’t create the right environment for both users and advertisers.

What have social networks done to police content?

While sexual content is relatively easy to identify and filter, the policing of news and ideas is a far more complex and politically charged challenge. Social networks face an uphill battle when it comes to monitoring content and weeding out the false, offensive or harmful material while leaving valuable content alone.

That is because someone ultimately has to make those decisions. What is true? What is misleading? What is offensive? Unfortunately, there doesn’t seem to be a universal truth accepted by 100 percent of the global population, even when the evidence is overwhelming.

Don’t get me started on #FlatEarthers—we can’t seem to agree on facts, let alone opinions.

At the same time, we must recognize that people once widely believed the world was flat. Even “science” changes over time, and new or contrarian ideas aren’t always wrong just because they aren’t currently popular or accepted as the truth.

I don’t envy the challenges the social networks are facing. This is a very difficult technological problem to solve, even before you consider the political and ethical implications of using that technology.

When it comes to what social networks do to monitor content and limit the stream of fake news, hateful posts and fake accounts, some would argue that it’s not enough. A recent example can be seen in the case of Alex Jones.