When it comes to the “filter bubble” that’s at the top of everyone’s minds with the current news cycle, the question that often gets asked is, “How did this happen?”
We use social media so frequently and so casually in our everyday lives that it's easy to forget these platforms are tools and businesses in their own right. And businesses want to give their consumers the least stressful, most enjoyable experience possible. In the case of Facebook and Twitter, that means the experience they think their consumers will enjoy most.
To keep consumers engaged, recommendation algorithms serve more and more of whatever those consumers have engaged with, on the assumption that engagement signals enjoyment or agreement. They don't account for the fact that people are complex and can, at times, want content from different sides.
When it came to the 2016 election season, this meant giving consumers mostly content from viewpoints that they agreed with and filtering out opposing or critical content because the algorithm assumed that it wasn’t what people wanted to see.
Hillary Clinton supporters believed Clinton was winning the election, with no competition, because the news they saw negated or ignored any valid criticism. Donald Trump supporters believed their candidate was going to take the presidency because all the news and articles in their feeds told them so, with no regard for critique or flaws.
Thus, the filter bubble was amplified to new heights, and American voters on both sides were genuinely shocked by the election results because they had only heard good things about their candidate and bad things about the opposition.
This filter bubble isn’t new: It started well before the 2016 election cycle. Stories about Barack Obama being a “bad president” could then lead consumers down a rabbit hole, resulting in them receiving content about how the former president “wasn’t born in the U.S.,” for example.
Some consumers are OK with their digital lives being in this bubble, some want to actively see both sides and some don’t realize the bubble is happening. There’s a power in consciously choosing the bubble versus having it created for you.
Maybe, given the chance to choose differently, a consumer would. Without that choice, they can't. The decision should be fully theirs: they should be allowed to choose what's best for themselves. The algorithm shouldn't dictate it automatically, but that doesn't mean it can't assist.
In order to facilitate these conscious choices, platforms should aim not only to serve content that matches a user's past engagement, but also to surface popular, general-interest and, yes, even opposing information. This allows for as full a worldview on a digital platform as the consumer wants to create for themselves.
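To make the idea concrete, here is a toy sketch of what such a user-controlled blend could look like. Everything in it is hypothetical: the "viewpoint" labels, the article list and the `opposing_share` dial are illustrative assumptions, not a description of any real platform's ranking system.

```python
# Toy sketch of a blended feed: mostly familiar content, plus a
# user-controlled share of content from viewpoints the user has
# NOT engaged with. All data and labels here are hypothetical.

ARTICLES = [
    {"title": "A1", "viewpoint": "left"},
    {"title": "A2", "viewpoint": "left"},
    {"title": "A3", "viewpoint": "right"},
    {"title": "A4", "viewpoint": "right"},
    {"title": "A5", "viewpoint": "neutral"},
]

def build_feed(engaged_viewpoint, articles, opposing_share=0.4):
    """Mix familiar content with other viewpoints.

    opposing_share is the fraction of the feed, chosen by the user,
    drawn from viewpoints outside their engagement history.
    """
    familiar = [a for a in articles if a["viewpoint"] == engaged_viewpoint]
    other = [a for a in articles if a["viewpoint"] != engaged_viewpoint]
    n_other = round(len(articles) * opposing_share)
    return familiar + other[:n_other]

feed = build_feed("left", ARTICLES, opposing_share=0.4)
```

The key design point is that the slider belongs to the user, not the platform: setting `opposing_share` to zero recreates the bubble consciously, while raising it widens the feed.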
This is one of the many reasons why control of the internet should rest with real humans rather than solely with algorithms or artificial intelligence. Used as tools alongside human-curated content, programming techniques like serving more of what a user engages with can be successful. But there should be an individual-choice component working in conjunction with the algorithm, to ensure that the user experience isn't tainted and that, ultimately, users control what they see.
This pairing of the human element with technology can help to facilitate the popping of the filter bubble and put the control of the internet back in the hands of the people using it, where it belongs.
Arvind Raichur is co-founder and CEO of MrOwl, an Internet startup company that allows people to discover, create, collaborate, and share their interests to take control of their Internet.