5 Ways Tech Platforms Can Prevent Hate Speech

There’s a lot of good to be found in technology

Search engines and social media platforms have amassed the power to influence elections, divide countries and even cost people their lives. For example, Dylann Roof’s Google searches surfaced false information about black-on-white crime statistics, helping push him toward his attack on a historically black church.

On the other hand, technology can also serve a valuable purpose. Social media can help liberate nations, reunite families and rally people around causes that save lives. With all this power comes a great responsibility for platforms to use their influence to promote diversity and prevent the spread of hate.

Tech giants have already started to prioritize social good by partnering to fight opioid addiction. Google banned payday loan ads, cracked down on the search engine-based locksmith scam and applied new restrictions to Google Ads for addiction treatment centers. But there is still much more to be done. By tapping into the skill sets marketers already use to improve visibility on search engines and social media, these platforms can help suppress hate online.

Human moderators

Facebook’s removal of human editors hurt the tech giant’s ability to catch content that spreads hate and disinformation. Adding a layer of human moderators trained to spot fake stories and hate speech across time zones and languages can help. It can heighten awareness of controversial content and create room for more beneficial stories.

Vet common SEO practices

Being in control of your content is crucial. According to The Guardian, hate sites that link to one another boost each other’s rankings for offensive concepts and phrases, like “Are women evil?” Google’s algorithm tracks how websites link to each other and rewards sites with high-quality inbound links with more domain authority, allowing them to show up higher in search rankings.

Google’s algorithm also penalizes websites that engage in link schemes or link to “bad neighborhoods.” If you link to or get links from suspicious websites, your site is guilty by association and likely to lose visibility on search engines. Search engines should apply the same practice to hate speech, weighing the quality of links and curbing the popularity of sites that support oppression.
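To make the “guilt by association” idea concrete, here is a minimal Python sketch of how an association score could spread through a link graph. The sites, links, seed list and damping factor are all invented for illustration; this is not Google’s actual ranking algorithm, just one way the concept could be modeled.

```python
# Conceptual sketch: propagate a "hate association" score through a link graph.
# All sites, links and seed scores below are hypothetical examples.

# Directed link graph: site -> sites it links to
links = {
    "news-blog.example": ["wiki.example"],
    "hate-site-a.example": ["hate-site-b.example", "news-blog.example"],
    "hate-site-b.example": ["hate-site-a.example"],
    "wiki.example": [],
}

# Seed scores from (hypothetical) human moderator review: 1.0 = confirmed hate site
seed = {"hate-site-a.example": 1.0}

DAMPING = 0.5     # how strongly association with flagged sites spreads
ITERATIONS = 20

scores = {site: seed.get(site, 0.0) for site in links}

for _ in range(ITERATIONS):
    updated = {}
    for site, outbound in links.items():
        # A site inherits part of the score of the sites it chooses to link to.
        neighbour_avg = (
            sum(scores[target] for target in outbound) / len(outbound)
            if outbound else 0.0
        )
        updated[site] = max(seed.get(site, 0.0), DAMPING * neighbour_avg)
    scores = updated

for site, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{site:>22}  association score: {score:.2f}")
```

In a real system a score like this would be only one of many ranking signals, and any demotion would still need the human review described above.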

Disavow tool

Google could also enhance a tool already in its possession. The existing Disavow tool lets websites ask Google to ignore low-quality inbound links that are out of their control and harming their search rankings. In a similar fashion, a new tool could let websites block links from hateful sites that hurt them by association. Users could then submit hate sites for review by human moderators and potential blacklisting. This “Disavow Hate” tool could help ensure content promoted online is not aiding the spread of harmful messages.
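As a rough illustration, a “Disavow Hate” submission could borrow the plain-text format of the existing disavow file: one URL or `domain:` entry per line, with `#` marking comments. The Python sketch below is hypothetical and only shows how such a list might be parsed and used to ignore links from flagged sites; the sites and helper functions are made up for the example.

```python
from urllib.parse import urlparse

# Hypothetical "Disavow Hate" list, borrowing the plain-text format of
# Google's existing disavow file: one URL or domain: entry per line.
DISAVOW_HATE_FILE = """
# Sites reviewed and flagged by (hypothetical) human moderators
domain:hate-site-a.example
https://hate-site-b.example/landing-page
"""

def parse_disavow_list(text):
    """Return the (domains, urls) listed in a disavow-style file."""
    domains, urls = set(), set()
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        if line.startswith("domain:"):
            domains.add(line[len("domain:"):].lower())
        else:
            urls.add(line)
    return domains, urls

def keep_link(link, domains, urls):
    """True if an inbound link should still count toward a site's ranking."""
    host = urlparse(link).netloc.lower()
    return link not in urls and host not in domains

domains, urls = parse_disavow_list(DISAVOW_HATE_FILE)
inbound_links = [
    "https://hate-site-a.example/blogroll",
    "https://hate-site-b.example/landing-page",
    "https://local-paper.example/story",
]
counted = [link for link in inbound_links if keep_link(link, domains, urls)]
print(counted)  # only the local-paper link survives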

Chatbot librarians

Search engines should add a chatbot librarian that, like a real librarian sorting and categorizing information, presents opposing views and cites facts from trusted sources. YouTube is attempting to fight dangerous conspiracy theories by using Wikipedia as a trusted source, yet even Wikipedia is hesitant about the move. Instead, trusted sources should go through a vetting process similar to the one Google is conducting for riskier content such as addiction treatment sites.

Chatbot librarians could sharpen their skills over time, offering context and additional facts that are valuable for the user to know. The librarian could function like Google’s “People also search for” feature, but operate as a recommendation engine, suggesting additional information about a topic based on what other people previously found useful. Users could also turn the chatbot on and off as needed and flag content that slips through.
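A toy sketch of that recommendation idea: suggest the context pages other users most often marked useful alongside the same query. The query log, page names and “useful” signal here are all invented for illustration, not any platform’s real data or API.

```python
from collections import Counter

# Hypothetical log: which context pages users marked "useful" for which query.
useful_clicks = [
    ("are women evil", "encyclopedia.example/gender-equality"),
    ("are women evil", "factcheck.example/common-myths"),
    ("are women evil", "encyclopedia.example/gender-equality"),
    ("crime statistics", "gov-stats.example/crime-report"),
]

def recommend_context(query, clicks, top_n=2):
    """Suggest the context pages most often found useful for this query."""
    counts = Counter(page for q, page in clicks if q == query)
    return [page for page, _ in counts.most_common(top_n)]

print(recommend_context("are women evil", useful_clicks))
# ['encyclopedia.example/gender-equality', 'factcheck.example/common-myths']
```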

Warning labels

Search engines should add warning labels for hateful content, letting users know what to expect before they click through to a website. Google already does this kind of labeling for accelerated mobile pages (AMP) stories, YouTube issues warning labels for sensitive content, browsers like Chrome alert you when you navigate to an unsafe website and Gmail flags emails it deems suspicious or spam. Labels like these inform users of potential danger and give them the choice to proceed or steer clear.
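A minimal sketch of how such a label could be attached to search results, assuming a hypothetical, human-reviewed list of flagged domains; real browser and search-engine warnings involve far more signals than this.

```python
# Hypothetical list of domains flagged for hateful content after human review.
FLAGGED_DOMAINS = {"hate-site-a.example", "hate-site-b.example"}

def label_results(results):
    """Attach a warning label to results hosted on flagged domains."""
    labelled = []
    for title, domain in results:
        warning = (
            "Warning: this site may contain hateful content"
            if domain in FLAGGED_DOMAINS else ""
        )
        labelled.append((title, domain, warning))
    return labelled

results = [
    ("Crime statistics explained", "gov-stats.example"),
    ("The truth they hide", "hate-site-a.example"),
]
for title, domain, warning in label_results(results):
    print(f"{title} ({domain}) {warning}")
```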

Existing technologies and search engines hold the power to help combat issues like racism and gender inequality. According to The New York Times, a British parliamentary committee recently released a report echoing the call for tech platforms to do more to regulate harmful content. In turn, these platforms can facilitate more positive conversations at a time when they’re needed most and stop the spread of hate online, leaving more room for love, hope and inspiration.