Tech Platforms’ Policies Come Under the Microscope Following New Zealand Terrorist Attack

The shooting at two mosques in the country was livestreamed and shared across multiple platforms

This isn't the first time major social platforms have come under fire for graphic imagery slipping through the cracks. (Photo: Getty Images)

A gunman opened fire in two mosques in New Zealand on Friday, killing 49 people and injuring dozens more. The suspect broadcast portions of the horrific shooting on social media during the attack.

Police say the attack was carried out by a young Australian-born man who expressed hatred toward Muslims and immigrants in a manifesto that spread virally on sites like Facebook, Twitter and YouTube before the shooting. The terrorist attack is drawing scrutiny to how tech companies respond to depraved and graphic content broadcast and shared on their platforms.

A 17-minute video, which depicts people being murdered from a first-person perspective, was livestreamed to Facebook and uploaded to Twitter and YouTube after being passed around on 8chan, a fringe site where far-right and extremist content often proliferates. After the original video was removed by the platforms, new uploads continued to spread on Facebook, Twitter, YouTube and Reddit, where users discussed and linked to both the video and the manifesto.

The Washington Post reported on Friday that videos of the shooting continued to be uploaded to YouTube eight hours after the massacre and that a search for terms like “New Zealand” on the site resulted in “a long list of videos, many of which were uncensored and extended cuts” of the shooting. YouTube did not respond to questions about why it took so long to take down the videos.

In statements on Friday, Twitter and YouTube expressed their sadness at the shooting but were reluctant to explain in detail whether the processes they used to identify and pull down the videos differed from their normal review processes.

“Our hearts go out to the victims of this terrible tragedy,” a spokesperson for YouTube said in a statement. “Shocking, violent and graphic content has no place on our platforms, and we are employing our technology and human resources to quickly review and remove any and all such violative content on YouTube.”

A Twitter spokesperson said the company was “deeply saddened” by the shootings and said that “Twitter has rigorous processes and a dedicated team in place for managing exigent and emergency situations such as this.”

Reddit, where users were discussing and sharing the video, said it was “actively monitoring” the situation and was removing content containing links to the video stream or to the manifesto “in accordance with our site-wide policy,” which prohibits the glorification or incitement of violence. Reddit also removed two violent subreddits where the video was being shared.

Mia Garlick, Facebook’s Australia-New Zealand policy director, said in a statement, “Our hearts go out to the victims, their families and the community affected by this horrendous act. New Zealand Police alerted us to a video on Facebook shortly after the livestream commenced, and we quickly removed both the shooter’s Facebook and Instagram accounts and the video. We’re also removing any praise or support for the crime and the shooter or shooters as soon as we’re aware. We will continue working directly with New Zealand Police as their response and investigation continue.”

This is far from the first time that tech platforms have hosted violent and graphic content, sometimes posted by terrorists. Tech platforms have said they are doing their best to curtail violent content, and some tech giants have faced advertiser boycotts after their ads ran ahead of extremist videos.

Facebook, Twitter, YouTube and Reddit all have community guidelines and policies that prohibit certain graphic and violent content from being uploaded to their platforms. But having those policies in place doesn’t mean people will follow them, something that was evident on Friday as the companies fought to flag and take down content even as accounts uploaded new copies of the video over and over.

In the wake of an event like Friday’s livestreamed video, tech companies use similar strategies to identify and pull down violent extremist content. YouTube says it uses smart detection technology to flag content for further review by its human moderation teams and encourages users to flag content that might violate its guidelines. YouTube also said that the majority of violent extremist content uploaded to its platform is detected by automated filters and removed, on average, before a video hits 10 views. The company has made a number of changes to cut off terrorist content on the site and recently changed its recommendation algorithm after being scrutinized for pushing users toward extremist content.
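The general flow the platforms describe, automated scoring first and human review second, can be illustrated with a deliberately simplified sketch. Nothing below reflects any company’s actual systems; the classifier score, thresholds and queue are hypothetical stand-ins for the kind of triage YouTube, Facebook, Twitter and Reddit say they perform.

```python
# Hypothetical, simplified sketch of an upload-triage flow: an automated model
# scores each upload, high-confidence violations are removed immediately,
# borderline cases go to human reviewers, and user flags can push any item
# into the review queue. This is not any platform's real system.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Upload:
    upload_id: str
    violation_score: float = 0.0   # 0.0-1.0, produced by a (hypothetical) classifier
    user_flags: int = 0

@dataclass
class ModerationQueue:
    auto_remove_threshold: float = 0.95   # hypothetical thresholds
    review_threshold: float = 0.60
    removed: List[str] = field(default_factory=list)
    pending_review: List[str] = field(default_factory=list)

    def triage(self, upload: Upload) -> str:
        """Route an upload based on its automated score and user flags."""
        if upload.violation_score >= self.auto_remove_threshold:
            self.removed.append(upload.upload_id)
            return "removed automatically"
        if upload.violation_score >= self.review_threshold or upload.user_flags > 0:
            self.pending_review.append(upload.upload_id)
            return "queued for human review"
        return "left up"

queue = ModerationQueue()
print(queue.triage(Upload("video-a", violation_score=0.97)))                 # removed automatically
print(queue.triage(Upload("video-b", violation_score=0.70)))                 # queued for human review
print(queue.triage(Upload("video-c", violation_score=0.10, user_flags=3)))   # queued for human review
```

The point of the sketch is the bottleneck it makes visible: anything that falls below the automated thresholds depends on user flags and a finite pool of human reviewers.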

As of Friday morning, YouTube said it had pulled “thousands” of videos related to the terrorist attack.

Twitter and Facebook take a similar approach, using a combination of technological tools and human content moderators to flag and remove offending content as well as relying on users to flag content for review. Both companies suspended the accounts that uploaded the original video and said they are taking a proactive approach to removing more content about the shooting.

Reddit also uses a combination of human reviewers and machine learning to flag offending posts. The company said that links to the video or to the manifesto are being automatically removed. Like the other platforms, Reddit declined to share details about its rapid-response process, which it has in place for situations like Friday’s terrorist attack.

The approaches point to the challenges tech companies face: relying on users to flag content when it slips through automated filters, and the time it takes for human moderators to review and delete offending material. Reddit’s approach of using machine learning to flag links to the offending content also highlights a challenge: if one copy of the video is removed and another is uploaded, how fast can Reddit update its automated systems to flag links to the newly uploaded copy?
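One common way platforms catch re-uploads of a known video is to keep a database of fingerprints of the removed file and compare every new upload against it. The sketch below uses an exact SHA-256 hash purely for illustration; exact hashing only catches byte-identical copies, which is precisely the limitation the question above points to, and real systems are understood to rely on perceptual fingerprints that survive re-encoding, cropping and trimming. None of this code describes Reddit’s actual implementation.

```python
# Illustrative-only example of matching uploads against known removed content.
# An exact cryptographic hash catches byte-identical re-uploads; a slightly
# re-encoded or trimmed copy produces a completely different hash, which is
# why platforms lean on perceptual fingerprinting instead.
import hashlib

known_bad_hashes = set()

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint of the raw file bytes (illustrative only)."""
    return hashlib.sha256(data).hexdigest()

def register_removed_content(data: bytes) -> None:
    """Add a removed file's fingerprint to the blocklist."""
    known_bad_hashes.add(fingerprint(data))

def should_block(data: bytes) -> bool:
    """Block an upload only if it is a byte-for-byte copy of known content."""
    return fingerprint(data) in known_bad_hashes

original = b"...original video bytes..."
register_removed_content(original)

print(should_block(original))            # True: an identical copy is caught
print(should_block(original + b"\x00"))  # False: a trivially altered copy slips through
```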

There are questions, too, about the human cost of reviewing and removing the horrific videos. Twitter, Reddit, YouTube and Facebook did not answer questions about whether they were offering additional mental health resources to the armies of human moderators tasked on Friday with viewing the video and determining whether it needed to be removed.

Josh Sternberg, David Cohen and Shoshana Wodinsky contributed to this report.


Kelsey Sutton (@kelseymsutton, kelsey.sutton@adweek.com) is the streaming editor at Adweek, where she covers the business of streaming television.
{"taxonomy":"","sortby":"","label":"","shouldShow":""}