Tech Platforms’ Policies Come Under the Microscope Following New Zealand Terrorist Attack

The shooting at two mosques in the country was livestreamed and shared across multiple platforms

This isn't the first time major social platforms have come under fire for graphic imagery slipping through the cracks.

A gunman opened fire at two mosques in New Zealand on Friday, killing 49 people and injuring dozens more. The suspect broadcast portions of the horrific shooting on social media during the attack.

Police say the terrorist attack was carried out by a young Australian-born man who expressed hatred toward Muslims and immigrants in a manifesto that spread virally on sites like Facebook, Twitter and YouTube before the shooting. The attack is drawing scrutiny to the ways tech companies respond to depraved and graphic content broadcast and shared on their platforms.

A 17-minute video, which depicts people being murdered from a first-person perspective, was livestreamed to Facebook and uploaded to Twitter and YouTube after being passed around on 8chan, a fringe site where far-right and extremist content often proliferates. After the platforms removed the original video, new uploads continued to spread on Facebook, Twitter, YouTube and Reddit, where users discussed and linked to both the video and the manifesto.

The Washington Post reported on Friday that videos of the shooting continued to be uploaded to YouTube eight hours after the massacre and that a search for terms like “New Zealand” on the site resulted in “a long list of videos, many of which were uncensored and extended cuts” of the shooting. YouTube did not respond to questions about why it took so long to take down the videos.

In statements on Friday, Twitter and YouTube expressed their sadness at the shooting but declined to explain in detail whether their process for identifying and taking down the videos differed from their normal review process.

“Our hearts go out to the victims of this terrible tragedy,” a spokesperson for YouTube said in a statement. “Shocking, violent and graphic content has no place on our platforms, and we are employing our technology and human resources to quickly review and remove any and all such violative content on YouTube.”

A Twitter spokesperson said the company was “deeply saddened” by the shootings and said that “Twitter has rigorous processes and a dedicated team in place for managing exigent and emergency situations such as this.”

Reddit, where users were discussing and sharing the video, said it was “actively monitoring” the situation and was removing content containing links to the video stream or to the manifesto “in accordance with our site-wide policy,” which prohibits the glorification or incitement of violence. Reddit also removed two violent subreddits where the video was being shared.

Mia Garlick, Facebook’s Australia-New Zealand policy director, said in a statement, “Our hearts go out to the victims, their families and the community affected by this horrendous act. New Zealand Police alerted us to a video on Facebook shortly after the livestream commenced, and we quickly removed both the shooter’s Facebook and Instagram accounts and the video. We’re also removing any praise or support for the crime and the shooter or shooters as soon as we’re aware. We will continue working directly with New Zealand Police as their response and investigation continue.”

This is far from the first time that tech platforms have hosted violent and graphic content, sometimes from terrorists. Tech platforms have said they are doing their best to curtail violent content, and some tech giants have faced advertiser boycotts after ads were served ahead of extremist videos.

Facebook, Twitter, YouTube and Reddit all have community guidelines and policies that prohibit certain graphic and violent content from being uploaded to their platforms. But having those policies in place doesn’t mean people will follow them, something that was evident on Friday as the companies scrambled to flag and take down content while accounts uploaded new copies of the video over and over.
