Australia’s New Laws on Livestreaming Violent Content Pack a Punch for Social Platforms

Failure to ‘expeditiously’ remove it could result in heavy fines or jail time

Australia’s eSafety commissioner will have the power to notify social platforms about violent content. Getty Images

The Australian government reacted swiftly and forcefully to the livestreaming and resulting video sharing of the terrorist attacks in Christchurch, New Zealand, passing new laws that turn up the heat on social networks to assume more control over the spread of violent content via their platforms.

Australia’s attorney general, Christian Porter, and its communications and arts minister, Mitch Fifield, issued a statement detailing the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill 2019, which was passed by the country’s House of Representatives Thursday after the Senate did so Wednesday night.

The new law makes failure by social media platforms to “expeditiously” remove “abhorrent” violent material a criminal offense punishable by up to three years in prison for the executives deemed responsible or fines of up to 10 percent of the company’s annual revenue in the country. It also states that failure to notify the Australian Federal Police that violent conduct is being livestreamed via their platforms is punishable by fines of up to 168,000 Australian dollars (about $119,500) for individuals or 840,000 Australian dollars ($597,600) for corporations.

Facebook would not comment directly, instead pointing to a statement from Australian nonprofit Digital Industry Group Inc., which counts Facebook, Twitter and Google among its members: “With the vast volumes of content uploaded to the internet every second, this is a highly complex problem that requires discussion with the technology industry, legal experts, the media and civil society to get the solution right. That didn’t happen this week.”

Australian nonprofit Digital Rights Watch didn’t appreciate the government’s quick reaction, either. The organization called it “hastily drafted” and lamented the lack of public debate and “rushed and secretive approach.”

“The reality here is that there is no easy way to stop people from uploading or sharing links to videos of harmful content,” Digital Rights Watch chair Tim Singleton Norton said in a statement. “No magic algorithm exists that can distinguish a violent massacre from videos of police brutality. The draft legislation creates a great deal of uncertainty that can only be dealt with by introducing measures that may harm important documentation of hateful conduct.”

Porter and Fifield said Australia’s eSafety commissioner will have the power to notify social platforms about violent content, and those platforms will be on the clock as soon as they receive those notices.

“It was clear from our discussions last week with social media companies, particularly Facebook, that there was no recognition of the need for them to act urgently to protect their own users from the horror of the livestreaming of the Christchurch massacre and other violent crimes, and so the Morrison government [Scott Morrison is the current prime minister of Australia] has taken action with this legislation,” Porter said in a statement.

Google pushed for public consultation on the issue, saying it has thousands of people worldwide reviewing content and abuse on its platforms and that, on Christchurch specifically, it removed tens of thousands of videos and terminated hundreds of accounts for promoting or glorifying the shooter.

A Google spokesperson said, “We have zero tolerance for terrorist content on our platforms. … We are committed to leading the way in developing new technologies and standards for identifying and removing terrorist content. We are working with government agencies, law enforcement and across industry, including as a founding member of the Global Internet Forum to Counter Terrorism, to keep this type of content off our platforms. We will continue to engage on this crucial issue.”

Twitter did not respond to a request for comment.

Facebook CEO Mark Zuckerberg penned an op-ed in The Washington Post a week ago in which he sought “a more active role for governments and regulators” regarding standards for harmful content, but the new laws in Australia move in the opposite direction, placing the burden squarely on the shoulders of social platforms.

In the U.S., social platforms and other interactive computer services are protected under Section 230 of the Communications Decency Act, which states that they cannot be held legally responsible for content posted by their users.

Section 230 came under attack earlier this week for reasons unrelated to Christchurch.

Sen. Josh Hawley, R-Mo., sent a letter to Twitter CEO Jack Dorsey pushing for him to conduct a review of his platform’s policies on suspending accounts after the account for Unplanned, an anti-abortion film, was suspended during its opening weekend.

“The decision to suspend the account for a pro-life movie on opening weekend is too much to be a coincidence,” Hawley wrote. “I am rapidly losing confidence that Twitter is committed to the free speech principles that justify immunity under section 230.”

David Cohen is editor of Adweek’s Social Pro Daily.