Walker wrote that Google and YouTube are taking a four-step approach to removing terrorist content from the video site:
- Increasing use of technology to help identify extremist and terrorism-related videos.
- “Greatly” increasing the number of independent experts in YouTube’s Trusted Flagger program.
- Taking a tougher stance on videos that may not clearly violate YouTube’s policies.
- Expanding YouTube’s role in counter-radicalization efforts.
Walker said of the efforts:
Collectively, these changes will make a difference. And we’ll keep working on the problem until we get the balance right. Extremists and terrorists seek to attack and erode not just our security, but also our values; the very things that make our societies open and free. We must not let them. Together, we can build lasting solutions that address the threats to our security and our freedoms. It is a sweeping and complex challenge. We are committed to playing our part.
The blog post and op-ed go into more detail on each of the four steps Walker announced:
First, we are increasing our use of technology to help identify extremist and terrorism-related videos. This can be challenging: A video of a terrorist attack may be informative news reporting if broadcast by the BBC, or glorification of violence if uploaded in a different context by a different user. We have used video analysis models to find and assess more than 50 percent of the terrorism-related content we have removed over the past six months. We will now devote more engineering resources to apply our most advanced machine learning research to train new “content classifiers” to help us more quickly identify and remove extremist and terrorism-related content.
Second, because technology alone is not a silver bullet, we will greatly increase the number of independent experts in YouTube’s Trusted Flagger program. Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech. While many user flags can be inaccurate, Trusted Flagger reports are accurate over 90 percent of the time and help us scale our efforts and identify emerging areas of concern. We will expand this program by adding 50 expert NGOs (non-governmental organizations) to the 63 organizations that are already part of the program, and we will support them with operational grants. This allows us to benefit from the expertise of specialized organizations working on issues like hate speech, self-harm and terrorism. We will also expand our work with counter-extremist groups to help identify content that may be being used to radicalize and recruit extremists.
Third, we will be taking a tougher stance on videos that do not clearly violate our policies—for example, videos that contain inflammatory religious or supremacist content. In the future, these will appear behind an interstitial warning and they will not be monetized, recommended or eligible for comments or user endorsements. That means these videos will have less engagement and be harder to find. We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints.
Finally, YouTube will expand its role in counter-radicalization efforts. Building on our successful Creators for Change program promoting YouTube voices against hate and radicalization, we are working with Jigsaw to implement the “Redirect Method” more broadly across Europe. This promising approach harnesses the power of targeted online advertising to reach potential Isis recruits and redirects them towards anti-terrorist videos that can change their minds about joining. In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate and watched over half a million minutes of video content that debunks terrorist recruiting messages.