YouTube just can’t seem to catch a break lately. In the past year, the behemoth video platform has faced an unfortunate string of controversies.
First, the “Adpocalypse” saw major brands flee YouTube en masse when they discovered their ads on videos promoting hate speech, terrorism and violence.
Then, these and other brands found their ads being placed alongside inappropriate content aimed at children.
And let’s not forget the latest debacle featuring one of YouTube’s biggest stars, Logan Paul.
This steady cadence of controversies has most of us in the video world asking not if YouTube will face another blowup, but when. With scandal after scandal, YouTube is facing a crisis of trust, forcing us to wonder whether the video giant has grown too large to manage the expectations of brands on its platform.
To be fair, YouTube has made strides to try to address the concerns of its partners. Last December, the platform announced that it would add 10,000 employees to its workforce to manually review content in an effort to ensure brand safety.
Then, this past month, YouTube unveiled changes to its YouTube Partner Program, raising the bar for monetization from 10,000 lifetime views to 4,000 hours of watch time and 1,000 subscribers in the past year—the theory being that sustained watch time demands fewer “watch dogs.”
But whether these steps genuinely promote brand safety remains to be seen, and there are questions about how the new measures affect the YouTube community at large.
For instance, YouTube’s move to manually review videos fundamentally changes its relationship with the content on its platform. If people are reviewing videos, then YouTube has knowledge of the content being uploaded, which could jeopardize the safe-harbor protections of the Digital Millennium Copyright Act (protections YouTube invoked in its lawsuit with Viacom) and make it harder for the platform to distance itself from copyright infringement.
Even YouTube’s changes to its Partner Program signal a shift in the platform’s relationship with its creators. While some see the focus on “watch time” as YouTube instituting quality control over content, many smaller creators see the move as YouTube barring them from monetization.
These smaller creators are not the Logan Pauls and PewDiePies of the world—they’re not attention-hungry vloggers intent on capturing the next outrageous thing to get views. They are everyday videographers dedicated to capturing organic content, whether it’s a pregnant couple dancing or a dog whining to go for a swim.
This content is often harmless for brands and carries the potential for viral views. In fact, effective user-generated content can drive 6.8 times higher engagement than brand-generated content, according to Mary Meeker’s 2017 Internet Trends Report. YouTube certainly should make efforts to protect advertisers, but penalizing the small creator is not the solution.
So, what is the solution? And what can YouTube do to build a brand-safe environment and regain trust among its partners?
YouTube’s brand-safety issues stem, more than anything else, from how it operates: safety protections are not in place at the point of upload. Take the Logan Paul incident, for instance. His insensitive video showing a recent suicide victim racked up more than 6 million views before it was flagged and taken down. YouTube needs to identify problematic content before an incident occurs, not after.
Luckily, technology is being developed to allow it to do so. Artificial intelligence and machine learning enable video platforms to screen videos at the point of ingestion and determine whether there are issues with copyright and content.
Our company, Rumble, for instance, instituted a three-step process to ensure the quality and brand safety of videos right at the point of upload. The first step is AI video screening, followed by AI user analysis and then human quality control, which, in turn, teaches our technology with each iteration.
This three-step process, rooted in AI and machine learning, enables us to deliver a brand-safe environment at scale for our advertising partners. It also doesn’t discriminate against smaller creators, which is important, because our data has shown that the most popular and brand-safe videos often come from individual creators.
YouTube, however—with 300 hours of video uploaded to its platform every minute—may just be too large at this point to control the influx of content and to guarantee a brand-safe environment.
One thing is certain, though: Without a proactive solution at the point of upload, brand safety can never be truly assured. Advertisers would do well to ask their video partners how they clear content: retroactively or proactively? Because in the end, if 12 million eyeballs see a horrific video before it gets taken down, it’s already too late.
Chris Pavlovski is CEO of video-management platform Rumble.