By Howard Pinsky, Director, Creator Marketing at Fullscreen
Not suitable for all advertisers – 5 words creators dread seeing, and unfortunately, 5 words we’re now seeing more than ever as a result of YouTube’s tighter stance on content that may not be “brand safe.” The #adpocalypse has taken the entire industry on a wild ride over the last few months, and though YouTube is keeping pretty quiet on how its new algorithm functions, we’ve taken the opportunity to do a bit of digging.
But first, the backstory
For those unfamiliar, the #adpocalypse started around March of this year after some of YouTube’s largest advertisers noticed their ads were being run on questionable videos, including those promoting terrorist activities. This sent a ripple throughout the industry, causing hundreds of brands to pull their spots indefinitely, and sent monetized playbacks plummeting.
Almost overnight, we saw a decrease in monetized playbacks, not only on individual channels, but across our entire network. This wasn’t an isolated event. This was platform-wide.
Over the last 6 months, YouTube has been tweaking the tools it uses internally to determine how brand safe any given video is, but the exact workings of its algorithm are still a mystery.
Cracking the algorithm
Since day one, many have speculated how and why a video could become demonetized. Internally, we ran a few small case studies and conjured up some interesting theories, but we knew we needed quite a bit more data in order to be confident about the advice we provided to clients. Luckily, we had the ideal database sitting right under our noses.
The Fullscreen Creator network is home to thousands of unique creators producing content that spans hundreds of verticals – some of it brand-safe, and some not so much. That variety gives our data scientists tens of thousands of videos to analyze in order to better understand YouTube’s mysterious algorithm. Here’s what we found.
The results aren’t all shocking
Let’s start with what we all suspected. Swearing, crude language, violence, and sexual terms will likely land you in monetization prison.
One of the simplest ways to detect potentially unsafe content is to scan a video’s metadata – title, tags, and description. If any of those contain blacklisted keywords (and we’re certain there are a lot of them), there’s a good chance you’re going to get dinged.
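To illustrate, here’s a minimal sketch of that kind of metadata scan in Python. The keyword list and the function itself are purely hypothetical – YouTube’s actual blacklist has never been published – but the mechanics would look something like this:

```python
import re

# Hypothetical list of flagged terms -- YouTube's real blacklist is not public.
FLAGGED_KEYWORDS = {"violence", "terror", "explicit"}

def scan_metadata(title, description, tags):
    """Return any flagged keywords found in a video's title, description, or tags."""
    text = " ".join([title, description, *tags]).lower()
    words = set(re.findall(r"[a-z']+", text))
    return words & FLAGGED_KEYWORDS

# Example: one flagged word hiding in the description is enough to get dinged.
hits = scan_metadata(
    title="Extreme stunts gone wrong",
    description="Pure terror from start to finish!",
    tags=["stunts", "fails"],
)
```

A real system would of course use a far larger list, plus stemming and fuzzy matching to catch creative spellings.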
Google is listening
YouTube’s mysterious system doesn’t stop at what can be read; it also dives deep into your captions, which are auto-generated after you upload. This is where things start to get interesting.
In the videos we analyzed, especially those with no incriminating evidence in their metadata (title, description, tags), we found that captions can play a big part in determining whether a video is brand safe or not. Even more interesting, the number of times a keyword is mentioned appears to contribute.
In the graph above, we looked at the recovery states across 3 buckets of videos where unsafe keywords were never, occasionally, or frequently mentioned. The dip across all 3 buckets represents the #adpocalypse month (March 2017) where monetized playbacks were most affected.
The recovery is where things get interesting. Videos that don’t contain any unsafe keywords tend to recover faster, while those that do, especially frequently, take the hardest hit.
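As a rough sketch of how videos might be sorted into those mention-frequency buckets, consider the snippet below. Both the keywords and the cut-off between “occasionally” and “frequently” are assumptions for illustration – our data only tells us that frequency appears to matter, not where YouTube draws the lines:

```python
import re

# Illustrative subset of unsafe keywords -- not YouTube's actual list.
FLAGGED_KEYWORDS = {"kill", "fight", "shoot"}

def frequency_bucket(caption_text, occasional_max=2):
    """Bucket a caption transcript by how often flagged keywords appear.

    The occasional_max threshold is an assumption; the real cut-offs,
    if they exist, are unknown.
    """
    words = re.findall(r"[a-z']+", caption_text.lower())
    hits = sum(1 for w in words if w in FLAGGED_KEYWORDS)
    if hits == 0:
        return "never"
    if hits <= occasional_max:
        return "occasionally"
    return "frequently"
```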
The odds of recovery
Three buckets were introduced to help determine the odds that a channel would recover – “recovered,” “not quite,” and “probably never.” Our data scientists discovered that a select group of unsafe keywords has a significant impact on a channel’s monetized playbacks, which directly impacts earnings.
The graph above presents the odds that a channel falls into the “not quite” or “probably never” bucket compared to the “recovered” benchmark. When an f-bomb (or mf-bomb) was mentioned, we saw a 2.2x higher chance that a channel would “probably never” recover.
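For readers unfamiliar with odds ratios, the 2.2x figure compares the odds of a bad outcome among channels that mention a keyword against the odds among those that don’t. A quick sketch of the calculation – the counts here are made up purely for illustration, since the underlying sample sizes aren’t published in this post:

```python
def odds_ratio(exposed_events, exposed_total, control_events, control_total):
    """Odds ratio of landing in a bad bucket given keyword exposure.

    odds = events / non-events within each group; the ratio compares
    the exposed group's odds to the control group's.
    """
    odds_exposed = exposed_events / (exposed_total - exposed_events)
    odds_control = control_events / (control_total - control_events)
    return odds_exposed / odds_control

# Made-up example: 2 of 3 exposed channels fail vs. 1 of 3 controls.
ratio = odds_ratio(2, 3, 1, 3)
```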
Any mention of shoot, kill, or fight carries roughly equal odds of landing in either the “not quite” or “probably never” bucket. And it isn’t limited to “real-world” videos – we found gameplay content getting hit when keywords like these were present.
“Sex” and similar keywords are perhaps the most interesting in this study. While these keywords make it quite difficult for a channel to recover, creators who do make changes will find the climb back less painful than those who drop frequent f-bombs.
So what does this actually mean? For now, channels that frequently feature swearing, drugs, violence, or sexual subjects will have a difficult time recovering their earnings unless YouTube alters its algorithm.
Google is likely watching
This is where the tinfoil hats come out. While Google hasn’t confirmed it, there’s strong evidence that its Vision API is being used to detect objects and text within videos in order to determine a video’s revenue status. In scans we ran on videos with no risky keywords in either the metadata or the captions, we found that videos containing graphic imagery such as blood or explosions were still being demonetized.
Seeing that the Vision API is fairly new (and not perfectly accurate) technology, it’s likely only being used to catch extreme visuals at this time, though its use could expand as the technology matures.
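If Google is indeed leaning on Vision’s SafeSearch signal, the downstream decision might look something like the sketch below. The category names mirror the fields on a real SafeSearch annotation (adult, violence, racy, and so on) and the likelihood scale matches the API’s enum, but the threshold – and whether YouTube uses this signal at all – is pure speculation on our part:

```python
# Likelihood scale used by Google Vision's SafeSearch annotations,
# from least to most likely.
LIKELIHOOD = ["UNKNOWN", "VERY_UNLIKELY", "UNLIKELY",
              "POSSIBLE", "LIKELY", "VERY_LIKELY"]

def visually_risky(annotations, threshold="LIKELY"):
    """Return True if any SafeSearch category meets the threshold.

    `annotations` maps category name -> likelihood string, mirroring
    fields like adult/violence/racy on a SafeSearch response. The
    threshold choice here is an assumption, not a known cut-off.
    """
    cutoff = LIKELIHOOD.index(threshold)
    return any(LIKELIHOOD.index(v) >= cutoff for v in annotations.values())
```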
The system needs [a lot of] work…
When the #adpocalypse first kicked in, there’s a good chance that YouTube turned up the brand safety dial as far as it would go in order to appease its advertisers, which demonetized many clean videos in the process. Even to this day, we’re seeing some safe content become demonetized simply because a keyword with multiple meanings exists. Some Minecraft videos, for example, are being affected due to the word “Creeper.”
Responding to backlash, YouTube claims that its system is continuously being trained, not only manually, but through machine learning, as well. If you find yourself with a demonetized video, you’re currently able to request a manual review once it passes 1,000 views.
The hard truth
Under YouTube’s new guidelines, if you want to earn ad revenue from your videos going forward, they must be brand safe. This means no swearing, graphic imagery, overly sexual references, or anything else that may deter advertisers. Yes, it’s a tough message to hear, especially if you built your channel around edgy content, but this seems to be the direction YouTube is taking its platform.
It’s also worth noting that in many cases, ads aren’t completely restricted from demonetized videos. Instead, those videos are placed into a “risky” category that advertisers must manually opt in to.
As early as just a few years ago, YouTube was more or less the only destination for creators who were looking to bank off their videos. That’s very quickly changing. While some platforms, like Facebook, are still working out their monetization programs, many are available right now to help decrease your dependence on YouTube. Twitch, Patreon, Famebit, and Influencer Plus are up at the top of the list for eligible creators who are looking to expand their businesses.
It’s not too late
Over the last few months, we’ve begun working with many of our creators who have been hit pretty hard by the recent changes. When it comes to past videos, little can be done if the content is in fact unsafe for brands, but we’ve started to see channels bounce back by following very strict strategies that we’ve put into place for new videos.
*Disclaimer: Until YouTube outlines exactly how their algorithm functions, much of the discussed information is purely speculation – no matter how solid our data may be.