"On TV And Video" is a column exploring opportunities and challenges in advanced TV and video.
Today’s column is written by Dominique Netto, head of client services at [m]PLATFORM - GroupM.
After the brand-safety sagas of 2017, Alphabet appears to be using some of that $27.7 billion Q3 revenue to further develop brand safety across YouTube.
There is no hiding behind the user-generated content defense; that's essentially like building a school for children without hiring any teachers to prevent wrongdoing.
I’m glad to see it making some serious updates, but is this a ploy for more advertising dollars or a genuine push to make the platform a safer hub?
The most effective approach to blocking content at this stage is a combination of machine learning and manual vetting. Despite YouTube’s announcement in August that AI can be better than humans at detecting extremist content, it is now rolling out 100% human verification of videos in the Google Preferred ad product, in a push to get more advertisers to buy premium inventory.
Google Preferred is one of the biggest revenue drivers for YouTube, and when 250 advertisers left during last year’s boycott, it needed to save its reputation. But Google Preferred covers only the top 5% of YouTube content, making it a very small safe pool.
Reports indicate the company will hire 10,000 people by the end of 2018 to identify and pre-empt violative content. Considering it will take 10,000 people to vet Google Preferred, which only represents 5% of YouTube content, it would probably need closer to 200,000 people to manually vet videos across the entire platform.
I would have liked to see YouTube also prioritize manual vetting across more sensitive categories, especially child safety. Prioritizing brand safety on its premium ad product gives the appearance that it is focusing on its biggest profit opportunity. What about the other 95% of content not being 100% verified?
After its reluctance to integrate with any third parties, YouTube is now opening up its partnerships list. The platform is already integrated with Moat and OpenSlate and is working through IAS and DoubleVerify partnerships.
Brands can evaluate risk after an ad has already appeared, or run only across curated channels, but no current integration allows them to block an impression before it is served next to offensive content.
I look forward to other developments in this space, especially with image recognition and ad blocking. Until we can successfully block an ad from showing across violative content, brands are still open to risk.
The requirements for the YouTube Partner Program have changed from 10,000 views at a channel level to 4,000 hours of watch time within the past 12 months and 1,000 subscribers. Monitoring of audience engagement, community strikes and spam has also been introduced to add another layer of security. This has reduced the number of YouTube partners by cutting out all the smaller players, but will it also cut out the risky players?
YouTube will probably spend less on resources needed to monitor the decreasing number of partners, but I don’t think the changes are drastic enough to make a real difference in terms of brand safety. Fraudsters will adapt to new regulations and create new ways of generating bot views and subscribers. I predict YouTube's next move will be to increase the price of Preferred inventory because it's even more premium now.
It took a slew of bad news stories and advertiser losses for YouTube to start making brand-safety changes. The recent updates are aimed at improving brand safety on its premium ad products, which generate the most revenue. But I think there is a lack of focus on brand safety across most of the content on the platform. As a platform creator, YouTube must perform due diligence to be a guardian of the content it’s hosting.