This summer, after looking into how advertising works on YouTube, ad tech research firm Adalytics released two reports in quick succession that struck a nerve with advertisers and media buyers.
This research resonated in part because brands want more transparency into and control over their digital media buys, and they often don’t feel like they’re getting it.
In its first report, Adalytics alleged YouTube charged premium prices for ads running on low-quality sites through its Google Video Partners program. The second report claimed YouTube allows personalized ads on “made for kids” (MFK) channels, which would be a violation of the Children’s Online Privacy Protection Act (COPPA).
Google disputes the findings and the methodology of both reports. Last week, it released a detailed rebuttal outlining how ad serving and reporting works on YouTube channels with MFK content.
According to Google, affinity-based ads served on MFK channels aren’t necessarily indicative of personalization, because channels can include a mix of MFK and non-MFK content.
And when ads run on MFK videos, Google says, the only targeting that happens is through contextual signals. It’s possible to create an “affinity audience” (e.g., boating enthusiasts or movie lovers) without relying on personal data by looking at “aggregated data and connections or trends” that Google sees for multiple types of content across YouTube.
Be that as it may, Adalytics tapped into a wellspring of frustration among media buyers, which Dan Taylor, VP of global ads at Google, acknowledges.
Advertisers and agencies “are anything but shy in terms of giving us feedback into the types of features, reporting and tools they want,” Taylor said, “and we’re committed to being transparent about our policies and protections.”
“This is an opportunity to help address any misunderstandings about how our protections work, and we’re welcoming feedback and actively exploring ways to improve the clarity of our policies and reporting tools to help avoid confusion,” he said. “Genuine inquiries about how our tools work are making us better – misinformed analyses that assume bad intent on our part are not.”
Taylor spoke with AdExchanger.
AdExchanger: Google says it uses aggregated data and connections or trends culled from different types of content on YouTube to target contextual ads on “made for kids” content. How does that work?
DAN TAYLOR: I’ll use the boat example. A contextually generated affinity label would have been derived from aggregated signals from related videos that suggest a “made for kids” video is contextually relevant. We can assign an affinity label to a video without having individual viewer data on that video.
Are there certain derived affinities that don’t make sense to even offer as an option for “made for kids” content? I don’t know many kids that are going to buy a boat or a motorcycle.
Even in cases where we’re not personalizing, we have restrictions on the types of ads that can run against content viewed by people under 18.
But to your specific question as to why a boating enthusiast would be on a “made for kids” video, I think about my own experience. When I watch YouTube or television with my kids, we see ads that are relevant to me on content that is kid-oriented, like an automotive ad that might run during Saturday morning cartoons.
We recognize that there is probably co-viewing going on, and I don’t necessarily see that as problematic, especially when there’s no data being used to personalize.
But putting aside COPPA, Adalytics and the inner workings of YouTube’s ad tech, what would be the point of running an ad for Shopify, Verizon, Grammarly, a local Mazda dealership or a Paramount Plus show about special ops forces on a nursery rhyme video?
We’re not personalizing the advertising that takes place on “made for kids” videos, but that doesn’t necessarily mean the viewing taking place on those videos isn’t relevant to an ad we serve there. We do a lot of advertising in a non-personalized fashion across all of our different properties … but we still strive to serve relevant ads in the moment.
We use aggregated information and understanding of ads that have performed well in other places to surface ads in places where we feel it’s going to be relevant, even if we’re not using individual data to do so.
This is more of a philosophical question: When do the specifics of what’s happening in the background become a distinction without a difference? Perhaps when the person on the other end of the screen – or, more likely, that person’s parent – feels like an ad was served based on a personalized signal, because the ad isn’t for a toy or something that would be more obviously relevant for a child.
What consumers want most is transparency into and control over the types of ads they’re seeing, which is why we’ve invested in things like My Ad Center. It’s why we allow consumers to turn off personalization entirely and why we have an ads transparency center where they can see all the ads being served by a given advertiser.
To go back for a moment to the Paramount Plus example, the Adalytics report cited an ad that ran on a CoComelon video for a show about special ops forces that included explosions. What would be relevant about that for a CoComelon audience?
I can’t speak to this specific example, because it wasn’t served to me, but I do want to reiterate that we’re not allowing personalized ads on “made for kids” content. We do use affinity categories on “made for kids” content based on contextual information and what we feel is appropriate, and we continue to iterate on those policies.
Can you share any specific examples of how you plan to give advertisers more transparency based on their feedback?
We’re looking at the ways advertisers understand how to use our tools.
Advertisers can easily opt out of showing ads on “made for kids” content altogether. We have a comprehensive help center page that breaks down content exclusions and how our audience targeting works, including how life events and affinity targeting intermix with contextual or individual user-level data.
But we’re hearing through the course of this whole dialogue that there are some places where we can make things clearer, and that’s the type of thing we’re looking at.
Is there a way for an advertiser to guarantee it’s not running on any MFK content whatsoever, even contextually targeted ads?
Advertisers can opt out of serving ads on content suitable for families, so ads will not serve on content we’ve identified as either “made for kids” on YouTube or as child-oriented content elsewhere in our products.
What about providing more transparency into Performance Max?
Performance Max is built to deliver advertiser outcomes using AI and machine learning. We’re iterating on the types of reporting that advertisers can see so they can understand how the AI is working to drive performance. We don’t have anything new to announce, but we continue to think about how to deliver the right offering to advertisers that want to see more or less in that regard.
If you had a bunch of advertisers in front of you right now, what would your message be to them?
I’d say three things.
First, Adalytics’ analysis did not indicate that ads on “made for kids” content were personalized, nor does it indicate a violation of our policies. Second, it failed to account for the fact that YouTube channels, even those designated by the creator as being made for kids, can have a mix of “made for kids” and non-“made for kids” content. And, finally, Adalytics ignored, or didn’t understand, the fact that affinity segments can be created either from personalized viewer data or from contextual signals only.
Adalytics aside, how would you respond to the emotions that have been whipped up by these reports?
I’d say we welcome research into how our products work because it holds us and the industry accountable. And we appreciate the opportunity to address any misunderstandings.
This interview has been lightly edited and condensed.