The duopoly isn’t a duo when it comes to brand safety.
Because their platforms operate so differently, Facebook and Google each face distinct challenges in ensuring brand safety.
While the primary risk on YouTube is adjacency – ads appearing within or beside unsavory or questionable content – the personal nature of Facebook’s news feed is a different beast altogether.
That’s News Feed To Me
YouTube is an open service, more like broadcast than anything else. As YouTube CEO Susan Wojcicki put it in May at Google’s I/O 2017 developer conference: “Anyone in the world can upload a video. Anyone can watch.”
And anyone – including the press – can take a screengrab of a reputable brand’s ad served up as pre-roll on a jihadi recruitment video, for example, or a rape apologist’s diatribe.
“Journalists are scouring the web for inappropriate content and the brands associated with it,” said John Montgomery, GroupM’s global EVP of brand safety. “Even if it’s only one impression that creeps through, if it’s captured, it can end up on the front page of The Times of London or the Guardian.”
But everyone’s Facebook news feed is different.
A neo-Nazi who just had a baby might see an ad for diapers in between racist posts, but the ripple effect of that ad appearing next to hate speech doesn’t go beyond his or her news feed.
There’s always the potential that something unpleasant posted by that person goes viral beyond the limited reach of their network, but unlike contextually targeted ads on a webpage or TrueView on YouTube, Facebook ads are confined to an individual’s algorithmically generated feed.
That might seem safer on its face, but it also makes Facebook more difficult to monitor than Google or the open web.
“Facebook is a closed environment, so it’s harder for brands to spot brand safety problems because they can’t see all of the slots where Facebook fits their advertising in,” said John Snyder, CEO of contextual targeting platform Grapeshot.
Platform Turn-Off
When buyers lack visibility, they also lose the stomach to spend.
Most advertisers are fine buying on Facebook proper, despite a growing desire for third-party brand safety measurement among agencies and advertisers, including GroupM, Procter & Gamble and Unilever.
Although advertisers aren’t allowed to place third-party brand safety tags on Facebook, the consensus is that a walled garden is a relatively secure place to buy media, albeit a frustrating one from a data-sharing perspective.
“The walled gardens are safer – and this is coming from someone who makes their living outside the walled gardens – because they control the environment,” said James Green, CEO of retargeting platform Magnetic. “The content isn’t necessarily safer, but the likelihood your ad will be seen by a real person is far higher on Facebook than off.”
But that’s where Facebook has an issue – off of its platform.
Audience Network, Facebook’s off-platform advertising play, is still too opaque for the likes of GroupM, which continues to recommend that its agencies opt out of running ads on Audience Network.
As a concession, Facebook rolled out a new ad control tool last week that gives advertisers pre-campaign insight into where their ads are set to run before they commit to a buy on Audience Network.
While that’s a step in the right direction and helps with planning, it’s not enough to get GroupM buying on Audience Network. Facebook still isn’t providing post-campaign, domain-level delivery reports to show advertisers where their ads actually ran.
“We very well might find out that a lot of the inventory is above board with nonnegative adjacencies,” said Kieley Taylor, GroupM’s head of paid social. “But in the absence of allowing us to see where we are, we lack comfort in going off network.”
Facebook’s “it’s cool, don’t worry, you’re safe, we promise” off-platform approach just doesn’t fly with GroupM.
“We don’t have the appetite for Facebook’s on-your-honor system, and clients continue to raise their voice in the choir as well,” Taylor said. “It can’t just be ‘trust us’; it has to be third-party-verified. There’s no lack of available inventory on the open market, and we can hold it to a higher standard than what Facebook is offering with this iteration of Audience Network, so we’re choosing not to go through that doorway right now.”
Audience Network actually provides less transparency than buyers often get on the open web, where third-party measurement is welcome.
“On Audience Network, Facebook doesn’t have control or visibility into the content, which is an area we do have visibility into,” said Scott Knoll, CEO of Integral Ad Science. “By this point, we’ve indexed every page on the web with ads outside of the walled gardens and we have a good understanding of the relative risk.”
The lack of domain-level reporting isn’t a technical limitation, Taylor said, but rather something Facebook appears to be choosing not to do.
“We continue to agree to disagree, although we’re having an ongoing conversation,” she said. “We’re just asking for an apple to compare to an apple, but right now, we’re getting a pineapple.”
Facebook’s Fixes
In addition to transparency issues off platform, there are several bad apples with the potential to spoil the bunch on Facebook’s owned-and-operated site and app – namely fake news and Facebook Live, which has become an unwilling host to fatal shootings, sexual assaults and self-immolation.
Facebook is combatting the spread of hoax news on its platform in several ways. It works with third-party fact-checking organizations, such as Snopes and Poynter, to label fake stories; it de-prioritizes links to low-quality stories in the news feed and blocks junky sites from using Facebook’s advertising tools to monetize; and it is more forthcoming about how it handles and thinks about sensitive subjects, including terrorism, online propaganda and its own role in the distribution of hoax stories.
In May, Mark Zuckerberg announced plans to hire 3,000 additional human moderators for Facebook’s community operations and content safety teams to monitor and remove harmful, violent and inappropriate content. Facebook is also investing heavily in artificial intelligence to help root out extremist posts.
In a blog post on Thursday, Elliot Schrage, Facebook’s VP of global comms, marketing and public policy, acknowledged that it’s time for an open debate on “complex subjects.”
“We take seriously our responsibility – and accountability – for our impact and influence,” Schrage wrote.
Facebook is “taking this issue very seriously,” said GroupM’s Montgomery, but it has its work cut out for it on the fake news front.
“The fact is that fake news is a tough thing to detect, more so than hate speech,” Montgomery said. “In many ways, hate speech is worse, but at least there are specific words you can identify and avoid, whereas fake news can be comprised of perfectly acceptable words, even though it’s a socially and politically misleading lie that undermines the objective of the free press.”
And just knowing that there’s misinformation and dubious content being distributed gives some brands an overall uneasy feeling about Facebook.
“It’s not just about adjacency,” Montgomery said. “There are some consumers that are going to feel uncomfortable about supporting brands that advertise on a platform that they perceive allows inappropriate behavior, fake news or hate speech. No action has been taken by brands we talk to because of that, but it does come up in discussions we have with them.”
Brands, for their part, have the power to push the platforms for more control and greater transparency.
A slew of high-profile advertisers boycotted YouTube after revelations that its targeting algorithm was placing ads adjacent to controversial content. Google introduced more sophisticated content filtering mechanisms, and now most of the advertisers that halted their YouTube spend over safety issues are back advertising on the platform.
On Sunday, Google laid out further plans to intensify its fight against extremist content online by hiring more people and recruiting more independent experts to identify offensive videos; by more clearly labeling – and not monetizing – problematic content even when it doesn’t explicitly violate YouTube’s policies; and by devoting more engineering talent to the cause.
It’s incumbent on the buy side to push change through, said Grapeshot’s Snyder.
“The brand safety issue is shining a spotlight on the need for more confidence about what’s happening overall and about where ads are being placed, not just clicks and conversions,” Snyder said. “It’s too easy to say that Facebook has scale, so it must be good. The people spending the money need to demand safety and transparency, and they need to demand a better sense of how their dollars are working.”