
Russia’s Invasion Of Ukraine Highlights Big Tech’s Struggle To Moderate Content At Scale

Bachevsk, Ukraine, October 2021: a control sign at the entrance to the Ukrainian checkpoint from Russia. Sign text translation: “Ukraine.”

All of the large social platforms have content moderation policies.

No belly fat ads, no ads that discriminate based on race, color or sexual orientation, no ads that include claims debunked by third-party fact-checkers – no ads that exploit crises or controversial political issues.

No graphic content or glorification of violence, no doxxing, no threats, no child sexual exploitation, nothing that promotes terrorism or violent extremism. And on and on.

The policies sound good on paper. But policies are tested in practice.

The ongoing Russian invasion of Ukraine is yet another example that content moderation will never be perfect.

Then again, that’s not a reason to let perfect get in the way of good.

For now, the platforms are mainly being reactive and, one could argue, moving more slowly and cautiously than the evolving situation on the ground calls for.

For example, Meta and Twitter (on Friday) and YouTube (on Saturday) made moves to prohibit Russian state media outlets, like RT and Sputnik, from running ads or monetizing accounts. But it took the better part of a week for Meta and TikTok to block online access to their channels in Europe, and only after pressure from European officials. Those blocks don’t apply globally.

As The New York Times put it: “Platforms have turned into major battlefields for a parallel information war” at the same time “their data and services have become vital links in the conflict.”

When it comes to content moderation, the crisis in Ukraine is a decisive flashpoint, but the challenge isn’t new.

We asked media buyers, academics and ad industry executives: Is it possible for the big ad platforms to have all-encompassing content and ad policies that handle the bulk of situations, or are they destined to be roiled by every major news event?

  • Joshua Lowcock, chief digital & global brand safety officer, UM
  • Ruben Schreurs, global chief product officer, Ebiquity
  • Kieley Taylor, global head of partnerships & managing partner, GroupM
  • Chris Vargo, CEO & founder, Socialcontext

Joshua Lowcock, chief digital & global brand safety officer, UM

The major platforms are frequently caught flat-footed because, it appears, they spend insufficient time planning for worst-case outcomes and are ill-equipped to act rapidly when the moment arrives. Whether this is a leadership failure, groupthink or a lack of diversity in leadership is up for debate.

At the heart of the challenge is that most platforms misappropriate the concept of “free speech.”

Leaders at the major platforms should read the Austrian-born philosopher Karl Popper and his work, “The Open Society and Its Enemies,” to understand the paradox of tolerance: We must be intolerant of intolerance. The Russian invasion of Ukraine is a case in point.

Russian leadership has frequently shown it won’t tolerate a free press, open elections or protests, yet platforms still give Russian state-owned propaganda free rein. If platforms took the time to understand Popper, took off their rose-colored glasses and did scenario planning, maybe they’d be better prepared for future challenges.

Ruben Schreurs, global chief product officer, Ebiquity

In moments like these, it’s painfully clear just how much power and impact the big platforms have in this world. While I appreciate the need for nuance, I can’t understand why disinformation-fueled propaganda networks like RT and Sputnik are still allowed to distribute their content through large US platforms.

Sure, “demonetizing” the content by blocking ads is a good step (and one wonders why this happens only now), but such blatantly dishonest and harmful content should be blocked altogether – globally, not just in the EU.

We will continue supporting and collaborating with organizations like the Global Disinformation Index, the Check My Ads Institute and others to make sure that we, together with our clients and partners, can help deliver structural change. The goal is not just to support Ukraine during the current Russian invasion, but to ensure that ad-funded media and platforms are structurally unavailable to reprehensible regimes and organizations.

Kieley Taylor, global head of partnerships & managing partner, GroupM

Given the access these platforms provide for user-generated and user-uploaded content, there will always be a need to actively monitor and moderate content, with all hands on deck in moments of acute crisis. That said, the platforms have made progress, both individually and in aggregate.

Individually, platforms have taken action to remove coordinated inauthentic activity as well as forums, groups and users that don’t meet their community standards.

In aggregate, the Global Internet Forum to Counter Terrorism is one example of an entity that shares intelligence and hashes terror-related content to expedite removal. The Global Alliance for Responsible Media (GARM), created by the World Federation of Advertisers, is another example.

GARM has helped the industry create and adhere to consistent definitions, and a methodology for measuring harm, across the respective platforms. You can’t manage what you do not measure. With the deeper focus that ongoing community-standard enforcement reports provide, playbooks have been developed to lessen the spread of egregious content: removing it from proactive recommendations and search, bolstering native-language review and relying on external fact-checkers.

There will be more lessons to learn from each crisis, but the infrastructure to take swifter, more decisive action is in place and being refined. How much work remains depends on the scale of each platform and the community of users it hosts.

Chris Vargo, CEO & founder, Socialcontext

Content moderation, whether of social media posts, news or ads, has always been a whack-a-mole problem. Where social media platforms and ad platforms differ, however, is in how they codify, operationalize and contextualize definitions of what is allowed on their platforms.

Twitter, for instance, has bolstered its health and safety teams, and, as a result, we have an expanded and clearer set of defined behaviors that are not allowed on the platform. Twitter and Facebook both regularly report on the infractions they find, which further builds an understanding of what those platforms will not tolerate. Today, it was Facebook saying it would not enable astroturfing and misinformation in Ukraine by Russia and its allies.

But ad tech vendors themselves haven’t been pushed enough to come up with their own definitions, so they fall back on GARM’s framework: a set of broad content categories with little to no definition. GARM does not act as a watchdog. It does not report on newsworthy infractions. And ad tech vendors feel no obligation to highlight the GARM-related infractions they find.

It’s possible to build an ad tech ecosystem with universal content policies, but this would require ad tech platforms to communicate with the public, to define concretely what content is allowed on their platforms and to report real examples of the infractions they find.

Answers have been lightly edited and condensed.
