Following Russia’s invasion of Ukraine, the ad tech industry took steps to freeze Russian-owned media companies out of the advertising ecosystem.
But programmatic technology continues to be used by parties on both sides of the conflict as a platform to conduct psychological warfare.
Pro-Kremlin fake news sites that spread misinformation about the war, both inside and outside the region, continue to be monetized, while clickbait ads linking to fundraising scams, propaganda and graphic content keep propagating. Both trends highlight vulnerabilities in ad tech’s largely ineffective response to the conflict.
The tide of propaganda
Even Google, with all of its engineering power, hasn’t been doing enough to stop serving ads on Russian-owned websites that have been sanctioned by the U.S. Treasury Department.
Although Google said it would no longer do business with these sites, it continued allowing them to monetize, according to research from campaign performance platform Adalytics.
One reason this has been difficult to clamp down on is that the problem isn’t solved simply by demonetizing Russian state media outlets, such as RT and Sputnik.
There is a multitude of online entities that repurpose content published by these outlets in different regions and languages, said Or Levi, CEO and founder of AdVerif.ai, which uses machine learning to uncover sites that spread misinformation.
“There are hundreds of surrogates that are propagating content sourced from [Russian state media],” Levi said, “and advertisers are unwittingly sponsoring this content.”
In the majority of cases, these sites don’t adhere to standards for assessing legitimate journalism, like those promoted by the International Fact-Checking Network (IFCN).
Identifying and demonetizing these misinformation sites is therefore a near-insurmountable task: they proliferate faster than watchdogs can flag them, Levi said. AdVerif.ai has been using standards from the IFCN and many other global fact-checkers to train its AI models in an attempt to create a scalable way of identifying fake news.
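The general approach Levi describes – learning from fact-checker-labeled examples to score new content at scale – can be illustrated with a toy text classifier. This is purely a sketch of the underlying idea (a naive Bayes model over bag-of-words features); it is not AdVerif.ai’s actual system, and the training phrases below are invented examples.

```python
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    """Toy bag-of-words naive Bayes classifier with Laplace smoothing."""

    def __init__(self):
        self.word_counts = {}        # label -> Counter of word frequencies
        self.doc_counts = Counter()  # label -> number of training docs
        self.vocab = set()

    def train(self, docs):
        # docs: iterable of (text, label) pairs labeled by fact-checkers
        for text, label in docs:
            self.doc_counts[label] += 1
            wc = self.word_counts.setdefault(label, Counter())
            for w in tokenize(text):
                wc[w] += 1
                self.vocab.add(w)

    def predict(self, text):
        total_docs = sum(self.doc_counts.values())
        best_label, best_lp = None, float("-inf")
        for label, wc in self.word_counts.items():
            # log prior from class frequency
            lp = math.log(self.doc_counts[label] / total_docs)
            n = sum(wc.values())
            for w in tokenize(text):
                # add-one smoothing so unseen words don't zero out the score
                lp += math.log((wc[w] + 1) / (n + len(self.vocab)))
            if lp > best_lp:
                best_label, best_lp = label, lp
        return best_label

# Invented toy training set, standing in for fact-checker-labeled articles
docs = [
    ("secret plot nato staged invasion hoax", "misinfo"),
    ("shocking truth they hide staged hoax", "misinfo"),
    ("officials confirmed casualties reuters reported", "legit"),
    ("reuters reported verified sources confirmed", "legit"),
]
clf = NaiveBayes()
clf.train(docs)
```

Once trained, new pages can be scored in bulk, e.g. `clf.predict("staged hoax plot")`, which is where the scalability comes from: the model evaluates pages far faster than human reviewers can. Real systems use far richer features and larger labeled corpora, but the train-then-score structure is the same.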
Changing the narrative
The current media environment in Russia is dominated by state-sponsored news that promotes the Kremlin’s justification of the invasion as an attempt to liberate Ukraine from nationalist forces. (Outside of Russia, the invasion is widely seen as an attempt to reconsolidate former Soviet territories.)
In order to ensure that only the Russian state’s viewpoint is represented in the media, non-Russian news sites, social media platforms and search engines are heavily censored, if not outright banned.
In response, pro-Ukrainian entities are using ad tech as a backchannel to Russian audiences to counter the one-sided narrative pushed by the Kremlin and drum up opposition to the conflict within Russia’s borders.
For example, MGID, a native advertising platform with offices in the U.S. and Ukraine, has been using its technology to target messages to Russian mothers, imploring them not to let their sons be sacrificed for Russian President Vladimir Putin’s expansionist campaign in Ukraine, MGID CEO Sergii Denisenko told AdExchanger. MGID geotargets these messages to Russia and places them on Russian social media platforms, such as VK and Odnoklassniki.
Ads targeted to Russian mothers have been common throughout the conflict, including pro-Russian messages urging mothers not to believe international reports on Russian casualties, said Alisha Rosen, marketing and branding manager at ad security company GeoEdge.
UK-based sustainable advertising provider Good-Loop has also used ad tech as a point of entry into Russia’s one-sided media environment, said Amy Williams, the company’s CEO and founder.
“The opacity and complexity of ad tech means it’s one of the few windows into a very censored country,” Williams said. “Advertising technology is so hard to pin down, and that’s famously a big problem in our industry. But in this instance, it’s a strength, because it means you can get information into Russian communities that the Kremlin would never want them to have access to.”
Good-Loop is part of a campaign organized by digital marketing expert Rob Blackie that uses programmatic tech to serve Russian audiences with links to legitimate news coverage of the war, as well as links to download VPNs that can bypass Russia’s web restrictions on non-state news and international outlets, Williams said.
As of early March, these ads had been seen roughly 2 million times and clicked on by around 42,000 users, according to a report by Fast Company.
The campaign works in collaboration with members of the UK-based Institute of Practitioners in Advertising. Creative agencies in Ukraine make the content, and a volunteer network of ad ops people across the UK buy the inventory.
Opportunism and clickbait
But programmatic pipelines are also being used in more nefarious and opportunistic ways.
Some parties on both sides of the conflict, as well as bad actors from outside the region looking to exploit the war for profit, are taking advantage of vulnerabilities in the ad tech ecosystem to serve clickbait ads, Rosen said.
“We’re seeing an influx of salacious creatives that are engineered to cause panic and elicit clicks,” Rosen said. “These inflammatory campaigns are flooding programmatic channels and they’re going to malicious and explicit pages, scam sites and porn sites.”
Rosen shared some examples of these clickbait ads with AdExchanger, many of which linked to graphic imagery of dead or wounded soldiers and civilians. One ad featured a pro-Ukrainian message that read “No to Putin’s War! Only the people can stop this!” – but when clicked, the ad redirected to an explicit porn site that used malware to hijack the user’s back button, preventing them from exiting the page.
In many cases, these clickbait ads redirect users to encrypted Telegram, WhatsApp and Signal channels that share questionable audio, video and imagery purported to be sourced from combat zones in the region.
Another common scam is to target users in the U.S. and Europe with ads soliciting cryptocurrency donations that purport to support the Ukrainian defense effort but have nothing to do with funding the Ukrainian military, Rosen said.
Ad industry responsibility
To help stem this tide of misleading ads, the global ad industry needs more standardized definitions of what types of ads constitute clickbait so they can be more easily blacklisted, Rosen said.
While the ad industry has taken reactive measures to punish Russia for its military aggression, it has not been proactive in preventing its technology from being exploited to push false narratives and sow division, Levi said.
For example, Russia has been using advertising and social media as vectors for social engineering for years, as documented during the 2016 U.S. presidential election and the UK’s Brexit referendum. These same dynamics played out in the leadup to the invasion of Ukraine, according to Levi.
“Alongside the war that’s happening on the ground, there’s been an information war that’s been going on for the last few years,” he said. “As part of efforts to combat misinformation, we have been working with an EU task force since 2018 to look into how these dynamics are evolving, and you can see in the last few months there has been a rapid escalation.”
But most of the big ad platforms did not explore measures to prevent monetizing this escalation, choosing instead to take action only after the invasion presented an opportunity to move against Russia.
“I don’t think Google, Facebook and The Trade Desk are sitting in a back room somewhere saying ‘Let’s monetize everything we can at all costs,’” said David Murnick, an advisory board member at the Brand Safety Institute. “They have put a lot of resources against mitigating risk to brands and not monetizing or supporting the bad and harmful content out there – [but] on the flip side, they also have the difficult task and responsibility to support free speech and not independently make a decision on what is right or wrong.”
This dynamic has led to an environment where more companies across the advertising ecosystem, in an attempt to be objective, are relying on third-party fact-checkers or artificial intelligence to identify fake news and misinformation, rather than trying to take on the responsibility internally, Murnick said.
But these blanket solutions tend to demonetize misinformation outlets and legitimate news sources equally, whereas the better approach would be to proactively support legitimate news, Good-Loop’s Williams said.
“Certain brand messages might not be appropriate,” she said. “But where advertisers can stand up for independent press and make sure brands are supporting and funding real journalism, that’s something proactive and positive our industry can do.”