Crafting A Conscience For Generative AI In Marketing

When a generative AI tool spouts misinformation, breaks copyright law or perpetuates hateful stereotypes, it’s the people using the technology who take the fall.

After all, a large language model (LLM) generating text or an image “doesn’t use a brain of its own” or understand the implications of what it’s generating, said Paul Pallath, VP of applied AI at Searce, a cloud consulting company founded in 2004 that provides AI services such as assessing AI “maturity,” or readiness, and identifying use cases.

“We are far away from machines doing everything for us,” said Pallath, who held executive data science and analytics roles at SAP, Intuit, Vodafone and Levi Strauss & Company before joining Searce last year. (He’s also got a PhD in machine learning.)

Humans can’t outsource their ethical conundrums to algorithms and programs. Instead, we must “ground ourselves in empathy,” Pallath said, and develop responsible machine learning practices and generative AI applications.

Searce, for example, works with clients to move beyond the abstract. It guides companies through generative AI implementations and helps them establish frameworks for ethical, responsible AI use.

Pallath spoke with AdExchanger about a few hypothetical – but very possible – ethical scenarios a marketer might face.

If a generative AI tool produces factually inaccurate or misleading information, what should a marketer do?

PAUL PALLATH: Understand, verify and fill in the gaps of everything that’s coming out. There will be a lot of content that LLMs create that feels like truth but isn’t. Don’t assume anything. Fact-checking is very important.

What if I’m unsure if an LLM has trained on copyrighted material?

Avoid using it unless you have the rights and explicit permission from the copyright holder, because otherwise it creates significant exposure for your company.

The LLM should also spit out the references from which that content has been generated. It’s necessary to check every reference. Go back and read the original content. I’ve seen LLMs create a reference, and the reference doesn’t exist. It just cooked up information.
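
Part of that reference check can be automated. Below is a minimal Python sketch, assuming the LLM returns its citations as plain URLs: it flags links that don't resolve at all, a common signature of a fabricated reference. A link that does resolve still proves nothing about what the source actually says, so a human has to read whatever survives the filter.

```python
import urllib.error
import urllib.request

def flag_suspect_references(urls: list[str], timeout: int = 10) -> list[str]:
    """Return the URLs that fail to resolve -- candidates for fabricated citations."""
    suspect = []
    for url in urls:
        request = urllib.request.Request(url, method="HEAD")
        try:
            with urllib.request.urlopen(request, timeout=timeout) as response:
                if response.status >= 400:
                    suspect.append(url)
        except (urllib.error.URLError, ValueError):
            # HTTPError (4xx/5xx) is a URLError subclass; ValueError covers malformed URLs.
            suspect.append(url)
    return suspect

# Hypothetical citations extracted from an LLM's answer:
citations = [
    "https://example.com/real-study",
    "https://example.com/made-up-paper-2023",
]
for url in flag_suspect_references(citations):
    print(f"Could not verify: {url} -- read the original source or discard the claim.")
```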

Say a marketer’s looking for ad imagery, and an LLM keeps returning lighter-skinned people. How can they steer it away from harmfully reinforcing and amplifying biases?

It’s about how you design your prompts. You need governance around prompt engineering – typically a review of the different types of prompts you should be using – so the content coming out isn’t biased.
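
What that governance looks like in code will differ by company. The sketch below is purely illustrative – the blocked terms, keywords and required descriptors are hypothetical stand-ins for rules a review board would actually set – but it shows the shape of a gate that rejects a prompt before it ever reaches the model:

```python
# Hypothetical prompt-governance gate; the rule lists would come from your review board.
BLOCKED_TERMS = {"normal family", "ideal candidate look"}   # vague, bias-prone phrasing
PEOPLE_KEYWORDS = {"person", "people", "model", "face"}     # prompts that depict humans
REQUIRED_DESCRIPTORS = {"diverse", "varied skin tones", "range of ages"}

def review_prompt(prompt: str) -> tuple[bool, str]:
    """Approve or reject an image-generation prompt before it is sent to the model."""
    lowered = prompt.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            return False, f"Contains flagged term: '{term}'"
    if any(k in lowered for k in PEOPLE_KEYWORDS):
        if not any(d in lowered for d in REQUIRED_DESCRIPTORS):
            return False, "Prompts depicting people must specify diverse representation."
    return True, "Approved"

print(review_prompt("A person smiling at a laptop in an office"))
# (False, 'Prompts depicting people must specify diverse representation.')
print(review_prompt("A diverse group of people smiling at a laptop in an office"))
# (True, 'Approved')
```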

If you have a repository of approved images, the LLM could create different surroundings, change the color, the clothes or the brightness, or render the image in high resolution.

For retail companies, if they have permission to use a person’s image, they can fit different apparel on top [of existing images] so it can be part of their marketing messages. They can have brand-approved ambassadors who don’t have to come in for several hours of photo and video shoots.

Should companies pay these brand-approved ambassadors for AI-generated variations of their images?

Yes. You’d compensate for every digital artifact you create with different models. Companies will start to work on different compensation mechanics.

LLMs train on what’s online, so they often favor “standard” forms of dominant languages, such as English. How can marketers mitigate language bias?

LLMs are maturing from a translation standpoint, but there are variations even within the same language. Which region the content is coming from, who has vetted the content, whether it’s true from a cultural standpoint, whether it stands by the belief system of that country – that’s not knowledge the LLMs have.

You need a human in the loop doing a rigorous review of the content that’s getting generated before it’s published. Have cultural ambassadors within your company who will understand the nuances of a culture and how it will resonate.
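
One way to enforce that human gate is to make publication mechanically impossible without sign-off. The Python sketch below uses hypothetical market codes and reviewer names: generated content starts unpublished and stays that way until the cultural ambassador assigned to the target market approves it.

```python
from dataclasses import dataclass, field

# Hypothetical mapping: each market requires sign-off from its cultural ambassador.
REQUIRED_REVIEWERS = {
    "de-DE": {"cultural_ambassador_de"},
    "ja-JP": {"cultural_ambassador_jp"},
}

@dataclass
class GeneratedContent:
    text: str
    target_market: str
    approvals: set = field(default_factory=set)

def can_publish(content: GeneratedContent) -> bool:
    """Publishable only once every required human reviewer has signed off."""
    required = REQUIRED_REVIEWERS.get(content.target_market, set())
    return bool(required) and required.issubset(content.approvals)

ad = GeneratedContent(text="Machine-translated campaign copy", target_market="ja-JP")
print(can_publish(ad))                  # False: no human has reviewed it yet
ad.approvals.add("cultural_ambassador_jp")
print(can_publish(ad))                  # True: the required reviewer signed off
```

Note that the check fails closed: content aimed at a market with no designated reviewer can never publish, which is the safer default.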

Is generative AI morally dubious from a sustainability perspective, given the power consumption involved in running LLMs?

A significant amount of computing power goes into training those models.

The carbon-neutral targets that large companies are chasing over the next five to 10 years are fundamental to which vendors they choose, so those vendors don’t add to their carbon emissions. When they make those choices, they have to look at the energy the vendors’ data centers use.

How can we prevent exploitation, such as using prisoners or very poorly paid workers to train LLMs, and other bad behaviors by LLM makers?

You have to have data governance and data lineage – in terms of who created the data, who touched the data, even before the data actually lands in the algorithms – and [a log of] the decisions that have been made [along the way]. Data lineage gives you transparency and allows you to audit the algorithms.
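
In practice, a lineage trail can start as simply as an append-only log attached to each dataset. The sketch below uses an illustrative JSON Lines schema, not any standard: every touch records who acted, what they did and what decision was made, so an auditor can trace the data before it ever lands in an algorithm.

```python
import json
from datetime import datetime, timezone

def record_lineage(log_path: str, dataset: str, actor: str,
                   action: str, decision: str) -> None:
    """Append one who/what/when entry to the dataset's lineage log (JSON Lines)."""
    entry = {
        "dataset": dataset,
        "actor": actor,        # who created or touched the data
        "action": action,      # e.g. "collected", "labeled", "filtered"
        "decision": decision,  # the judgment call made at this step
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical trail for a training set, recorded before any model sees it:
record_lineage("lineage.jsonl", "ad_copy_v1", "vendor_x", "labeled",
               "outsourced labeling -- verify pay and working conditions")
record_lineage("lineage.jsonl", "ad_copy_v1", "data_team", "filtered",
               "removed records lacking consent documentation")
```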

Today, that auditability is not there.

Transparency is necessary for us to weed out the unethical elements. But we are dependent on the large companies that have created these models to come out with transparency metrics.

This interview has been edited and condensed.
