The ad industry is all in on generative AI, a buzzword that makes data clean rooms or blockchain look quaint.
But for the most part, that interest is reserved for the, well, generative aspect – ChatGPT to replace film and video production, say, or to fill in ad copy.
What’s missing right now is the analytics to understand what these generative AI query-masters are saying about your products.
At least that’s the idea behind Evertune AI, a new generative AI analytics startup co-founded by Brian Stempeck, former strategy chief at The Trade Desk and now Evertune’s CEO, along with two other TTD alums, Poul Costinsky and Ed Chater, both of whom joined TTD through its acquisition of the cross-device graph company Adbrain.
The idea started with the group doing their own product research on various large language models (LLMs), such as Perplexity, Llama, ChatGPT and Gemini, Stempeck told AdExchanger. It was surprising to see the variety of responses, even to the same queries, he said, and hard to disentangle why the LLMs suggested certain products or described them in different ways.
Evertune, which announced a full commercial launch this month, is also backed by enough current and former TTD top brass for a full game of pickup basketball. Roger Ehrenberg (former director and seed investor), Rob Perdue (former COO), Susan Vobejda (former CMO) and Vivian Yang (former chief legal officer) are among the notables.
And then there are general ad tech angel investors, including Jonah Goodhart (who sold Moat to Oracle), Jay Friedman (CEO of Goodway Group) and Tal Chalozin (co-founder and former CTO of Innovid).
How it works
Evertune provides a kind of market research report for advertisers and agencies called the AI Brand Index.
They can look at the report to understand which brands are recommended for certain high-value queries across the LLMs. Evertune might ask a few variations of questions like “What TV should I buy?” and “What TV is best for video games?” thousands of times. If Perplexity recommends Samsung in 80% of its responses to those queries, Samsung gets a score of 80 on that platform.
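Evertune hasn’t published the details of its scoring, but the arithmetic Stempeck describes boils down to something like the sketch below, where `ask_llm` is a hypothetical stand-in for querying a given model and extracting the brand it recommends:

```python
from collections import Counter

# Illustrative only. Ask a model the same shopping question many times,
# tally which brand each answer recommends, and turn each brand's share
# of recommendations into a 0-100 score for that platform.
def brand_scores(ask_llm, prompts, runs_per_prompt=1000):
    counts = Counter()
    total = 0
    for prompt in prompts:
        for _ in range(runs_per_prompt):
            brand = ask_llm(prompt)  # e.g. "Samsung", "LG", "Sony"
            counts[brand] += 1
            total += 1
    # A brand recommended in 80% of responses scores 80 on that platform.
    return {brand: round(100 * n / total) for brand, n in counts.items()}
```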
But generative AI search results are not like Google Search rankings. Traditional search rankings might be personalized to the user, for one thing, whereas the LLMs answer everyone the same way, Stempeck said. There is also no fixed index for generative AI results the way there is for Google Search rankings. The same Google Search query, entered 1,000 times, would return the same Google response almost 1,000 times. Not so with Gemini.
Evertune’s AI Brand Index and other basic out-of-the-box query reports are included for customers. The idea, though, is that brands will be enticed to run far more of their own analytics, which Evertune charges for based on usage. “Inevitably, every brand has its own use cases, and they want to test different things, so we have abilities for them to run custom surveys and reports,” Stempeck said.
Next steps
Say a brand is unhappy with its score on one or more mainstream LLMs. What can they do about it?
“What we’ve heard from some brands is a desire to have a pipeline, or a way to educate the AI models directly with detailed information about their product or about their brand,” Stempeck said.
But right now, those people or teams don’t exist, he added, and marketers don’t really expect a vendor like Evertune to be facilitating communications with LLM providers like Anthropic or Perplexity.
The low-hanging fruit for marketers who want to improve how they’re indexed by AI is to make sure their own sites are open to the LLM companies’ crawlers. Many find that their sites block those crawlers by default, and so aren’t being indexed well.
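A marketer (or their developers) can check that directly. A minimal sketch in Python, using the user-agent tokens the major AI companies have published for their crawlers (the exact list changes often, so treat it as illustrative):

```python
import urllib.robotparser

# Quick check of whether a site's robots.txt blocks common AI crawlers.
LLM_CRAWLERS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended", "Meta-ExternalAgent"]

def crawler_access(site):
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(site.rstrip("/") + "/robots.txt")
    rp.read()
    return {bot: rp.can_fetch(bot, site) for bot in LLM_CRAWLERS}

# Example: see which AI crawlers a site allows
print(crawler_access("https://www.example.com/"))
```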
There also might be other important online sites or apps that aren’t being picked up.
Say your product has a glowing review on a well-respected, independent blog. If that blog isn’t crawlable by the LLMs, that review is going to be missed, too. “The brand might want to recreate or syndicate that somewhere else,” Stempeck said.
The LLM bouquet
One challenge (and opportunity) with these new LLMs is the variety of responses.
For one TV brand client, Stempeck said, running thousands and thousands of queries across the LLMs showed the degree to which one model might favor metadata and reviews focused on video game compatibility, say, while others clearly don’t.
For automotive brand queries, brands can identify particular niche publishers that overindex with certain LLMs, he said.
The LLMs are also not independent businesses.
Llama, created by Meta, sets great store by Instagram and Facebook engagement.
Stempeck said one shoe brand had been stymied by low ratings and poor descriptions on Llama, which, it turned out, traced back to a flurry of reviews and conversations on Instagram about poor arch support. The company put out a series of its own content, including a podiatrist discussing the shoe’s arch support benefits.
Other brands that don’t show up well on Gemini, which is Google’s LLM, are posting more to Reddit forums and other authenticated Q&A platforms like Quora, he said. Google has signed direct data licensing deals with those platforms, he added, and it shows in the Gemini results.
The new dynamics of generative AI search responses can be a miss for the open web. Many news publishers and other digital media companies block the LLM crawlers on principle, so their content may not factor much into generative AI query responses.
“Which publishers do the best job of educating the LLMs? Which are blocking the LLM crawlers and which are being read?” Stempeck said. “That’s what advertisers are asking for help to try and understand.”