Threads Will Have Ads Next Year; Perplexity, Gen-AI Search Engine, Already Does
Threads will introduce ads to capitalize on users fleeing X; Perplexity tests ads and sponsored queries; and Amazon pulls the plug on Freevee.
Instead of erasing the idea of brand safety, we should be developing smarter, more nuanced solutions that protect both news publishers and advertisers.
In today’s newsletter: Google Performance Max enables third-party brand safety measurement for YouTube; gen AI firms roll out new data-scraping bots to replace those blocked by publishers; and RAG deals give publishers more leverage in licensing their content to gen AI.
Most digital marketers know the importance of personalized ad creative. But even those brands often use a one-size-fits-all landing page. That’s the problem startup Fibr hopes to solve.
Many AI tools analyze and make decisions based on large amounts of data, or quickly generate creative content. AdCreative.ai, however, wants to do both.
Paul Pallath, VP of applied AI at cloud consultancy Searce, spoke with AdExchanger about a few hypothetical – but very possible – ethical scenarios a marketer might face when using generative AI.
AI-driven creative automation company Creatopy, which raised $10 million in series A funding earlier this month, aims to make it easier for marketers to create and personalize their content.
Marketing analytics and ad ops teams are overwhelmed with data, which is compounded by the accelerated pace of generative AI-produced content.
Here’s today’s AdExchanger.com news round-up… All Is Fair In Love And Panels: Nielsen has changed its mind: It won’t force Amazon streaming data into its TV ratings, Ad Age reports. After more than a week of drama, which involved exchanging letters with the Video Advertising Bureau, Nielsen is going back […]
The Washington Post is experimenting with a variety of large language models – but setting boundaries and guidelines to keep them in check.