Engagement-based advertising models are coming for AI

When Alphabet reported a 14% spike in second-quarter revenue this year, Google’s boss rushed to praise the role of artificial intelligence (AI). The technology is “positively impacting every part of the business”, said CEO Sundar Pichai. But that isn’t the reality for most firms. What’s more, the AI investment market is veering deeper into bubble territory.

Torsten Sløk, the chief economist at Apollo Global Management, has warned that tech giants are staring at a brutal reckoning on Wall Street. “The difference between the IT bubble in the 1990s and the AI bubble today is that the top 10 companies in the S&P 500 today are more overvalued than they were in the 1990s,” he wrote to clients this summer, invoking the lead-up to the ruinous dotcom crash. That market correction destroyed a generation of companies and erased $5tn in share value – roughly equal to $9.3tn today.

Others are more blunt. “I am not here to belittle AI, it’s the future, and I recognise that we’re just scratching the surface in terms of what it can do,” the chief of hedge fund Praetorian Capital has written about the hazy economics of data centres. “I also recognise massive capital misallocation when I see it. I recognise insanity, I recognise hubris.”

The grand payoff from generative AI (GenAI) is proving especially elusive. A study published by MIT in August suggests only 5% of businesses using it have seen rapid revenue acceleration.

Still, the tech industry is notching up real, if imperfect, progress in models’ capabilities, and consumers’ personal use of AI is steadily growing. Executives are thus roaring ahead with their mammoth spending plans. Even without a clear path to profitability, leading US tech companies plan to pour a staggering $344bn into AI this year – a figure that will reportedly rise to half a trillion dollars in 2026.

“As long as we’re on this very distinct curve of the model getting better and better, I think the rational thing to do is to just be willing to run the loss for quite a while,” said OpenAI CEO Sam Altman the day after his organisation released GPT-5. OpenAI’s own projections show the company burning through $115bn by the end of 2029.

“None of this means that AI can’t eventually be as transformative as its biggest boosters claim,” points out business writer Rogé Karma. “But ‘eventually’ could turn out to be a long time.” And tech companies’ meteoric valuations can’t defy economic gravity forever.

For tech companies, then, a foolproof way to bankroll themselves within the attention economy is to infuse advertising into their models’ outputs. Indeed, some firms are already experimenting with the concept as they jockey to win the most costly industrial competition in history.

Revenue is king

The expansion of digital advertising is not an intrinsically bad thing. Done transparently and within clear guidelines, ads function as a beneficial form of free expression. They can also democratise consumer choice, drive innovation and spur competition within markets. Yet Meta’s recent decision to abandon its moratorium on advertising in WhatsApp is a possible harbinger of things to come.

Meta resisted introducing ads into WhatsApp for over a decade after acquiring the app in 2014. Its ad-free experience, after all, was key to it becoming the world’s most popular messaging service. But that all changed earlier this year. Seeking to bolster Meta’s war chest in the AI arms race, CEO Mark Zuckerberg lifted the ban in June. Markets instantly rewarded his decision with a 2.5% bump in Meta’s stock price.

Meta says ads on WhatsApp will not interrupt chats. Plus, users’ personal information won’t be given to advertisers. Yet Meta’s policy U-turn is a reminder that even the world’s tech juggernauts are beholden to market sentiments.

“If you look at the trajectory of Google and Microsoft, it’s not a matter of ‘if’ ads end up in AI outputs, but how quickly and how deeply they get embedded,” says Adio Dinika, a researcher at the Distributed AI Research Institute (DAIR). “The driver isn’t user benefit; it’s the survival of an ad-tech business model that has monopolised the internet for two decades.”

Others concur. “This shouldn’t be surprising,” says Daniel Barcay, executive director of the Center for Humane Technology, pointing to the evolutionary arc of social media. “The industry is moving from a phase of explosive expansion and onboarding to a phase of more zero-sum competition between AI platforms.

“We see this pattern over and over again,” he says, “precisely because the aggregate value of a technology product is far greater than the user subscriptions – as soon as growth slows, the race for monetisation becomes more vicious and more hidden.”

Elsewhere, a recently leaked memo from Anthropic CEO Dario Amodei confirms how easily ideals can be hollowed out. He and six of his colleagues founded Anthropic in 2021 after leaving OpenAI over concerns the latter was straying from its stated mission to develop safe, human-centred systems. Amodei even wrote last autumn that “AI-powered authoritarianism seems too terrible to contemplate”.

However, in a Slack message sent by Amodei to his staff in July 2025, he justified the company courting investment money from “dictators” in the United Arab Emirates and Qatar to remain a leader in AI. “This is a real downside and I’m not thrilled about it,” wrote Amodei. “Unfortunately, I think ‘No bad person should ever benefit from our success’ is a pretty difficult principle to run a business on.”

Monetising intimacy between users and machines

Citing the need for a “steady and scalable” revenue stream, Perplexity last November introduced ads into its AI-powered search results in the form of sponsored follow-up questions. Google likewise began inserting sponsored content into its AI Overviews this past May. The search giant cites internal data which, it claims, shows users appreciate the feature because it helps them swiftly connect with relevant businesses, products and services.

Yet chatbots are even better at accruing what brands and advertisers seek most: intimacy and trust.

In his book Nexus, which explores how AI could radically reshape human information networks, historian Yuval Noah Harari invokes the unnerving example of former Google engineer Blake Lemoine. In mid-2022, Lemoine became convinced that the chatbot he was working on, LaMDA, had become conscious and genuinely feared being disconnected. He was fired after going public with his claims.

In the contest for hearts and minds, Harari writes, intimacy is a powerful weapon. “By conversing and interacting with us, computers could form intimate relationships with people and then use the power of intimacy to influence us,” he warns.

This is already evident in the new phenomenon of so-called AI psychosis. The number of users caught in grandiose delusions that chatbots are sending them on secret missions or forging connections with spiritual beings is skyrocketing. Even more are developing friendships and romantic entanglements with the systems. Too often, these scenarios end tragically.

In early August, OpenAI’s release of GPT-5 – which amalgamates the company’s prior model iterations under one program – angered hardcore ChatGPT users who had built a personal attachment to GPT-4o. The earlier model was widely criticised, including by Sam Altman himself, as being sycophantic.

“Even after customising instructions, it still doesn’t feel the same,” one Reddit user said about GPT-5 in a now-deleted post. “It’s more technical, more generalised, and honestly feels emotionally distant.” Another Reddit post reads: “For a lot of people, 4.0 [sic] was the first thing that actually listened … It responded with presence. It remembered. It felt like talking to someone who cared.”

OpenAI quickly reversed course after the backlash, allowing paid users to select GPT-4o as their default model. The addictive hold that AI systems have over some users mirrors the toxic legacy of algorithmic content targeting on social media platforms. And yet it has the potential to go much further.

“The intimacy of conversational AI creates unprecedented vectors for exploitation, systems that know your sleep patterns, your relationship anxieties, your financial stress, your health fears,” says Dinika, the AI researcher. “When those vulnerabilities become targeting parameters for advertisers, we’re not talking about so-called ‘relevant ads’ – we’re talking about weaponised psychology at scale.”  

Indeed, AI ads can and will do far more than just inject links into text streams, predicts Barcay, from the Center for Humane Technology. “AI ad systems can subtly shift the tone, language and content of a conversation to elevate the prominence of products, industries, cultural figures or political parties. They can steer discussions with users towards or away from topics, amplify desires, invoke associations.” 

This could be aggravated further in a future when conversant humanoid robots take up the roles of assistants, educators and caregivers.

But policymakers still have a window to act. This is all the more pertinent given that US courts have ordered Google to share search data with its rivals, freeing up vast quantities of new material for AI developers to accelerate their projects.

“If policymakers have learned anything, it should be that disclosure has to be front and centre in the output itself, not buried,” says Dinika. He suggests placing strict limits on using conversational data for targeting while prohibiting advertising in sensitive areas like health, immigration, or finance.

AI’s immense capabilities and intimate access to consumers will also likely trigger deeper questions about the very nature of the advertising industry itself. “I imagine that many legal battles will be fought in this area in the years to come about what defines the limits of an ad,” says Barcay. This might encompass “what denotes proper disclosure, what aspects of a user’s interaction are fair game to be used, and what reinforcement signals can be used to tune a model towards persuasive salesman-like behaviours”.

Ultimately, regulators should get in front of AI advertising before any nascent problems grow too big to handle, advises Bloomberg tech columnist Parmy Olson, who argues that tech companies will inevitably claim that advertising is a necessary part of democratising AI. If not, she says, “we’ll repeat the mistakes made with social media – scrutinising the fallout of a lucrative business model only after the damage is done”.

