
Advertising in generative AI systems has become a fault line. Last month, OpenAI announced that it would start running ads in ChatGPT. Speaking at the World Economic Forum in Davos, OpenAI’s chief financial officer defended the decision, arguing that ads are a way to “democratize access to artificial intelligence” and that the move is aligned with the company’s mission: “AGI for the benefit of humanity, not for the benefit of humanity who can pay.”
Within days, Anthropic fired back with a Super Bowl commercial ridiculing the idea that ads belong inside systems people trust for advice, therapy, and decision-making. In one sense, this is a spat about how each company markets itself. In another, it echoes the debates of the early internet, but with far higher stakes.
The big question
The underlying question is not whether advertising generates revenue; it clearly does. It is whether advertising is the only viable way to fund AI at scale, and whether, if adopted, it will quietly dictate what these systems optimize for.
History offers a cautionary answer. The last several decades of online advertising have shown that when profit is decoupled from user value, incentives drift toward harvesting data and maximizing engagement: the variables that can be most easily measured and monetized.
That trade-off shaped everything in the internet economy. As advertising scaled, so did the incentives it created. Attention became a scarce resource. Personal information became currency.
What Google taught us
Google’s founders themselves acknowledged this risk at the dawn of the modern web. In their 1998 Stanford paper, Sergey Brin and Larry Page warned that ad-funded search engines create inherent conflicts of interest, writing that such systems are “biased towards the advertisers and away from the needs of the consumers,” and that advertising incentives can encourage lower-quality results.
Despite this warning, the system optimized for what could be measured, targeted, and monetized at the expense of privacy, transparency, and long-term trust. These outcomes were not inevitable. They flowed from early design choices about how advertising worked, data moved, and influence was disclosed.
A pivotal moment
Artificial intelligence now finds itself at a similar pivotal moment, but under far greater economic pressure and with far higher stakes. AI is not cheap to run: OpenAI has projected that it will burn through $115 billion by 2029. And like internet users before them, most AI users are unwilling to pay for access; advertising has historically allowed the internet, and the businesses that depend on it, to scale beyond paying users.
If advertising is going to fund AI, personal data cannot be the fuel that powers it. If conversations on an AI platform leak into targeting data, users will stop trusting it and will start viewing it as a surveillance tool. Furthermore, once personal data becomes currency, the system inevitably optimizes for extraction.
That does not mean future advertisers on these AI platforms would have to operate in the dark. Brands will still need to know that their spending delivers results and that their messages appear in contexts aligned with their values. Outcome measurement and contextual assurance are legitimate needs.
The real problem
The irony in Anthropic’s critique is instructive. A Super Bowl commercial is itself a testament to advertising’s enduring power as a form of communication and cultural signaling. Advertising is not the problem. Invisible incentives are.
The way to satisfy both consumer trust and business growth is to build the advertising ecosystem on open, inspectable systems, so that influence can be seen, measured, and governed without collecting or exploiting personal data. Standards such as the Ad Context Protocol set out to do exactly this.
This is the window in which profit can still be aligned with value. At stake is the difference between advertising as manipulation and advertising as sustainable and enduring market infrastructure. The ad-funded internet failed users not because it was free, but because its incentives were invisible. AI has the chance to do better. The choice is ours to make.
