Don’t get too used to ‘subsidized’ chatbot costs




Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.

The cost of AI will surely rise, along with our dependence on it

Developing AI models and serving AI apps is a notoriously expensive undertaking. AI labs pour massive amounts of computing power, training data, and high-priced talent into creating and serving AI models, and those costs are not nearly covered by the chatbot subscription and API fees they bring in. Neither OpenAI nor Anthropic, for example, is profitable, and neither will be for some time. The difference, for now, is made up by investment money, much of it from venture capital firms. But that won’t last, of course. As AI companies mature, they’ll be expected to generate returns on all the investment money they’ve taken. And the prices consumers and businesses pay for AI will almost certainly go up.

It fits the model. Silicon Valley’s canonical playbook is to sell an app or service cheaply at first to build a large user base, then raise prices and, often, let the customer experience slip. In the early 2010s, for instance, Uber heavily subsidized fares with venture capital as it scaled its network of riders and drivers. In some markets, drivers received the full fare plus bonuses of up to 50%. By the late 2010s, as investors pushed toward a 2019 IPO, Uber began sharply increasing prices. Between roughly 2018 and 2022, fares rose by 50% to 80%, depending on the study, with further increases since. Many companies, including Amazon, Netflix, Airbnb, Instacart, and DoorDash, have followed versions of this playbook.

Some of the same big VCs that funded these “growth-at-all-costs” companies are now bankrolling today’s AI companies. For example, Khosla Ventures and Sequoia Capital invested in Uber and are now backing both OpenAI and Anthropic, among other AI labs. Andreessen Horowitz (a16z) invested in Uber (and other Uber-like startups) and now backs OpenAI and numerous other AI app and infrastructure companies. The main difference between the Ubers of the past and the AI companies of today is that the AI companies also take investment money from their big tech business partners (like Microsoft and Nvidia) as well as from private equity giants like TPG and Bain Capital. 

I see another similarity. Kara Swisher once quipped that with the rise of Uber, Instacart, and other app-based services in the 2010s, San Francisco began to feel like “assisted living for millennials.” What she meant was that these companies offered a cheap—at least initially—way to outsource everyday physical tasks, from grocery shopping to getting around to making dinner or going out to a movie. You could sit on the couch, tap your phone, and it was done for you. The convenience was undeniable, and during the pandemic it often felt essential. But it also nudged people toward a more sedentary, phone-mediated existence. And, as with so many of these services, the costs eventually rose, claiming a larger share of users’ paychecks.

AI chatbots and related tools may point to a similar, or even more troubling, trajectory. They can speed up information retrieval and automate a share of routine cognitive work. But as the major AI labs themselves have suggested, intelligence is becoming a commodity, something available on demand. The temptation, then, is to offload more and more of our own thinking and reasoning as these systems improve, outsourcing not just tasks but the mental effort behind them.

MiniMax says its newest AI model helped build itself

A new AI model from the Chinese AI startup MiniMax played a major role in its own development, the company says. The model, called MiniMax M2.7, can reportedly test itself on tasks and knowledge areas, diagnose its limitations, then improve itself automatically. MiniMax calls the concept “self-participation iteration.”

MiniMax says M2.7 handled between 30% and 50% of its own development work. For example, it ran more than 100 loops of self-analysis, debugging, and iterative self-improvement without human intervention. As a result, the model hit benchmark scores comparable with those of the best Western AI models. M2.7 scored 56% on SWE-Pro (a difficult, realistic coding benchmark), MiniMax says. OpenAI’s GPT-5.2 “Thinking” model scored roughly 55%, while Anthropic’s Claude Opus 4.5 scored 52%.

Normally, AI labs rely on human engineers to design and run evaluations that surface a model’s shortcomings, then make improvements that are eventually packaged into a new version release. The idea of a continually self-improving model calls into question the need for discrete product releases, and points to a time when models simply improve on their own over time.


Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
