AI: OpenAI's 'good news, bad news'. RTZ #595
We may have a glimpse of an answer in this AI Tech Wave, on the latent demand for new AI products and services. And it may eventually settle the debate over the relentless AI capex spend by big tech and others. And there is a ‘good news, bad news’ aspect to that answer of course.
TechCrunch lays it out in “OpenAI is losing money on its pricey ChatGPT Pro plan, CEO Sam Altman says”:
“OpenAI CEO Sam Altman on Sunday said that the company is currently losing money on its $200-per-month ChatGPT Pro plan because people are using it more than the company expected.”
“I personally chose the price,” Altman wrote in a series of posts on X, “and thought we would make some money.”
This of course refers to the AI Reasoning and Agents products built around OpenAI’s o1-o3 ‘reasoning’ models. They’re increasingly the underpinning of most of OpenAI’s upcoming products and services, whether offered directly via ChatGPT or via APIs:
“ChatGPT Pro, launched late last year, grants access to an upgraded version of OpenAI’s o1 “reasoning” AI model, o1 pro mode, and lifts rate limits on several of the company’s other tools, including its Sora video generator.”
“ChatGPT Pro’s price point wasn’t a slam dunk at launch. It’s $2,400 per year, and the value proposition of o1 pro mode in particular remains murky. But judging by Altman’s posts, it seems that users who have bitten the bullet are making the most of it — at OpenAI’s expense.”
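The flat-rate dynamic Altman describes can be sketched in a few lines. This is a minimal illustration, not OpenAI's actual economics: the per-query cost and usage figures below are assumptions chosen only to show how heavy users flip a flat subscription from profit to loss.

```python
# Hedged sketch: why a flat-rate AI subscription can lose money on heavy users.
# All cost and usage figures are illustrative assumptions, not OpenAI's numbers.

def monthly_margin(price: float, cost_per_query: float, queries: int) -> float:
    """Margin on one subscriber for one month under a flat-rate plan."""
    return price - cost_per_query * queries

PRICE = 200.0            # ChatGPT Pro's published monthly price
COST_PER_QUERY = 0.25    # assumed average inference cost per heavy 'reasoning' query

light_user = monthly_margin(PRICE, COST_PER_QUERY, queries=300)
power_user = monthly_margin(PRICE, COST_PER_QUERY, queries=1500)

print(f"light user margin: ${light_user:+.2f}")   # positive margin
print(f"power user margin: ${power_user:+.2f}")   # negative margin
```

Under these assumed numbers, the light user yields a $125 monthly profit while the power user produces a $175 loss, with no usage cap to stop the bleeding.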
And it’s the stuff that is going to require far more ‘supersized’ AI Compute than ever, driven by Nvidia’s latest Blackwell chips.
It’s a point that Nvidia founder/CEO Jensen Huang covered in detail in his CES 2025 keynote this Monday.
The underlying issue, the ‘bad news’ as it were, is that AI services carry an accelerating, usage-driven variable cost, a topic I’ve discussed in detail before.
The more they’re used by end users, the more ongoing ‘Compute’ costs are incurred for AI Inference in the feedback loops around LLM AIs. They’re depicted as the lower loops in the AI Tech Chart below, complete with the various ‘variable cost’ activities listed in the Legend part of the chart.
Particularly important in the legend section are the ‘Test Time Compute (TTC)’ and ‘Chain of Thought (CoT)’ components, the new, cutting-edge AI techniques that make AI answers more ‘thoughtful’, ‘reasoned’, reliable, and customized to user requests. They will soon drive more AI Agent activities as well, with even more variable costs.
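To see why TTC and CoT raise variable cost per answer, consider a rough sketch. The token price, chain count, and chain lengths below are all assumptions for illustration; the point is only that reasoning models emit many hidden tokens before replying, so each answer costs a multiple of a plain chat reply.

```python
# Hedged sketch: Test-Time Compute (TTC) multiplies per-answer inference cost.
# Token prices and chain lengths are illustrative assumptions, not real rates.

def answer_cost(output_tokens: int, price_per_1k_tokens: float) -> float:
    """Dollar cost of generating one answer, billed per 1,000 output tokens."""
    return output_tokens / 1000 * price_per_1k_tokens

PRICE_PER_1K = 0.06       # assumed output-token price in dollars

# A plain chat answer vs. a 'reasoning' answer that samples several long
# chains of thought (CoT) internally before replying.
plain = answer_cost(500, PRICE_PER_1K)
reasoning_chains = 8      # assumed parallel CoT samples
tokens_per_chain = 4000   # assumed hidden reasoning tokens per chain
reasoning = answer_cost(reasoning_chains * tokens_per_chain + 500, PRICE_PER_1K)

print(f"plain answer:     ${plain:.4f}")
print(f"reasoning answer: ${reasoning:.4f}  ({reasoning / plain:.0f}x)")
```

Under these assumptions, one reasoning answer costs 65 times a plain one: variable cost scales with how hard the model ‘thinks’, not with how long the visible answer is.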
The broader issue of price discovery for AI services, especially in terms of the price elasticity, is a topic I’ve also discussed before.
The TechCrunch piece again elaborates:
“It’s not the first time OpenAI has priced a product somewhat arbitrarily. In a recent interview with Bloomberg, Altman said that the original premium plan for OpenAI’s AI-powered chatbot, ChatGPT, didn’t have a pricing study.”
“I believe we tested two prices, $20 and $42,” he told the publication. “People thought $42 was a little too much. They were happy to pay $20. We picked $20. Probably it was late December of 2022 or early January. It was not a rigorous ‘hire someone and do a pricing study’ thing.”
“OpenAI isn’t profitable, despite having raised around $20 billion since its founding. The company reportedly expected losses of about $5 billion on revenue of $3.7 billion last year.”
But therein lies the ‘good news’ part of the story.
Yes, the bad news of variable costs and near-term operating losses aside, the good news is that there is potentially a large amount of latent demand for AI reasoning and agentic services. That users are using the service more than expected is good news in the long term. It means customers want more. And it’s incumbent upon the companies providing the product/service to drive their operating costs down through efficiencies.
Fortunately, that’s what technology products claim as their ‘Superpower’: Moore’s Law-type cost and operating efficiencies that kick in once the flywheels are in motion. I’ve discussed OpenAI’s long-term flywheel before, using Amazon’s famous flywheel case study below.
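Those Moore’s Law-type efficiencies compound quickly. As a purely illustrative sketch, with an assumed 40% annual decline in per-query inference cost, today’s money-losing heavy users become serviceable within a few years at an unchanged subscription price:

```python
# Hedged sketch: compounding cost declines. The starting cost and the 40%
# annual decline rate are assumptions for illustration only.

def cost_after_years(cost_now: float, annual_decline: float, years: int) -> float:
    """Per-query cost after compounding an annual percentage decline."""
    return cost_now * (1 - annual_decline) ** years

START_COST = 0.25   # assumed per-query inference cost today, in dollars
for year in range(4):
    c = cost_after_years(START_COST, 0.40, year)
    print(f"year {year}: ${c:.4f}/query")
```

At that assumed rate, the per-query cost falls from $0.25 to about $0.05 in three years, roughly a fifth of where it started, which is the whole bet behind eating losses while the flywheel spins up.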
Next below is what OpenAI’s flywheel opportunity looks like. An example of recent partnerships can be seen in Step 2 of the OpenAI flywheel, as it potentially gets boosted by deals like the recent OpenAI/Apple ChatGPT integration with Apple Intelligence. There, billions of Apple users get introduced to OpenAI ChatGPT and have an opportunity to become new subscribers. That can be further boosted by the OpenAI/Apple Siri integration to come, with more features arriving in iOS 18.4 in April.
But the key point here is that for now, at this point in 2025, OpenAI is in the relatively fortunate position of being the one in the lead, the one in the most demand.
Kind of like AOL in the 1990s when millions of mainstream users wanted to get online for the first time, and AOL was the most accessible option.
So AOL was the primary way for mainstream users to get online. But there, too, was a variable cost of providing online/internet access, a challenge that AOL management had to mitigate at scale over time while incurring large and accelerating short-term losses. I watched that movie from the front lines as a lead internet analyst, positive on the stock through that tumultuous investment period.
OpenAI has a similar challenge and opportunity ahead over the next few years. They’ve got to execute their way through the variable cost issues while providing a service in great and accelerating demand at scale. Easier said than done.
But it’s an opportunity they’re laser focused on. Early on in this AI Tech Wave. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)