
AI: OpenAI's next mega-deal with AMD. RTZ #867
OpenAI continues to move financial-market mountains to scale up its AI models, applications and compute: first with Nvidia, now with AMD. All while of course ramping up the gargantuan power investments I discussed just a few days ago.
As I’ve discussed before, the ‘Frenemies’ nature of the AI industry in this AI Tech Wave has accelerated to another level, as OpenAI received warrants for up to a 10% stake in Nvidia AI GPU rival AMD and committed to a 6-gigawatt-plus, multi-year AI data center buildout worth several hundred billion dollars in AI infrastructure and power investments.
This came only days after Nvidia committed to a $100 billion+ multi-year investment in OpenAI, for the latter to use millions of Nvidia AI GPUs across multiple gigawatts of AI data center infrastructure. All of that is also worth hundreds of billions over the next few years.
Both deals accelerate the diversification of sourcing on the core inputs, from AI GPUs and memory chips to data center infrastructure and all sources of power.
“Keith Heyde stands on site in Abilene, Texas, where OpenAI’s Stargate infrastructure buildout is underway. Heyde, a former head of AI compute at Meta, is now leading OpenAI’s physical expansion push.”
From the outside, this pace and scale of dealmaking looks frenetic, formidable, and fantastical. From the inside, it’s the only way forward in an industry convinced that taking on unprecedented financial risks and execution challenges is less risky than not investing.
Some details of the OpenAI-AMD deal are worth noting.
The WSJ lays it out well in “OpenAI, AMD Announce Massive Computing Deal, Marking New Phase of AI Boom”:
“The five-year agreement will challenge Nvidia’s market dominance, as OpenAI plans deployment of AMD’s new MI450 chips.”
“OpenAI and AMD formed a multibillion-dollar partnership for AI data centers, with OpenAI committing to purchase 6 gigawatts of AMD’s MI450 chips.”
“OpenAI and chip-designer Advanced Micro Devices announced a multibillion-dollar partnership to collaborate on AI data centers that will run on AMD processors, one of the most direct challenges yet to industry leader Nvidia.”
“Under the terms of the deal, OpenAI committed to purchasing 6 gigawatts worth of AMD’s chips, starting with the MI450 chip next year. The ChatGPT maker will buy the chips either directly or through its cloud computing partners. AMD chief Lisa Su said in an interview Sunday that the deal will result in tens of billions of dollars in new revenue for the chip company over the next half-decade.”
“The two companies didn’t disclose the plan’s expected overall cost, but AMD said it costs tens of billions of dollars per gigawatt of computing capacity.”
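The reported figures above imply a rough scale for the overall commitment. A minimal back-of-envelope sketch, noting that AMD only said “tens of billions of dollars per gigawatt,” so the $20B–$50B per-gigawatt range below is my assumption, not a disclosed number:

```python
# Back-of-envelope sizing of the OpenAI-AMD commitment.
# Reported: 6 gigawatts of MI450-class capacity; AMD says it costs
# "tens of billions of dollars per gigawatt". The per-GW range used
# here is an illustrative assumption, not a disclosed figure.
gigawatts = 6
cost_per_gw_low = 20e9   # assumed lower bound, USD per gigawatt
cost_per_gw_high = 50e9  # assumed upper bound, USD per gigawatt

total_low = gigawatts * cost_per_gw_low
total_high = gigawatts * cost_per_gw_high

print(f"Implied total buildout: ${total_low / 1e9:.0f}B - ${total_high / 1e9:.0f}B")
```

Even the low end of that assumed range lands well above $100 billion, which squares with the “several hundred billion” framing of the overall infrastructure push.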
The financial details are designed to grow with execution:
“OpenAI will receive warrants for up to 160 million AMD shares, roughly 10% of the chip company, at 1 cent per share, awarded in phases, if OpenAI hits certain milestones for deployment. AMD’s stock price also has to increase for the warrants to be exercised.”
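The warrant mechanics reported above can be sketched with simple arithmetic. The strike price and share count are as reported; the AMD share price used below is a purely hypothetical illustration, since the actual price hurdles for the tranches weren’t disclosed here:

```python
# Sketch of the reported warrant structure: up to 160 million AMD
# shares at a $0.01 strike, vesting in tranches tied to deployment
# milestones and AMD share-price hurdles.
warrant_shares = 160_000_000
strike = 0.01            # USD per share, as reported
assumed_price = 200.0    # hypothetical AMD share price, for illustration only

exercise_cost = warrant_shares * strike
intrinsic_value = warrant_shares * (assumed_price - strike)

print(f"Cost to exercise all warrants: ${exercise_cost:,.0f}")
print(f"Intrinsic value at ${assumed_price:.0f}/share: ${intrinsic_value / 1e9:.1f}B")
```

The near-zero strike means nearly all of the warrants’ value rides on AMD’s share price, which is exactly the alignment mechanism: OpenAI only captures that value if its deployment milestones are met and AMD’s stock appreciates.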
And of course it gives AMD a major foothold in the explosive market for AI chips, particularly in this case on the inference side, serving the massive volumes of intelligence tokens required in AI compute. Especially given the growing number of entrants in the race:
“The deal is AMD’s biggest win in its quest to disrupt Nvidia’s dominance among artificial-intelligence semiconductor companies. AMD’s processors are widely used for gaming, in personal computers and traditional data center servers, but it hasn’t made as much of a dent in the fast-growing market for the pricier supercomputing chips needed by advanced AI systems.”
“OpenAI plans to use the AMD chips for inference functions, or the computations that allow AI applications such as chat bots to respond to user queries. As the profusion of large language models and other tools has picked up, demand for inference computing has skyrocketed, OpenAI Chief Executive Officer Sam Altman said in a joint interview with Su.”
“It’s hard to overstate how difficult it’s become” to get enough computing power, Altman said. “We want it super fast, but it takes some time.”
The deal aligns the two companies for at least the next few years:
“The two CEOs said the deal will tie their companies together and give them incentives to commit to the AI infrastructure boom. “It’s a win for both of our companies, and I’m glad that OpenAI’s incentives are tied to AMD’s success and vice versa,” Su said.”
And it doesn’t take away much on the Nvidia side of the equation at a time of extreme supply constraints for the industry at large:
“Nvidia remains the preferred chip supplier among AI companies, but it is also facing competition from almost every corner of the market. Cloud giants such as Google and Amazon design and sell their own AI chips, and OpenAI recently signed a $10 billion deal with Broadcom to build its own in-house chip. Nvidia is releasing its highly-anticipated Vera Rubin chip next year, promising that it will be more than twice as powerful as its current generation, known as Grace Blackwell.”
“OpenAI will begin using 1 gigawatt worth of the MI450 chip starting in the second half of next year to run its AI models.”
“Altman said the fate of many companies will increasingly be linked as demand for AI services, along with the computing and infrastructure needs that accompany them, is set to far outstrip supply.”
“We are in a phase of the build-out where the entire industry’s got to come together and everybody’s going to do super well,” Altman said. “You’ll see this on chips. You’ll see this on data centers. You’ll see this lower down the supply chain.”
As I noted a few days ago, this deal is another important step in Altman’s roadmap across a number of AI objectives:
“Altman has been on a dealmaking spree over the past month, at times using creative financing structures to secure hundreds of billions of dollars worth of computing power. His aim is to lock up enough data center capacity to win the race to develop superintelligence, or AI systems that rival humans for reasoning and intuition.”
“In late September, Nvidia announced that it would invest $100 billion in OpenAI over the next decade. Under the terms of the circular arrangement, OpenAI plans to use the cash from Nvidia to buy Nvidia’s chips and deploy up to 10 gigawatts of computing power in AI data centers. The deal highlighted how the market’s seemingly endless enthusiasm for Nvidia’s stock is providing a financial backstop for the entire AI market.”
The broader concerns for now will of course continue to center on chatter about the industry overbuilding in the midst of a growing ‘AI Bubble’:
“The dealmaking frenzy, which has drawn much of the technology industry into the maelstrom, has contributed to growing fears that a bubble is building in AI infrastructure. Companies such as OpenAI, Meta, Alphabet and Microsoft are spending money on chips, data centers and electrical power at levels that dwarf the largest build-outs in history, including the 19th century railroad boom and the construction of the modern electrical and fiber-optic grids.”
“I’m far more worried about us failing because of too little compute than too much,” said Greg Brockman, OpenAI’s president and co-founder.”
And of course there are ongoing questions about how and when OpenAI manages to pay for it all:
“It is unclear how OpenAI will pay for the hundreds of billions of dollars worth of infrastructure investments to which it has committed. The startup recently told investors that it was likely to spend around $16 billion to rent computing servers alone this year, and that the number could rise to $400 billion in 2029, The Wall Street Journal reported. OpenAI is on pace to generate $13 billion in revenue this year, and Altman said the company is focusing more on profitable tasks that can be accomplished using its tools.”
Nvidia of course continues to be in pole position through it all:
“Mizuho Securities estimates that Nvidia controls more than 70% of the market for AI chips, though AMD and other rivals have sought in recent years to offer more affordable alternatives. Nvidia’s most powerful combination chips for use in AI data centers can cost $60,000 each.”
And OpenAI is proving itself to be at least the equal of any ‘Mag 7’ company, especially as it diversifies away from Microsoft as its sole AI data center partner, on a fast ramp to AGI and/or Superintelligence.
The chip diversification makes sense in a world of AI compute shortage for training and inference as far as the eye can see, at least the next 3-5 years, as the major industry participants scale everything from Box 1 to the holy grail Applications layer Box 6 in the AI Tech Stack below.
Understanding the full stack picture is critical to internalizing OpenAI’s execution math in this AI Tech Wave.
They’re punching way above their weight, executing on multiple concurrent fronts, any one of which would be daunting enough for an individual tech startup in an earlier tech wave. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)