
Booming AI IPOs & corporate investments. ARD #73 podcast

The frame running through every item today: The private-market AI boom is now accelerating into the public market and corporate-strategic side.

It’s not just about the three mega AI tech IPOs lined up for end of year — xAI, Anthropic, and OpenAI.

A host of companies up and down the AI tech stack are now poised to come public, both in the US and around the world. Especially in China — the world’s second-largest AI market, with over half the world’s AI researchers.

A reminder before getting into it: financial cycles need to be analyzed separately from secular tech trends.

Both public and corporate strategic markets are currently running on optimistic valuations. It’s the nature of these waves — especially during the momentum phase.

Financial expectations run ahead of hardware and software innovations most of the time. Not financial advice — but this rule of thumb is worth keeping in mind, whether one is a short- or long-term investor.

This is one of those tech-wave inflection points where the private side is running on investor enthusiasm. So is the corporate strategic phase, where Nvidia and other big techs are stepping on the investment gas. The result: valuations keep getting pushed higher on bigger amounts being raised, IPOs are being lined up and prepped, and strategic-investor priorities lean in further each quarter at an accelerating clip.

Three Key Takes today underline the theme — plus a Gadget AI on Venmo’s privacy moves, the Musk-OpenAI/Brockman-diary discovery wrinkle, and an important Pro Tip on AI + Attorney-Client Privilege for Knowledge Workers:

(1) AI Chip Company Cerebras Systems IPO This Week. Cerebras — the AI chip company that’s been a private-market story for years — is now in the market, inching toward a ~$35 billion valuation and a ~$5 billion raise. Cerebras competes with Nvidia on the inference-chip side with a different architectural mousetrap — built to handle the token-generation load when millions of users type queries into AI search engines and agent systems. That compute is expensive and getting more so, in a non-zero-sum AI market that has investors leaning into these IPOs with enthusiasm. The reporting: Bloomberg — AI Chipmaker Cerebras Systems Seeks $4.8 Billion in Upsized IPO. Today’s AI-RTZ companion: AI-RTZ #1083 — The AI Investment FOMO Around AI Supply Chains. Standing thesis on the AI Tech Wave framework: AI Building Value Over Time.

MP Take: Chips are just the first of the six boxes in the AI Tech Wave framework. Innovations across all the boxes are going to see accelerating technical innovation, along with financial-market and strategic-investor interest.

Different this time vs earlier tech waves is the race for AI Data — which comes from billions of user inputs into AI chatbots and agent systems. And their need for AI Compute to generate the tokens needed to process user requests. That’s very expensive and getting more so, driving startups to see higher valuations from private, public, and strategic investors.

(2) China’s Market Seeing a Range of AI Spinoffs and IPOs Getting Ready. China has been and continues to be a vibrant place for tech and AI innovations. They’ve already given the world TikTok’s FYP algorithms, Temu, Shein, DeepSeek, Alibaba’s Qwen, Manus, and so much more. As Jensen Huang of Nvidia often reminds us, China has over half the world’s AI researchers and is the second-largest global AI market after the US. Kuaishou — the TikTok rival — is now planning to spin out its Kling AI video unit at a $20 billion valuation. That’s just one signal. Across mainland China and Hong Kong, AI IPOs are stacking up alongside spinoffs from the tech incumbents (Alibaba, Tencent, ByteDance) and from non-tech companies pivoting into AI. Geopolitical wrinkle: the upcoming Trump-Xi meeting later this week could shape what comes next — across broad trade, geopolitical, tech, and AI issues. And recall the Manus AI-agent example MP mentioned on-air: Zuckerberg tried to buy it for $3 billion before the Chinese government unwound the deal — likely a bargaining chip in the upcoming discussion. The reporting: The Information — China’s Kuaishou Plans to Spin Off Kling AI Video Unit at $20 Billion Valuation. Standing theses on US-vs-China AI competition: AI-RTZ #1041 — Add AI Tokens to US-vs-China AI Race and AI-RTZ #1075 — China’s Post-Meta/Manus Red-Chip Moment.

MP Take: China is already seeing an AI IPO boomlet — with companies both on the mainland and in Hong Kong. Also in the works are a range of spinoffs, where tech incumbents like Alibaba, Tencent, and ByteDance are active participants — as are other Chinese companies with non-tech backgrounds.

Expect this area to see more activity this year and next. US investors are increasingly restricted from investing in China due to geopolitical issues — which means a meaningful chunk of global AI-IPO supply will clear into a buyer pool that doesn’t include US capital. Structural feature of this wave, not a temporary one.

(3) Nvidia Accelerating Corporate Strategic Investments to Over $40 Billion in 2026. The corporate-strategic side is moving just as fast as the public side — and Nvidia is the proximate engine. Nvidia is on pace for over $40 billion in equity investments in 2026 — a step-function jump from prior years. The structural context: Nvidia is generating $100 billion-plus in excess cash and planning to invest more than half of that this year — across the AI tech stack from chips to applications to data networking. The pattern is increasingly “circular”: investing out of the left pocket and taking some of that money back in the right pocket, because the companies they invest in then buy Nvidia chips (directly or via cloud providers). Unlike its big-tech peers, Nvidia doesn’t have to plow that cash back into its own AI data center capex. They’re the ones selling the pickaxes and shovels in the AI Gold Rush. Their priorities to invest at higher valuations are very different from financial investors’ — an obvious point but an important one to keep in mind, especially in the momentum phase of these cycles where private, public, and strategic investors are all leaning in and there’s less focus on why the valuation numbers keep going higher. The reporting: CNBC — Nvidia Embraces AI Investor Role, Topping $40 Billion in Equity Bets in 2026. Standing thesis on Nvidia as AI “Kingmaker”: AI-RTZ #1036 — Nvidia’s Accelerating Role as the Kingmaker.

MP Take: Nvidia is generating excess cash like Apple, without having to spend it, like its other big-tech peers, on AI Data Center Infrastructure. They’re the ones selling the pickaxes and shovels in the AI Gold Rush.

Thus they’re in a prime position to invest in companies up and down the tech stack — particularly as many of those investments come back as orders for Nvidia AI infrastructure, either directly or indirectly. Expect this trend to continue through the current momentum phase.

Plus: Gadget AI — Social Media and AI Privacy Don’t Mix: Venmo’s Latest Moves. PayPal-owned Venmo is starting to pay more attention to privacy in the AI age — redesigning the app to step back from the public-feed-by-default model that defined it for years. Default settings are being turned off because the public-feed-of-payments model is causing notable issues for users, especially as AI applications get added to these systems. Separately, the Musk-OpenAI legal case is surfacing how AI chatbot evidence is showing up as legal exhibits in high-profile disputes — this past week, personal diaries and journals from OpenAI’s #2 co-founder Greg Brockman came out in discovery. The common thread: fundamental areas where tech ‘innovation’ is far ahead of societal and legal norms and expectations. These tend to get attention and resolution AFTER negative outcomes — not in advance. The reporting: The Verge — Venmo Finally Takes Privacy Seriously and Axios — Musk-OpenAI Case Shows Chatbot Evidence Risk. Standing thesis on AI trust: AI-RTZ #382 — Scaling AI: Trust Is Job #1.

MP Take: This is a broader issue than social media and AI. There are fundamental areas where tech ‘innovation’ is far ahead of societal and legal norms and expectations. Unfortunately, these will get attention and resolution AFTER negative outcomes — not, as they generally should, in advance.

Pro Tip from MP: Attorney-Client Privilege is lost if those discussions are fed into AI chatbots or Agentic systems. Once input, they’re fair game for the legal discovery process. Less of a concern for mainstream users — but absolutely something to keep in mind for most Knowledge Workers, especially in a world where employers are increasingly tracking every keystroke on corporate computers and smartphones. Not just for corporate governance, but increasingly to train corporate AI systems. The AIs seem so friendly, so trustworthy — but the risk isn’t the models themselves; it’s the way the systems are managed by the companies that run them and the legal systems that apply to them. Almost everything you put into them is discoverable.

Bonus — today’s AI-RTZ companion #1083 covers the AI investment FOMO around AI supply chains — a deeper read on the private-market AI boom now spilling into US public markets and Chinese AI IPOs, alongside Nvidia’s $40B+ strategic-investor checkbook reshaping the whole supply chain.

Closing Questions —

  • What AI tools and functionality does MP NOT use due to privacy questions? Talking to AI systems vs typing. MP types AI queries rather than talking to them — he’s more careful about what he says when he’s typing into a chat window than when he’s voice-querying. The new wave of “whispering into AI systems” (per the WSJ feature this week) creates social and privacy frictions that typing simply doesn’t. The shift from typing to voice/whisper isn’t just a UX preference — it’s an exposure-surface change. Source: WSJ — Typing Is Being Replaced by Whispering — and It’s Way More Annoying.

MP Take: The exposure-surface change matters more than the UX (user experience) delta. Typing keeps the exposure narrow. Whispering or talking adds ambient audio, contextual signal, and a different evidence trail — the same kind of trail showing up as a legal exhibit in the Musk-OpenAI/Brockman discovery case. Pattern worth watching as voice becomes the default AI input modality on phones and wearables.

  • What AI tools and functionality does MP use DESPITE privacy questions? Connector software — connecting AI systems into daily applications via MCP protocols, Chrome extensions, and similar integrations. The leverage is real enough that MP uses it even knowing the privacy trade-offs. Carefully. Almost every AI system you use — ChatGPT, Claude Code, Claude Cowork, Perplexity’s computer system, Google Gemini — wants to connect to all your applications to access your data sources. Increasingly you have to do that if you want to get value out of these systems.

MP Take: Even MP, who’s thought hard about this, accepts some privacy exposure where the productivity gains are clear and the alternative is giving up the leverage entirely. The right framing isn’t “use vs don’t use” — it’s “which exposures are acceptable for which gains.” That calculation lands differently for every user, every workflow, every company. MP exercises good care and checks the integrations regularly to make sure they’re updated as needed. No universal answer here. Just do it carefully. And review them often. Stay tuned.


(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)

Short Clips from today’s episode

Short — Nvidia’s $100B AI Pickaxes-and-Shovels Cash

Nvidia is generating $100B+ in excess cash and planning to invest over half of that in 2026 — across the AI tech stack, in increasingly circular deal patterns. They invest out of the left pocket, take that money back in the right pocket as the companies they fund buy Nvidia chips (directly or via cloud providers). Unlike big-tech peers, Nvidia doesn’t have to plow excess cash back into its own AI data center capex.

MP Take: Nvidia is generating almost as much excess cash as Apple, without having to spend it, like its big-tech peers, on AI Data Center Infrastructure. They’re the ones selling the pickaxes and shovels in the AI Gold Rush. Prime position to invest up and down the tech stack — particularly where those investments come back as orders for Nvidia infrastructure, directly or indirectly. Expect this trend to continue through the current momentum phase.

Short — Venmo Quiets the Public Feed in the AI Era

PayPal-owned Venmo is starting to pay more attention to privacy in the AI age — redesigning the app to step back from the public-feed-by-default model. Default settings are being turned off because the public-feed-of-payments model is causing notable issues for users, especially as AI applications get folded into these systems.

MP Take: Broader issue than social media and AI. Fundamental areas where tech innovation is far ahead of societal and legal norms and expectations. Unfortunately these get attention and resolution AFTER negative outcomes — not in advance, as they generally should be.

Short — Why MP Types — Not Talks — to His AIs

MP’s privacy rule for AI tools: type, don’t talk. He’s more careful about what he says when he’s typing into a chat window than when he’s voice-querying — and the new wave of “whispering into AI systems” (per WSJ this week) makes the exposure-surface change worse, not just a UX preference. Voice queries leave a different kind of trace.

MP Take: Pattern worth watching as voice becomes the default AI input on phones and wearables. Same theme as the broader Pro Tip from this episode: Attorney-Client Privilege is lost the moment you feed those discussions into AI chatbots or Agentic systems. Fair game for legal discovery. Important for Knowledge Workers, especially as employers increasingly track every keystroke on corporate computers and smartphones.

Short — Musical Chairs vs AI’s Open-Ended Tech Wave

Every investor — institutional or individual — typically thinks they’re smart enough to figure out when the music is going to pause or slow down, and to get out of the musical-chairs game in time. Sometimes that happens; often it doesn’t. We’re at that stage of the cycle now. That said: the technical innovations on a bottom-up basis are so open-ended that MP remains net optimistic this AI tech wave will produce very interesting products and services. Eyes wide open from here.

MP Take: Right now there’s a happy alignment where secular reasons drive enthusiasm — fourth year of ChatGPT, lots of innovation still ahead. Most startups are interesting; it’s just that valuations are high and exit scenarios increasingly lean on big-tech acquihires and strategic M&A. As an investor — direct or through funds — these are things to keep in mind. Music plays on, for now.

Ep 73 scope: 1 Main + 4 Shorts — MP’s standing pre-recording scope. No Segments or Hooks (matches Ep 62-72 4-clip pattern — 12th consecutive episode).


About AI Ramblings Daily (ARD), and AI-RTZ

Both are daily. Both are free. Both are about AI. But they’re different mediums carrying different messages.

AI-RTZ is the morning text — a deeper written take on one idea, published by at least 5 AM EST. Today: post #1083 — The AI investment FOMO around AI supply chains.

AI Ramblings Daily is the afternoon video + podcast — my ad hoc takes and perspective on the day’s AI issues & news flow, around 16 minutes today, with short 1-minute clips for quick topic views. Today: episode #73.

Subscribe to either or both on michaelparekh.substack.com. They run as separate Sections you can opt into or out of.




