AI: Weekly Summary. RTZ #899
-
Nvidia can’t sell into China, for now: In the end, the Trump administration ‘torpedoed’ Nvidia’s push to export AI chips to China, despite valiant lobbying efforts by founder/CEO Jensen Huang, who went as far as hosting an Nvidia GTC conference in Washington DC on the eve of the President’s meeting in South Korea with Chinese President Xi. The White House move had bipartisan support, especially among China hawks who view China as an ‘AI National Security Threat’ in a ‘race’ akin to the ‘Space Race’ with the Soviets. The move, if sustained, poses serious long-term risks to Nvidia’s global leadership in AI software and hardware platforms, particularly since China is sure to accelerate an open source movement around its own AI chips that could rival the CUDA AI software moat Nvidia spent almost two decades building. More here.
-
Microsoft outlines ‘Post-OpenAI’ MAI strategy: Microsoft is moving rapidly to outline its own AI strategy after inking its redrawn deal with OpenAI. Led by AI chief Mustafa Suleyman, the Microsoft AI (MAI) ‘Humanist Superintelligence’ effort was rolled out to build top AI systems and differentiate them from those of its longtime AI partner. Of course, the CEOs of both Microsoft and OpenAI continue to stress the integrity of their newly redefined relationship: Microsoft retains access to OpenAI’s technology until 2032, or until an independent panel declares that OpenAI has reached AGI, and OpenAI has committed to another $250 billion in Microsoft Cloud services as it ramps up over $1.4 trillion in AI data center, chips, and power deals with a range of partners. But the concurrent reality is that Microsoft increasingly has to build its own AI technologies to put behind its Copilot and other AI products and services for its global enterprise customer base. More here.
-
OpenAI’s AI Infrastructure ‘backstop’ kerfuffle: OpenAI continues to draw attention for the breathtaking ramp of its AI Compute buildout, now crossing $1.4 trillion in commitments over the coming years. In particular, the company’s CFO caused a media kerfuffle when discussing possible government ‘backstops’ as OpenAI and its peers ramp up AI infrastructure in the US, particularly in the perceived AI race with China. OpenAI founder/CEO Sam Altman later explained that the comments referred to US government AI priorities, not to OpenAI’s commitments as a private customer for its own core businesses. The administration also stepped in with its own comments. The incident highlights ongoing questions from investors and the media on how AI revenues ramp up to the extraordinary levels needed to pay for the trillions in AI Compute investments ahead. More here.
-
Amazon’s ‘Cease & Desist’ vs Perplexity AI Browser: Amazon jumped aggressively into the fast-moving area of AI browsers, with a ‘Cease and Desist’ letter against Perplexity’s use of AI agents in its Comet AI browser. The move marks the start of a complex battle over overlapping business interests, as AI agents, particularly via AI browsers, increasingly race to complete agentic ecommerce transactions for mainstream users. These moves could rapidly disintermediate whole layers of online ecommerce platforms, especially Amazon, diminishing their direct relationships with mainstream users on their websites. That of course would mean far fewer opportunities to access customer data, guide customers to other products and services on their sites, and preserve meaningful platform advantages as mainstream, horizontal AIs build direct relationships with their core customers. This is the beginning of a tussle akin to the LLM AI companies’ disputes with content and copyright owners over what constitutes ‘Fair Use’ of resources on the internet. More here.
-
Small AIs (SLMs) ramp vs LLM AIs: Small AI models (aka SLMs) are ramping up even as most investor and media attention remains on large language models (LLM AIs). This is especially the case with Apple, Google, Microsoft and others, who are increasingly embedding smaller models in local devices. And businesses are turning to smaller, more efficient models for local, private AI inference and other AI techniques that, for many use cases, provide far better results than LLM AIs alone. In addition, these models are increasingly open source, which enhances their appeal and economics. China has taken a lead on open source, smaller models of late, even as US companies like Meta and others continue to focus on this area. SLMs are relatively newer than LLMs, but are growing faster in their capabilities and applications. More here.
-
Other AI Readings for the weekend:
-
Apple to use Gemini AI for new Siri, paying Google $1+ billion. More here.
-
AI Job Loss Fears may be premature. More here.
(Additional Note: For more weekend AI listening, we have a podcast series on AI from a Gen Z to Boomer perspective, called AI Ramblings, now at 28 weekly episodes and counting. The latest, AI Ramblings Episode 28, covers AI issues of the day, as well as our latest ‘Reads’ and ‘Obsessions’ of the week. Co-hosted with my Gen Z nephew Neal Makwana.)
Up next: the Sunday ‘The Bigger Picture’ edition tomorrow. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)