AI: Weekly Summary. RTZ #906

  1. Microsoft’s CEO outlines AI Strategy: Microsoft CEO Satya Nadella is rapidly evolving and explaining his AI strategy following the redrawn deal with OpenAI. This expands on Microsoft’s AI strategy rollout last week that I discussed here. In a recent podcast, he goes into detail about how Microsoft Azure has the better business model among its hyperscaler peers. Specifically, Microsoft’s approach of working with a range of LLM AIs, both open and closed, from a range of partners provides a lot more flexibility for its enterprise customers around the world. It also gives Microsoft the ability to deeply integrate the best of these models into Microsoft applications like Copilot for Microsoft Office/Excel and many others. Indeed, he makes the case for smarter, AI Agent driven ‘Scaffolding’ that provides far greater capabilities to its end customers vs single-model vendors, presumably like OpenAI, among others. More here.

  2. Meta’s Other Chief AI Scientist Leaves: Yann LeCun, Meta’s long-time Chief AI Scientist and one of the original trio of AI ‘Godfathers’, is leaving Meta soon to start his own company. This was somewhat expected given Meta’s recent accelerated AI Talent grab across the industry, led by founder/CEO Mark Zuckerberg. Meta had already hired Shengjia Zhao, a key AI Researcher from OpenAI, as its new Chief AI Scientist as part of MSL (Meta Superintelligence Labs), led by former Scale AI founder/CEO Alexandr Wang and his two co-heads Nat Friedman and Daniel Gross. Yann has been increasingly vocal that LLM AI Scaling will NOT get the industry to the coveted goal of AGI (aka AI Superintelligence), and that additional AI Research breakthroughs are likely needed to build true AI World Models. The move is notable for the widening schism among AI Researchers over whether AGI may take longer than the current consensus driving the massive AI spend across the industry. More here.

  3. Perplexity Charges Ahead: Perplexity continues to be one of the most aggressive native AI companies. Note that it was served a ‘cease and desist’ by Amazon for deploying its AI Agents on Amazon’s sites on behalf of users while masquerading as ordinary Chrome browsers. This comes as Perplexity presses ahead with its recently launched Comet AI Browser, trying to push its deployment ahead of competing AI browsers such as OpenAI’s Atlas, and other rivals. Google is also rolling out more AI Gemini functionality within Chrome as well. Most of Perplexity’s peers are more conservative in their deployment practices. Perplexity is working along the old adage of acting first and asking for forgiveness later. This is similar to how smaller tech startups were more aggressive in their early days; Uber in the Travis Kalanick days in particular comes to mind. More here.

  4. China races ahead on Auto Self-Driving: As Google’s Waymo and Elon Musk’s Tesla/xAI Robotaxis continue to gather investment enthusiasm in the US, China is racing ahead with its own generation of auto self-driving technologies. Companies like Baidu, Pony AI, WeRide, and others are deploying their versions in the thousands, and increasingly getting ready to export them into overseas auto markets. Note that China is already ahead of the US on the deployment of EVs, and that lead is accelerating in what is now the world’s largest auto market. This also matters because of China’s additional advantage in its EV/self-driving manufacturing ecosystem vs its US counterparts. Analysts project that China’s robotaxi fleet will likely grow to tens of thousands of vehicles by the end of 2026, surpassing US deployments. More here.

  5. AI Depreciation Curves Latest Investor Issue: Investors’ latest worry over the AI Infrastructure Capex race is the depreciation curves for the underlying AI GPU chip spend by big tech companies. Driven by Nvidia’s annual AI GPU Generation ramps, the myriad AI infrastructure chip collateral financing deals typically have built-in depreciation schedules for the AI GPUs, usually running from 2 to 7 years or more. Shorter schedules could play havoc with the financing assumptions, while longer schedules may run afoul of accounting prudence. And unlike the railroad boom of the 19th century or the internet/telecom boom at the turn of this century, where rails and fiber held their value for decades, AI GPUs risk obsolescence within a few years. Thus the quandary, and the latest headwind worry. More here.
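To make the sensitivity concrete, here is a minimal, illustrative sketch of straight-line depreciation under different useful-life assumptions. The $10 billion capex figure and the straight-line method are assumptions for illustration only, not figures from any company’s filings or any specific financing deal.

```python
# Illustrative only: straight-line depreciation of an assumed $10B AI GPU capex
# under useful-life assumptions ranging from 2 to 7 years. All numbers are hypothetical.

CAPEX_BILLIONS = 10.0  # assumed GPU spend, not tied to any real company

def annual_depreciation(capex: float, useful_life_years: int) -> float:
    """Straight-line depreciation: equal expense booked each year over the useful life."""
    return capex / useful_life_years

for years in range(2, 8):
    expense = annual_depreciation(CAPEX_BILLIONS, years)
    print(f"{years}-year schedule: ${expense:.2f}B depreciation expense per year")
```

Under these assumed numbers, a 2-year schedule books about $5.0B of expense per year versus roughly $1.4B on a 7-year schedule, a spread of about 3.5x, which is why the choice of curve moves both reported earnings and the value of GPU-backed collateral so significantly.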

Other AI Readings for the weekend:

  1. Enterprise AI Agents are all so similar. More here.

  2. Nvidia’s new LLM AI investment spree. More here.

(Additional Note: For more weekend AI listening, we have a new podcast series on AI, from a Gen Z to Boomer perspective. It’s called AI Ramblings, now 28 weekly Episodes and counting. More in the latest AI Ramblings Episode 28 on the AI issues of the day, as well as our latest ‘Reads’ and ‘Obsessions’ of the Week. Co-hosted with my Gen Z nephew Neal Makwana.)

Up next, the Sunday ‘The Bigger Picture’ tomorrow. Stay tuned.

(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)




