AI: Weekly Summary. RTZ #367

1. Microsoft’s AI ‘Copilot+ PCs’ & Build Developer Conference: Microsoft unveiled its new AI ‘Copilot+ PCs’, based on Qualcomm’s new low-power Snapdragon X Elite chips, kicking off its annual Build developer conference with a lineup of Windows 11 computers re-architected from the bottom up for AI. The new ‘Copilot+ PC’ line, including the Microsoft Surface Pro tablet and Surface laptops, along with third-party OEM computers from Dell, HP, Lenovo, Acer, Asus, Samsung and others, features advanced AI capabilities built on deep hardware integration at the silicon level. These computers deliver trillions of operations per second (TOPS, a new AI PC metric) and embed over 40 special-purpose AI models deeply into Windows and the chips, producing low-latency, private and secure inference-driven AI computations on local user data. This includes Microsoft’s recent Phi-3 SLM, embedded into custom silicon in the computers. These innovations are part of Microsoft’s broader strategy to integrate AI into everyday computing, enhancing functionality like real-time translation and document recall without needing an internet connection. This direction will likely be duplicated by Apple at its WWDC developer conference coming up on June 10. More here.

2. Nvidia still driving the AI train: Nvidia, under CEO Jensen Huang, continues to dominate the AI chip market, with its latest financial results surpassing Wall Street expectations on most key metrics, from AI GPU system revenues, to AI data center revenues, to networking and cabling sales. The company also continues to drive over 40% of inference computations on its AI GPU systems, an ongoing metric the street is focusing on due to possible inroads by AMD, Intel and others. The company’s new B200 “Blackwell” chip, introduced at its recent GTC annual developer conference, promises multi-fold improvements over its current Hopper H100 line of AI GPUs, continuing Nvidia’s ‘accelerated computing’ strategy into next year. All major cloud AI tech companies, including Amazon, Google, Microsoft, and OpenAI, are continuing to deploy Nvidia products for their AI training and inference solutions, both in the cloud and ‘on-prem’, thus for now mitigating street concerns of a ‘sales blip’ during the product transition. More here.

3. Meta is trying both ends of AI pricing: Meta is apparently contemplating a premium tier for the Meta AI search service it rolled out recently, built on its open source Llama 3 LLM models in multiple sizes. If true, this would emulate the premium subscription pricing strategy implemented by competitors like OpenAI, Microsoft, Google and others. At the same time, releasing its latest Llama 3 in open source sizes from small to large means the company can continue to ‘throw sand in the gears’ of the cloud services models of its peers and competitors like Amazon AWS/Anthropic, Microsoft Azure/OpenAI, Google Gemini, and others. In the meantime, Meta also continues to focus its efforts on building better ‘reasoning’ approaches for its AI systems, led by AI chief Yann LeCun. More here.

4. Amazon AWS’s LA Developer Conference: Amazon AWS held a developer summit in Los Angeles this week, with a keynote by Matt Wood, VP for AI Products. The conference featured demos and sessions on generative AI, predictive machine learning, and advanced data analytics, showcasing real-world applications and innovations. Representative customers, including PGA Golf, Lonely Planet and others, presented on the iterative ways they’re leveraging Amazon AWS’s core and new AI services to bring generative AI into their businesses. The keynote highlighted the latest trends and offered opportunities for hands-on experience with AWS tools like Amazon Bedrock and SageMaker. More on Amazon AWS’s overall AI strategy here.

5. Anthropic’s new LLM AI ‘Reasoning’ Research: Anthropic announced new AI research on better understanding how the neural matrix math inside LLM AI systems really works, outlining advances in understanding the ‘inner workings of AI models’. The research is notable as all the major companies are deeply invested in scaling AI models while doing their best to make the results more ‘interpretable’, reliable, and, of course, safe within defined guardrails. This continues to be a challenging task for the industry at large, with billions at stake as companies roll out AI-based search and services, in many cases before they’re as reliable and fool-proof as mainstream users would like. Anthropic continues to put a high priority on these research and safety imperatives as it too races to roll out next-gen versions of its Claude 3 models in various sizes, to go head to head with the latest from OpenAI, Google and others. More here.

Other AI Readings for weekend:

  1. Google’s Gemini AI hallucinations continue to attract media attention as the company rolls out its AI-powered Search to millions of users in the US. Meanwhile, Alphabet/Google CEO Sundar Pichai gave a confident and balanced presentation of the company’s longer-term Gemini AI strategy under critical questioning by Verge editor-in-chief Nilay Patel.

  2. OpenAI signs a five-year, $250 million AI content licensing deal with News Corp., another data point in an emerging trend.

Thanks for joining this Saturday with your beverage of choice. 

Up next, the Sunday ‘The Bigger Picture’ tomorrow. Stay tuned. 

(NOTE: The discussions here are for information purposes only, and are not meant as investment advice at any time. Thanks for joining us here.)

Want the latest?

Sign up for Michael Parekh's Newsletter below:

Subscribe Here