AI: Weekly Summary. RTZ #1053
-
Anthropic’s new Mythos model, & Glasswing cybersecurity sheen: The NY Times unpacked Anthropic’s carefully choreographed pre-announcement of its next frontier model, code-named Mythos. It’s wrapped in Project Glasswing, an initiative with a range of partners aimed at ‘Securing Critical Software for the AI era’ and at assuring Mythos safety across industries. It’s already a topic of conversation in government, with recent discussions by Commerce and the Fed with commercial banks. This all comes ahead of what is shaping up to be a sprint of model launches through Q2 vs peers. The Anthropic strategy is now worth watching on its own terms: enterprise-safe, interpretability-forward, no ads ever, and a deliberate pace of capability releases that treats each frontier model as a commitment rather than a marketing moment. Especially as its revenues ramp to over $30 billion in ARR, close to lapping OpenAI. Mythos is the next load-bearing beam in that strategy, and the early enterprise teasers suggest a meaningful jump in reasoning, long-context reliability, and Claude Code / Cowork integration. More here.
-
OpenAI’s close and personal Business battle with Anthropic: OpenAI’s new ‘Spud’ AI models are also getting the cybersecurity collaboration-with-partners treatment, like Anthropic’s Mythos/Glasswing discussed above. This as OpenAI continues to prep for its mega-IPO later this year, with updated 2030 forecasts leaked to the media. This as OpenAI fine-tunes its communications strategy with its acquisition of the ‘Tech Bro’ podcast network TBPN for ‘low hundreds of millions’. And it presages over $100 billion in ads revenue by 2030, vs $2.5 billion thus far in ‘small tests’. All ahead of a mega-2026 IPO, preferably ahead of arch-rival Anthropic. My take on that here in the “Real OpenAI/Anthropic playbook”. Both OpenAI and Anthropic are running scarce token supply against exploding enterprise demand, and both are testing new “all-you-can-eat” tiers, new rate-limit ladders, and new “bring your own Compute” carveouts for their biggest customers. The pricing is getting intense because the customers are the same: Fortune 500 CIOs are running dual-vendor bakeoffs on the same workloads. This battle got more intense this week as Anthropic kicked OpenAI’s OpenClaw off its Claude products via APIs. And introduced its own ‘Claude Managed Agents’ offering on a la carte pricing. More here.
-
Meta’s ‘Good Enough’ Muse Spark AI Model: Meta finally launched its long-expected post-Llama AI model, dubbed ‘Muse Spark’. The model is the first release out of its new MSL, Meta Superintelligence Lab unit, led by $15 billion acquihire Alexandr Wang. And it initially tests well against existing offerings by peers, though it’s unclear how it will rank against the upgraded models from Anthropic and OpenAI in particular. Regardless, the model should be good enough to support the company’s Meta AI offerings across Instagram, Facebook, WhatsApp and other Meta properties, serving over 3.5 billion users worldwide. The launch highlights three dimensions of founder/CEO Mark Zuckerberg’s strategy. First, it confirms Meta is willing to run a dual-track strategy, closed frontier alongside open-weights Llama, for the first time. Second, the benchmark story is genuinely competitive on reasoning and multimodal evals against GPT-5 and Claude Opus 4.6, which means the frontier is now a four-horse race (OpenAI, Anthropic, Meta, Google) with xAI a half-step behind. Third, Meta’s distribution advantage, 3+ billion users across Facebook, Instagram, WhatsApp, and Threads, means even a modestly differentiated frontier model lands in more human hands than almost any lab besides Google can reach. The ‘Good Enough’ Meta approach should serve it well vs historical precedents in other tech waves. Meta’s real challenge going forward is not the AI models; it is how to get billions of mainstream users into a daily Meta AI habit at scale. More here.
-
Amazon’s AI fueled ‘CEO Letter’: Amazon CEO Andy Jassy explained Amazon’s $200+ billion AI investments in his annual letter to shareholders. It’s of course bigger than its peers’, and is focused especially on Trainium, its own AI chips, vs Google’s TPUs and Nvidia’s GPUs. The scope of the investments is notable, especially for the various points of emphasis. Jassy also provides updates on its Leo satellite initiative (formerly Project Kuiper), set up to be a distant #2 to Elon Musk’s SpaceX/Starlink network. Leo is set to launch commercially mid-year. Jassy has been shaking things up at AWS for a while now to sharpen AI initiatives. The letter is Jassy’s fifth installment since picking up the CEO role from founder Jeff Bezos in 2021. Jassy acknowledges that FCF for the company declined from $38 billion to $11 billion in 2025, driven in part by an almost $51 billion ramp in capex. Again, “No Expense Spared” AI spending of course. The core takeaway here is the emphasis on the AI commitment, just like its peers. More here.
-
New Wall Street Data addresses AI Job Fears: New reports show Wall Street firms Goldman Sachs and Morgan Stanley responding with data to the bleak “AI is taking jobs“ narrative that has dominated the AI landscape for a while now. Overall, the data shows directionally more job creation than destruction, in modest numbers. The short version: the latest BLS, LinkedIn, and Indeed numbers show AI-adjacent job creation accelerating faster than AI-adjacent job destruction, at least at the aggregate level. Software engineering roles are bifurcating (senior roles stable or growing, junior roles under pressure), while adjacent categories (implementation, evaluation, safety, RLHF, solutions engineering, and the whole layer of “AI plumbers” wiring LLMs into enterprise workflows) are adding headcount fast. The data does, however, show very real displacement at the specific role level (customer support, junior legal, junior marketing, entry-level analyst). But the overall “AI Jobs and Software Apocalypse” framing remains, for now, a story the data does not yet tell. More here.
Other AI Readings for weekend:
-
Google DeepMind launches Gemma 4, a local AI model with notable capabilities as a Small Language Model (SLM). More here.
-
The new AI Ramblings Daily (ARD) series has started, with topical takes on AI events and issues. More here with Episode 50, and Episode 51 here.
(Additional Note: For more weekend AI listening, here’s the latest AI Ramblings Episode 49 on topical items. This week, a deep dive on Mega AI IPOs being force-fed into Passive Index Funds, ‘We are the Geese’ style. And more.)
Up next: the Sunday ‘Bigger Picture’ tomorrow. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)