
AI's 'FDEs' go from Forward Deployed Engineers to Entities. ARD #68

The frame running through every item today: AI FDEs go from engineers to entities.

The Forward Deployed Engineer (FDE) — popularized at Palantir, then adopted across the AI-native frontier vendors — was the early manifestation of how AI gets into the enterprise.

What’s happening now is an expansion of that pattern — far bigger, and going mainstream in the enterprise:

OpenAI and Anthropic are scaling that pattern from individual headcount to institutional structures — joint ventures, partner alliances, multi-billion-dollar PE-backed entities. Same FDE acronym, two different referents. The enterprise distribution phase of the AI wave is here.

Three Key Takes today:

(1) OpenAI and Anthropic’s ‘Forward Deployment’ Entities. OpenAI just finalized a $10 billion joint venture with private equity firms — Goldman Sachs, Blackstone, others — to deploy AI in the enterprise. Anthropic is doing the same at scale. Both extend the Frontier Alliance Partnership OpenAI announced a few weeks ago with McKinsey, Bain, and major consulting firms. The reporting: Bloomberg — OpenAI finalizes $10B joint venture with PE firms to deploy AI. The FDE backdrop in tech and AI startups: Medium — The Rise of the Forward Deployed Engineer (FDE) in Tech and AI Startups. The OpenAI Frontier Alliance with major consulting partners: OpenAI — Frontier Alliance Partners. Standing thesis from the AI-RTZ archive: AI-RTZ #494 — AI Agents Arrive in the Enterprise.

MP Take: This marks AI entering the distribution phase in the enterprise in earnest — like every tech wave past. It’ll take years to integrate AI — especially in its recent reasoning and agent forms — into legacy enterprises across vertical domains.

This has been true of technology waves for decades. AI, built on new probabilistic foundations, requires more integration work than the deterministic software technologies of the past half century. It’s a bigger lift, and it will require a bigger effort by the AI vendors — which means more time and more money to get results at scale.

The shift from individual FDEs to institutional Forward Deployment Entities is the natural response. OpenAI’s $10B PE-backed JV and the McKinsey-anchored Frontier Alliance are both signs that the enterprise distribution phase has officially begun.

(2) SAP / Salesforce Blocking ‘Unauthorized AI Agents’. The flip side of the forward-deployment push is showing up in incumbent defensive moves. SAP is the lead news this week — they’re blocking ‘unauthorized AI agents’ (OpenClaw is the specific lead) from accessing their enterprise software environments. Expect Salesforce, Microsoft, and the rest of the incumbent enterprise vendor cohort to follow with similar defensive motions. The reporting: The Information — SAP moves to block OpenClaw, ‘unauthorized AI agents’. Standing thesis on the incumbent-defensive pattern: AI-RTZ #847 — Microsoft and Salesforce Sweating.

MP Take: The issue of incentives and alignment with customers is a key characteristic of every tech wave. Legacy and incumbent vendors will of course resist new tech options and alternatives.

Expect this in force across the industry as AI Agents and other software/services emerge — especially from AI-native startups challenging incumbents. SAP blocking OpenClaw is the first big public example, but it won’t be the last. The defensive motion runs in parallel with the forward-deployment push — incumbents protecting installed bases, while OpenAI and Anthropic build institutional bridges into those same accounts via the consulting and PE-JV channels. Both happen at once. Every tech wave does this.

(3) Anthropic Changing Pricing Ahead of Broader AI Agent Deployments. Anthropic is shifting enterprise billing from flat fees to per-token pricing — and the timing is not accidental. It comes ahead of the broader AI Agent deployment wave, and it lands while the leading vendors are AI-compute-supply-chain-constrained as they build data centers at unprecedented scale. Token-maxing for AI agents can explode compute costs by hundreds of times. Uber’s CTO publicly complained their developers had blown through the year’s AI agent budget by May. The reporting: Implicator AI — Anthropic shifts enterprise billing to per-token pricing; the flat-fee era is over. Standing thesis on AI Agent and Compute pricing moving from subscriptions to a-la-carte: AI-RTZ #1064 — Anthropic and OpenAI’s Velvet Rope.

MP Take: Forward deployment at scale is the ideal time to make needed pricing adjustments — especially at a time when the leading vendors, OpenAI and Anthropic in particular, are AI-compute-supply-chain-challenged as they build data centers at scale.

The forward-deployment-entity efforts help socialize and explain the pricing and performance imperatives to enterprise customers as they plan wider and deeper AI deployments. Similar pricing inculcation trends occurred in prior tech waves — from mainframes to client/server to cloud and others.

Per-token pricing is the right shape for an agent-driven world; flat fees were always the wrong shape for usage that scales with autonomous orchestration. The Forward Deployment Entity is the channel through which the pricing reset gets socialized — that’s why this is happening now.
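To make the shape difference concrete, here is a minimal back-of-the-envelope sketch in Python. All rates and token volumes below are hypothetical, chosen for illustration only; they are not Anthropic’s actual pricing or any real customer’s usage:

```python
# Back-of-the-envelope comparison: flat-fee vs per-token billing.
# All rates and volumes are hypothetical, for illustration only.

FLAT_FEE = 200.00        # $/seat/month, hypothetical flat subscription
PRICE_PER_MTOK = 5.00    # $ per million tokens, hypothetical blended rate

def monthly_cost_per_token(tokens: int) -> float:
    """Dollar cost of a month's usage under per-token billing."""
    return tokens / 1_000_000 * PRICE_PER_MTOK

# A human chatting interactively vs an autonomous agent loop
# orchestrating sub-tasks unattended (illustrative volumes):
usage = {
    "human chat": 2_000_000,      # ~2M tokens/month
    "agent loop": 500_000_000,    # agents can run 100-250x that
}

for label, tokens in usage.items():
    cost = monthly_cost_per_token(tokens)
    print(f"{label:10s}: {tokens / 1e6:6.0f}M tokens -> "
          f"${cost:9,.2f} (vs ${FLAT_FEE:,.2f} flat fee)")
```

Under a flat fee, the vendor absorbs the 250x jump from human to agent usage; under per-token billing, the customer does. That asymmetry is why flat fees were the wrong shape for autonomous orchestration.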

Plus: Gadget AI — Blackberry QNX is still important in the AI era. The surprising story this week: Blackberry’s QNX, the embedded real-time operating system quietly running in millions of cars, medical devices, and industrial systems, is still very much relevant in the AI era. The ‘SaaS apocalypse’ narrative gets the story backwards. Source: WSJ — Blackberry QNX software in cars. Standing thesis on historical tech bargains: AI-RTZ — Apple Vision Pro: A Historical Bargain.

MP Take: AI success at scale is going to require lots of ‘hybrid’ tech that leverages existing deterministic software with all the new AI probabilistic software designed for reasoning and agents. Most will get re-used. Not only as software, but as data sources for AI systems going forward.

So current fears of a ‘SaaS apocalypse’ are overdone. Enterprise software processes today, and the data that incumbent companies are using to serve their customers, have big opportunities to get a ‘second life’ in AI systems going forward.

Examples include software ticket systems from companies like Linear and Atlassian, which are working with OpenAI and Anthropic to integrate incumbent developer-support systems with AI Agents and related services. Blackberry QNX is the lower-profile version of the same story — decades-old deterministic embedded code, suddenly relevant again as the data and runtime substrate for AI-driven cars and devices. Hybrid is the shape of the AI era.

Bonus — today’s AI-RTZ companion #1076 covers Huawei’s AI gains in China over Nvidia despite chips two generations behind — and what that escalating dynamic means for US global interests as China’s home-market AI buildout accelerates ahead of the Xi-Trump tech-and-trade meeting later this month.

Closing Questions from ARD listeners and AI-RTZ readers —

  • What’s the most useful part of MP’s AI Agent subscriptions across Anthropic Claude Cowork, OpenAI Codex, Google Gemini Ultra, and Perplexity Computer Max?

    • After a lot of work, MP has found a few established daily processing steps that save 2-3 hours every day. That’s the daily, compounding leverage — repeatable workflows MP has built across the four agent surfaces that absorb mechanical work and free up attention for analysis and judgment. Standing reference from the AI-RTZ archive: AI-RTZ #1054 — Working out daily with AI and AI Agents.

  • What are the biggest issues using them daily?

    • Two persistent issues. First — the systems still require daily reminding for proactive suggestions on ways to move forward on various projects. They don’t volunteer. They don’t anticipate. The user has to keep prompting. Second — the ‘cold start’ problem is real. There are few guides on how to leverage these systems if one is not a coder or a developer. Everyday office and personal tasks require constant, proactive querying, curiosity, and experimentation to figure out what these tools can actually do for someone outside the developer mainstream. The mainstream on-ramp is still the missing piece. Standing thesis: AI-RTZ #1061 — Fragility of Today’s Claude Cowork. Stay tuned.


(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)


Short Clips from today’s episode

Short — AI Agents Still Need Daily Prodding The user has to do a lot of the prompting work themselves. These systems are not proactive. They’re called AI agents, but they’re not agents in the way they’ve been popularized — like an intern who comes in every day, eager beaver, suggesting things. Even the most expensive ones — Comet Max from Perplexity, Cowork from Anthropic — you have to prod them every day. The cold-start problem is real.

Short — AI’s Mainframe Stage: Where the Money Is We’re still in what I’m calling the mainframe stage of AI, not the consumer stage yet. On the mainframe side, just like the mainframe era in the 60s, client/server in the 80s, cloud in the 2000s — the enterprise is where the money is. As the famous saying goes, you rob banks because that’s where the money is.

Short — AI Pricing: From Buffet to A La Carte Anthropic is shifting enterprise billing from flat fees to per-token pricing. Token-maxing for AI agents explodes compute costs hundreds of times. Uber’s CTO recently complained developers were already blowing the year’s AI budget by May. Forward deployment at scale is the ideal time to make pricing adjustments — same pattern as prior waves from mainframes to client/server to cloud.

Short — Old Software’s AI Second Life Blackberry’s QNX operating system — left for dead after the iPhone — is having a second life as the embedded OS quietly powering cars, trucks, infotainment systems, and increasingly AI agents. AI success at scale requires hybrid tech — existing deterministic software with all the new probabilistic AI software for reasoning and agents. SaaS apocalypse fears are overdone. Hybrid is the shape of the AI era.


About AI Ramblings Daily (ARD), and AI-RTZ

Both are daily. Both are free. Both are about AI. But they’re different mediums carrying different messages.

AI-RTZ is the morning text — a deeper written take on one idea, published by 5 AM EST. Today: post #1076 — Huawei’s AI gains in China over Nvidia, despite chips two generations behind.

AI Ramblings Daily is the afternoon video + podcast — my ad hoc takes and perspective on the day’s AI issues & news flow, around 20 minutes, with short 1-2 minute clips for quick topic views. Today: episode #68.

Subscribe to either or both on michaelparekh.substack.com. They run as separate Sections you can opt into or out of.

