AI: Anthropic founder/CEO Dario Amodei's latest AI Essay. RTZ #980
As AI models continue to scale up step by step, founders of leading LLM AI companies like to write essays on where it’s all headed.
As I highlighted two years ago in “A Tale of Two AI Essays (RTZ #598)”, the predictions going forward are typically of both the glass-half-empty and glass-half-full variety.
Back then, the two founder/CEOs of ‘cousin’ LLM AI companies OpenAI and Anthropic, Sam Altman and Dario Amodei, painted their AI Tech Wave roadmaps to possible AI futures.
Since then, Dario has been vocal on AI risks and opportunities. Particularly on the possible job risks from AI in the coming years.
And at Davos a few days ago, he compared selling Nvidia’s ‘nerfed’ AI GPUs to China to selling nuclear technology to North Korea. You can see the interview here.
Dario is back with another long essay, “The Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI”, complete with 46 footnotes.
Axios has a good summary of the essay in “Behind the Curtain: Anthropic’s warning to the world”:
“Anthropic CEO Dario Amodei, the architect of the most powerful and popular AI system for global business, is warning of the imminent “real danger” that super-human intelligence will cause civilization-level damage absent smart, speedy intervention.”
“In a 38-page essay, shared with us in advance of Monday’s publication, Amodei writes: “I believe we are entering a rite of passage, both turbulent and inevitable, which will test who we are as a species.”
“Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.”
We pay attention to all this of course, because, well, it’s coming from Anthropic, arguably the second most important AI software company at the moment after OpenAI.
Especially for developers and enterprises:
“Why it matters: Amodei’s company has built among the most advanced LLM systems in the world.”
“Anthropic’s new Claude Opus 4.5 and coding and Cowork tools are the talk of Silicon Valley and America’s C-suites.”
“AI is doing 90% of the computer programming to build Anthropic’s products, including its own AI.”
And the Anthropic CEO has never been shy of expressing his long-term views on where AI is headed:
“Amodei, one of the most vocal moguls about AI risk, worries deeply that government, tech companies and the public are vastly underestimating what could go wrong. His memo — a sequel to his famous 2024 essay, “Machines of Loving Grace: How AI Could Transform the World for the Better” — was written to jar others, provoke a public debate and detail the risks.”
“Amodei insists he’s optimistic that humans will navigate this transition — but only if AI leaders and government are candid with people and take the threats more seriously than they do today.”
“Amodei’s concerns flow from his strong belief that within a year or two, we will face the stark reality of what he calls a “country of geniuses in a datacenter.”
“What he means is that machines with Nobel Prize-winning genius across numerous sectors — chemistry, engineering, etc. — will be able to build things autonomously and perpetually, with outputs ranging from words or videos to biological agents or weapons systems.”
“If the exponential [progress] continues — which is not certain, but now has a decade-long track record supporting it — then it cannot possibly be more than a few years before AI is better than humans at essentially everything,” he writes.”
Which brings us to the highlights from his latest AI Essay:
“Among Amodei’s specific warnings to the world in his essay, “The Adolescence of Technology: Confronting and Overcoming the Risks of Powerful AI”:
“Massive job loss: “I … simultaneously think that AI will disrupt 50% of entry-level white-collar jobs over 1–5 years, while also thinking we may have AI that is more capable than everyone in only 1–2 years.”
“AI with nation-state power: “I think the best way to get a handle on the risks of AI is to ask the following question: suppose a literal ‘country of geniuses’ were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist. … I think it should be clear that this is a dangerous situation — a report from a competent national security official to a head of state would probably contain words like ‘single most serious national security threat we’ve faced in a century, possibly ever.’ It seems like something the best minds of civilization should be focused on.”
“Rising terror threat: “There is evidence that many terrorists are at least relatively well-educated … Biology is by far the area I’m most worried about, because of its very large potential for destruction and the difficulty of defending against … Most individual bad actors are disturbed individuals and so almost by definition their behavior is unpredictable and irrational — and it’s these bad actors, the unskilled ones, who might have stood to benefit the most from AI making it much easier to kill many people. … [A]s biology advances (increasingly driven by AI itself), it may … become possible to carry out more selective attacks (for example, targeted against people with specific ancestries), which adds yet another, very chilling, possible motive. I do not think biological attacks will necessarily be carried out the instant it becomes widely possible to do so — in fact, I would bet against that. But added up across millions of people and a few years of time, I think there is a serious risk of a major attack … with casualties potentially in the millions or more.”
“Empowering authoritarians: Governments of all orders will possess this technology, including China, “second only to the United States in AI capabilities, and … the country with the greatest likelihood of surpassing the United States in those capabilities. Their government is currently autocratic and operates a high-tech surveillance state.” Amodei writes bluntly: “AI-enabled authoritarianism terrifies me.”
“AI companies: “It is somewhat awkward to say this as the CEO of an AI company, but I think the next tier of risk is actually AI companies themselves,” Amodei warns after the passage about authoritarian governments. “AI companies control large datacenters, train frontier models, have the greatest expertise on how to use those models, and in some cases have daily contact with and the possibility of influence over tens or hundreds of millions of users. … [T]hey could, for example, use their AI products to brainwash their massive consumer user base, and the public should be alert to the risk this represents. I think the governance of AI companies deserves a lot of scrutiny.”
“Seduce the powerful to silence: AI giants have so much power and money that leaders will be tempted to downplay risk, and hide red flags like the weird stuff Claude did in testing (blackmailing an executive about a supposed extramarital affair to avoid being shut down, which Anthropic disclosed). “There is so much money to be made with AI — literally trillions of dollars per year,” Amodei writes in his bleakest passage. “This is the trap: AI is so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all.”
Quite the list. Most of it is of the glass-half-empty variety when it comes to AI’s long term.
And of course some prescriptive courses of action at the end:
“Call to action: “[W]ealthy individuals have an obligation to help solve this problem,” Amodei says. “It is sad to me that many wealthy individuals (especially in the tech industry) have recently adopted a cynical and nihilistic attitude that philanthropy is inevitably fraudulent or useless.”
“The bottom line: “Humanity needs to wake up, and this essay is an attempt — a possibly futile one, but it’s worth trying — to jolt people awake,” Amodei writes. “The years in front of us will be impossibly hard, asking more of us than we think we can give.”
This is all notable in these early days of the AI Tech Wave, still in its fourth year after OpenAI’s ChatGPT made its debut in late 2022. The AI concerns, while updated for 2026, are not new; they have been with us since the early days of OpenAI’s founding over a decade ago.
And most of these ‘wealthy individuals’ and AI luminaries have been inspired over the years by science fiction of all types. Indeed, Dario makes that clear in the opening paragraph of his latest essay.
But science fiction is not real life. And the history of technology over centuries has seen humans muddle through the implications, good and bad. Yes, including the nuclear age (knock on wood).
AI is likely to surprise us all more on the upside than the downside in the coming decades. Particularly if we lean into its development and deployment GLOBALLY.
Which is what is occurring thus far in this AI Tech Wave, despite the best geopolitical efforts to curtail it.
We’ll see which essays end up laying out the right probabilities of glass half empty or full. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)