
AI: Waiting for AI 'God-like' AGI. RTZ #879
The Bigger Picture, Sunday, October 20, 2025
As we experience 2025’s accelerating AI Infrastructure ramp, and the non-stop ‘circular deals’ boom, we get another timely, pragmatic reminder that all this may take a bit more time. The AI AGI promises may take a lot longer than our current ‘AI Bubble’ driven market expectations suggest.
Despite all the implicit optimism that AI AGI/Superintelligence is around the corner in this AI Tech Wave, we get another thoughtful reminder from Andrej Karpathy, of AI ‘vibe-coding’ fame, that it all may take a decade or more for AI to really do the ‘God-like’ things we’re all excited about. It’s something I’ve been stressing for almost two years now here at AI-RTZ, despite my long-term optimism. And that is the Bigger Picture I’d like to focus on this Sunday.
Last year, one of the core ‘AI Godfathers’, Yann LeCun of Meta AI Research, made an articulate case that “large language model (LLM) AIs will not reach human intelligence”. You can hear it in his own words in this YouTube video to get the technical timelines.
This week we saw similarly sober timelines from Andrej Karpathy, former head of AI for self-driving at Tesla, founding member of OpenAI, and one of the deep AI technical educators of this AI era. In a two-plus-hour video interview with podcaster Dwarkesh Patel, Andrej lays out the bottom-up, pragmatic reasons why AGI is a decade away, and why current work on LLM AIs, AI Agents, and AI Reasoning has a LOT of work ahead before it yields really fruitful results.
It’s laid out well in this transcript of the discussion from the beginning:
“Andrej explains why reinforcement learning is terrible (but everything else is much worse), why model collapse prevents LLMs from learning the way humans do, why AGI will just blend into the previous ~2.5 centuries of 2% GDP growth, why self driving took so long to crack, and what he sees as the future of education.”
“Watch on YouTube; listen on Apple Podcasts or Spotify.”
It kicks off right into the crux of it:
“Dwarkesh Patel 00:00:58
“What do you think will take a decade to accomplish? What are the bottlenecks?”
“Andrej Karpathy 00:01:02”
“Actually making it work. When you’re talking about an agent, or what the labs have in mind and maybe what I have in mind as well, you should think of it almost like an employee or an intern that you would hire to work with you. For example, you work with some employees here. When would you prefer to have an agent like Claude or Codex do that work?”
“Currently, of course they can’t. What would it take for them to be able to do that? Why don’t you do it today? The reason you don’t do it today is because they just don’t work. They don’t have enough intelligence, they’re not multimodal enough, they can’t do computer use and all this stuff.”
“They don’t do a lot of the things you’ve alluded to earlier. They don’t have continual learning. You can’t just tell them something and they’ll remember it. They’re cognitively lacking and it’s just not working. It will take about a decade to work through all of those issues.”
He goes on:
“This is where you get into a bit of my own intuition, and doing a bit of an extrapolation with respect to my own experience in the field. I’ve been in AI for almost two decades. It’s going to be 15 years or so, not that long. You had [AI Guru] Richard Sutton here, who was around for much longer. I do have about 15 years of experience of people making predictions, of seeing how they turned out. Also I was in the industry for a while, I was in research, and I’ve worked in the industry for a while. I have a general intuition that I have left from that.”
“I feel like the problems are tractable, they’re surmountable, but they’re still difficult. If I just average it out, it just feels like a decade to me.”
The whole discussion is worth a watch and listen, even through the deeper technical bits that may be of less interest to general observers. Here are the timestamps to jump around if needed:
“Timestamps
(00:00:00) – AGI is still a decade away
(00:29:45) – LLM cognitive deficits
(00:49:38) – How do humans learn?
(01:06:25) – AGI will blend into 2% GDP growth
(01:32:50) – Evolution of intelligence & culture”
The key here is the conviction on the timelines, coming from a deep AI practitioner: a practical, technical understanding of how much even our foremost AI Researchers don’t know, and how much work is really left to do from a bottom-up perspective.
It’s useful to take it all in. And if you have more interest, go through Yann LeCun’s arguments on LLM AIs being a dead end as well.
Both these discussions will prove rewarding, and provide a fuller set of context around this AI Tech Wave. And a better sense of the Bigger Picture for AI timelines ahead.
All while we figure out how long the current game of AI ‘Musical Chairs’ and ‘Boomerangs’ continues in the short term.
As Axios puts it well at the end in “AI industry’s exponential faith throws a curve”:
“The AI industry has learned the “bitter lesson” that it should stop trying to teach machines a structured set of facts and instead just build ever-bigger machines that know how to learn.”
“But if it continues to focus on growth for its own sake and fails to steer its growth in a purposeful way, it could have more bitter lessons in store.”
“The bottom line: The trillion-dollar choice AI asks each of us to make is, are we willing to believe that “this time is different” — or do we stick with “what goes up must come down.”
I’ve long maintained that my long-term enthusiasm for this AI Tech Wave is based mostly on the bottom-up evolution of the AI technologies, as they’re developed and deployed at scale.
To be sure, AI will allow us to do A LOT of cool things between now and then. But the bigger AGI AI expectations in the near term may be premature.
In the meantime, the financial ebbs and flows are a separate set of events that should not be confused with the first. And that is easier said than done in the early, exciting days of any tech wave. Stay tuned.
(NOTE: The discussions here are for information purposes only, and not meant as investment advice at any time. Thanks for joining us here)