Estimated Reading Time: 5 minutes
Today’s AI knows a lot about the world. It just hasn’t learned how to stand in it yet.
We stand at a rare moment when artificial intelligence can be meaningfully applied to enterprises that power millions of mission-critical services across a broad spectrum of industries. As we close out 2025, an undeniably exciting year for intelligence, we have seen rapid progress in usefulness, adoption, and scale. Experts from every corner of the field have debated capabilities, limits, and trajectories, each from their own vantage point.
Yet, despite all this momentum, something still feels missing.
Today’s AI often lacks the inner fluid (like the one in our ears 👂), the sense-making substrate that gives true awareness of space, context, and consequence. There is still much to be done: enabling continuous learning, rethinking intelligence architectures, and drawing inspiration from deeper biological ideas, including the Thousand Brains theory pioneered by Jeff Hawkins and his collaborators. These directions may not produce instant breakthroughs, but they point toward a slower, more profound shift, one that could fundamentally reshape how intelligence is built and applied.
What follows are a few key insights and reflections I’ve gathered over the past few days, as I’ve revisited these ideas and conversations with a fresh perspective.
In the summer and autumn of 2025, three of the most influential minds in artificial intelligence offered long, candid conversations about what intelligence is and isn’t in today’s machines.
- Andrej Karpathy, former OpenAI research lead and AI educator, is now the founder of Eureka Labs and has spent more than a decade building and teaching deep learning systems. He combines technical insight with a practical sense of how AI’s evolution is unfolding.
- Ilya Sutskever, co-founder and former Chief Scientist of OpenAI and now head of Safe Superintelligence Inc., helped architect the deep learning advances that produced today’s large language models (LLMs). His perspective is as much about the mathematical and strategic limits of current paradigms as it is about the future.
- Richard Sutton, a Turing Award-winning pioneer of reinforcement learning (RL) and a longtime theorist of the “bitter lesson” in AI, brings a foundational, philosophical lens to the question of intelligence itself, rooted in goals, experience, and consequences.
Each of them tackles the core puzzle: what does it mean for an AI to be intelligent? Their answers, at times united, at times sharply divergent, illuminate the frontier of research and reshape how we think about machines that can learn rather than merely predict.
Karpathy: Ghosts Before Agents, The Path to Real World AI
Karpathy begins with a striking metaphor: today’s AI doesn’t build animals; it summons ghosts or spirits.
What does he mean?
- Modern large language models are fantastic at absorbing patterns, the statistical shadows of human communication found in massive volumes of text and code. They echo human choices, mimic structures, and can generate remarkable outputs.
- But this brilliance hides a fundamental limitation: they aren’t inherently tied to consequences in the world. They excel at patterns, yet they lack the embodied grounding that creatures, including humans, use to anchor intelligence. In Karpathy’s words, this process yields ghostly digital minds rather than adaptive, oriented agents.
Key Gaps Karpathy Highlights
- No Continual Learning: Current AI doesn’t reliably retain new information the way humans do. Tell it something once, and it may forget it the next session.
- Poor World Interaction: AI systems don’t yet use external tools, manipulate environments, or persistently test their understanding.
- Reliance on Memorization: LLMs can memorize enormous volumes of data, but this may actually hamper deeper abstraction.
This may explain why, despite incredible hype and rapid progress, Karpathy predicts that true AI agents, systems that can operate autonomously with reliable competence across tasks, are still some distance away.
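To make the first of those gaps concrete, here is a minimal sketch in Python, with a hypothetical `generate` function standing in for any chat-model API: nothing the user says ever updates the model’s weights, so a fact from one session simply does not exist in the next unless we paste it back into the prompt ourselves.

```python
# Minimal sketch of the continual-learning gap. `generate` is a hypothetical
# stand-in for any chat-completion API: it maps a prompt to a reply using
# frozen weights, and nothing in this exchange ever changes those weights.

def generate(prompt: str) -> str:
    return "...model reply..."  # placeholder reply; no learning happens here

# Session 1: the user teaches the model a new fact.
session_1 = ["User: Our deploy key rotates every Friday."]
session_1.append("Assistant: " + generate("\n".join(session_1)))

# Session 2: a fresh context window. The fact lives neither in the weights
# nor in this prompt, so from the model's point of view it never happened.
session_2 = ["User: When does our deploy key rotate?"]
print(generate("\n".join(session_2)))              # the model can only guess

# The common workaround is to re-inject "memory" into the prompt manually.
session_2_with_memory = session_1 + session_2
print(generate("\n".join(session_2_with_memory)))  # now the fact is in context
```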
Sutskever: Beyond Scaling, A New Age of Research
Ilya Sutskever, architect of many of the breakthroughs underpinning current deep learning systems, offers a strategic pivot from nostalgia for simple scaling toward a new frontier of ideas.
For years, the prevailing wisdom in AI, often summarized as “just scale it up”, held that more data, more computation, and bigger models would continue to drive progress. Sutskever points out that this has been fruitful so far, but it’s reaching a point of diminishing returns: we already have enormous datasets and colossal compute resources, yet models still generalize much worse than humans.
Where does that leave the field?
- Not at a dead end: compute and scale still matter.
- But at a crossroads: the next breakthroughs will not come from scale alone. Instead, they will come from new ideas, new training paradigms, and fresh theoretical innovations.
Sutskever effectively declares a transition from the age of scaling back into the age of research, but now armed with unprecedented computing power.
Sutton: Intelligence as Goals and Experience
Richard Sutton takes a more radical step back and asks: what is intelligence, fundamentally?
His answer reframes the conversation: intelligence is not fluent text or benchmark scores; it is about goals, actions, consequences, and learning from experience.
Prediction ≠ Understanding
LLMs are extraordinary at predicting what humans might say next, based on patterns in text. Sutton challenges the assumption that this is a world model in any meaningful sense.
- A world model should enable a system to predict how events unfold when it takes actions in an environment.
- LLMs predict human outputs, not the outcomes of actions in a real or simulated world.
- Therefore, next token prediction is not the kind of intelligence Sutton has in mind.
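One rough way to see the distinction Sutton is drawing, sketched below with purely hypothetical interfaces: a language model is trained to predict what a human would write next, while a world model in his sense is trained to predict what the environment will do next, given the agent’s own action.

```python
# Two different prediction problems, sketched as hypothetical interfaces.

# (1) Next-token prediction: given the text so far, predict what a human
#     would most likely write next. The ground truth is human output.
def language_model(tokens: list[str]) -> str:
    ...  # returns the most probable next token

# (2) A world model in Sutton's sense: given the current state and the
#     action the agent takes, predict the consequence in the environment.
def world_model(state, action) -> tuple:
    ...  # returns (predicted_next_state, predicted_reward)

# The first answers "what would a person say here?"; the second answers
# "what will happen if I do this?", which is what an acting agent needs.
```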
For Sutton, learning is grounded in experience with consequences. That means:
- Agents act, observe outcomes, and adjust their behavior to improve future outcomes.
- This loop of action → consequence → learning is the core of reinforcement learning and, to Sutton, the essence of intelligence.
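As a toy illustration of that loop, here is a minimal Q-learning sketch in Python on a made-up five-cell corridor; the environment, names, and numbers are all illustrative rather than taken from any of the conversations, but the act → observe → adjust cycle is the one Sutton means.

```python
import random

# Toy environment: the agent walks a corridor of positions 0..4 and is
# rewarded only when it reaches position 4.
class WalkEnv:
    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):                   # action is -1 (left) or +1 (right)
        self.pos = max(0, min(4, self.pos + action))
        reward = 1.0 if self.pos == 4 else 0.0
        return self.pos, reward, self.pos == 4

ACTIONS = (-1, +1)
q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}   # learned value estimates
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def pick_action(state):
    if random.random() < epsilon:                       # occasionally explore
        return random.choice(ACTIONS)
    best = max(q[(state, a)] for a in ACTIONS)          # otherwise act greedily,
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])  # ties at random

env = WalkEnv()
for episode in range(200):
    state, done = env.reset(), False
    for _ in range(100):                                # cap episode length
        action = pick_action(state)                     # act
        next_state, reward, done = env.step(action)     # observe the consequence
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        # learn: nudge the estimate toward what experience just revealed
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state
        if done:
            break

# After training, the greedy action at every non-terminal position should point right.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(4)})
```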
He pushes back against the idea that LLM patterns could serve as reliable priors for true learning: a prior implies some ground truth to converge toward. But if the training objective is simply to echo human text, there’s no inherent “truth” that connects directly to physical outcomes.
Converging Toward the Future: The Triangulated View
While Karpathy, Sutskever, and Sutton arrive at the question from different angles, there’s surprising alignment on several key points:
Prediction Alone Isn’t Enough
All three recognize a gap between remarkable performance on benchmarks and robust, generalizable intelligence.
- Karpathy calls today’s systems ghosts, impressive simulations of behavior.
- Sutskever sees models that still struggle to generalize the way humans do.
- Sutton points out that imitation without action doesn’t constitute true understanding.
Learning from Experience Matters
Whether framed as agentic action (Sutton), as new research paradigms (Sutskever), or as a required capability still missing from LLMs (Karpathy), the theme of learning through interaction, feedback, and adaptation is central to all three.
Intelligence Is a Journey, Not a Benchmark
The debate is not about whether today’s AI is useful; it clearly is. Instead, it’s about whether we’re any closer to machines that genuinely understand, act with purpose, and adapt their knowledge through experience.
In the words of these thinkers, intelligence is not a mirror of human language but a dance of prediction, action, and consequence. The next steps in artificial intelligence will be shaped by how we teach machines to truly live in, and learn from, their environments.
Happy New Year, AI, may you find your balance in 2026.
Inspired by conversations hosted by Dwarkesh Patel.