A few days ago, I had an hours-long conversation with Daniel Kahneman. We talked about the human brain, intuition, emotional intelligence, and artificial intelligence. On LinkedIn, I compared the experience to a music lover co-creating tunes with Taylor Swift. That analogy still holds. When you spend hours in deep dialogue with someone whose work has shaped how you think about thinking itself, the experience stays with you.
Ten Years of Kahneman’s Work
I first encountered Kahneman’s work about a decade ago. I read about the planning fallacy and recognized it instantly in every technology project I had ever managed. Deadlines that seemed reasonable at the outset. Budgets that felt solid. Teams that were confident. And yet, project after project, the same pattern of optimistic underestimation played out. Kahneman had a name for it, and more importantly, he had an explanation rooted in how our minds actually work.
His TED talk on the riddle of experience versus memory was another turning point. The idea that the experiencing self and the remembering self are fundamentally different, and that they often disagree about what constitutes a good life, changed how I think about product design. When we build digital products, are we optimizing for the experience in the moment or for how people will remember it? Those are different design problems with different solutions.
Then came Thinking, Fast and Slow. I wrote in 2017 that few books open our eyes by revealing truths hiding in plain sight, and Kahneman’s book is one of them. System 1, the fast, automatic, intuitive mode of thinking. System 2, the slow, deliberate, analytical mode. The framework is deceptively simple, but once you internalize it, you see its implications everywhere.
Chess, AI, and the Human Mind
Kahneman and I share an appreciation for chess as a lens for understanding cognition. I have written about what chess taught me about the danger of premature celebration and how the mind fails at critical moments. The Sanskrit verse vinaash kaalae vipreet buddhi (when destruction approaches, the mind fails first) captures something Kahneman has studied more rigorously than anyone: the systematic ways human judgment breaks down under pressure.
During our conversation, Kahneman talked about the beautiful chess moves made by AlphaZero, DeepMind’s chess-playing AI. What struck him was not just that AlphaZero plays at a superhuman level, but that it plays in a way that is aesthetically different from human chess. It sacrifices material for positional advantage in ways that no human grandmaster would consider. The machine has no emotional attachment to its pieces. It evaluates positions purely on their structural merit.
This is a profound observation about the difference between human and artificial intelligence. Human chess players develop intuition through thousands of games. That intuition is powerful but also constraining. A grandmaster “knows” that sacrificing a queen is almost always wrong, and that knowledge, encoded in System 1, prevents them from even considering positions where it might be right. AlphaZero has no such constraint. It evaluates every position from first principles, unconstrained by the heuristics that both empower and limit human experts.
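To make the contrast concrete, here is a toy sketch using the open-source python-chess library. The pruning rule, never even consider moving the queen to a square the opponent attacks, is my own illustrative stand-in for encoded grandmaster intuition; it is not how AlphaZero works, which couples a learned evaluation network with Monte Carlo tree search rather than hand-written rules.

```python
import chess  # pip install python-chess

def candidate_moves(board: chess.Board, use_human_heuristic: bool) -> list:
    """Enumerate candidate moves, optionally pruned by a System 1 style rule."""
    moves = list(board.legal_moves)
    if not use_human_heuristic:
        # AlphaZero-style stance: every legal move stays on the table
        # and is judged purely on the positions it leads to.
        return moves

    kept = []
    for move in moves:
        piece = board.piece_at(move.from_square)
        # Encoded intuition: "sacrificing a queen is almost always wrong,"
        # so moves that put her on an attacked square are never considered.
        looks_like_queen_sac = (
            piece is not None
            and piece.piece_type == chess.QUEEN
            and board.is_attacked_by(not board.turn, move.to_square)
        )
        if not looks_like_queen_sac:
            kept.append(move)
    return kept

# A position where the two move lists diverge: 1.e4 e5 2.Qh5 Nc6, white to move.
board = chess.Board()
for san in ["e4", "e5", "Qh5", "Nc6"]:
    board.push_san(san)

everything = set(candidate_moves(board, use_human_heuristic=False))
intuitive = set(candidate_moves(board, use_human_heuristic=True))
# Moves like Qxe5+ and Qxf7+ never even enter the intuitive player's search.
print("never considered:", sorted(board.san(m) for m in everything - intuitive))
```

Here the heuristic is right: those queen moves do lose material. The point is that the pruning happens before evaluation, so in the rare position where the sacrifice is brilliant, the intuitive player never sees it.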
The implication extends far beyond chess. In every domain where humans develop expert intuition, that intuition is both an asset and a blind spot. Experienced executives “know” which strategies work. Seasoned engineers “know” which architectures are sound. That knowledge is usually reliable. But it can also prevent us from seeing unconventional solutions that a fresh perspective, whether human or artificial, might find.
System 1, System 2, and Large Language Models
Current AI systems, particularly large language models, operate in a way that resembles System 1 thinking. They produce outputs rapidly based on pattern recognition and statistical associations. They are remarkably fluent. They can sound deeply confident. But they lack the deliberate, step-by-step reasoning that System 2 provides. They do not know when to slow down, when to question their own output, when to say “wait, let me think about this more carefully.”
Kahneman pointed out that this is not just a technical limitation. It is a structural one that mirrors the most common failure mode in human thinking. Humans default to System 1 when they should engage System 2. AI systems, as currently designed, are stuck in System 1 mode almost entirely. The question is whether we can build AI that knows when to shift gears. And whether humans interacting with AI will be disciplined enough to apply their own System 2 thinking to evaluate what the machine produces.
I think the answer to the first question is yes, but it will take time. Chain-of-thought prompting and reasoning frameworks are early steps in this direction, essentially forcing the model to slow down and show its work, as in the sketch below.
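Here is a minimal sketch of the idea. The prompts are the substance; `call_model` is a hypothetical placeholder for whatever model API you use, and the question is Kahneman’s classic bat-and-ball problem, a textbook System 1 trap.

```python
# Kahneman's bat-and-ball problem: System 1 blurts out the wrong
# answer ($0.10); System 2 works out the right one ($0.05).
QUESTION = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

# System 1 style: ask for the answer directly. Fast and fluent,
# and more likely to echo the intuitive (wrong) answer.
direct_prompt = f"{QUESTION}\nAnswer with a dollar amount only."

# System 2 style: force the model to slow down and show its work
# before committing to an answer.
cot_prompt = (
    f"{QUESTION}\n"
    "Reason step by step. Write out each intermediate equation, check it "
    "against the constraints in the question, and only then state the "
    "final answer on its own line."
)

def call_model(prompt: str) -> str:
    """Hypothetical placeholder; wire this to your model provider's API."""
    raise NotImplementedError

if __name__ == "__main__":
    for label, prompt in [("direct", direct_prompt), ("chain-of-thought", cot_prompt)]:
        print(f"--- {label} prompt ---\n{prompt}\n")
```

The discipline the second prompt encodes, slow down, verify, then answer, is exactly the System 2 behavior Kahneman describes.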
The answer to the second question worries me more. If AI outputs sound confident and fluent, and humans evaluate those outputs using their own pattern matching and gut feeling, the result is a dangerous feedback loop: shallow thinking validating shallow thinking.
Intuition: When to Trust It, When to Override It
We spent considerable time on intuition. Kahneman does not dismiss it. He argues that intuitive expertise is real and valuable, but only when it has been built through sustained practice in environments with regular, reliable feedback. A chess master’s intuition about a position is trustworthy because it was built through tens of thousands of games with clear outcomes. A doctor’s intuition about a diagnosis can be trustworthy for the same reasons. A stock picker’s intuition, operating in a noisy and unpredictable environment, usually is not.
This distinction is one of the most practically useful ideas in Kahneman’s work. In technology, some domains offer fast, reliable feedback: user behavior on a website, server performance metrics, A/B test results. Intuitions built in these domains tend to be sound. Other domains offer slow, noisy feedback: strategy, organizational design, market positioning. Intuitions here are less reliable, and the danger is that experienced leaders do not recognize the difference. They trust their gut equally in both contexts.
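A back-of-the-envelope simulation makes the point. The numbers are arbitrary, chosen only to contrast a clean, high-volume feedback environment (an A/B test) with a noisy, low-volume one (strategy bets whose outcomes are garbled by luck and lag).

```python
import random

random.seed(0)

# The world has a true rule: option A beats option B 70% of the time.
TRUE_P_A = 0.7

def learned_belief(trials: int, feedback_noise: float) -> float:
    """Return the learner's final belief that A is better, after `trials`
    rounds of feedback that is flipped with probability `feedback_noise`."""
    a_wins = 0
    for _ in range(trials):
        a_looked_better = random.random() < TRUE_P_A
        if random.random() < feedback_noise:  # noisy environments garble the signal
            a_looked_better = not a_looked_better
        a_wins += a_looked_better
    return a_wins / trials

# Fast, reliable feedback: the learned intuition converges on the truth (~0.7).
print("clean feedback:", learned_belief(trials=1000, feedback_noise=0.0))
# Slow, noisy feedback: the same learner ends up near a coin flip,
# since the observed rate is only 0.7 * 0.6 + 0.3 * 0.4 = 0.54.
print("noisy feedback:", learned_belief(trials=30, feedback_noise=0.4))
```

The simulation is crude, but it captures Kahneman’s point: the validity of the environment, not years of experience, determines whether intuition can be trusted.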
Emotional intelligence plays into this as well. Leaders often confuse confidence with competence, both in themselves and in the people they evaluate. System 1 is drawn to confidence. It feels right to follow the person who speaks with certainty. But Kahneman’s work shows that confidence and accuracy are poorly correlated. The leaders who understand this distinction build better teams and make better decisions. They learn to weigh evidence over charisma.
What I Carry Forward
I have been following Kahneman’s work for over ten years. Reading his papers, watching his talks, applying his frameworks to my own work in technology and leadership. Getting to spend hours in direct conversation with him, exploring how his ideas apply to the current moment in AI and organizational leadership, was something I will think about for a long time.
As AI becomes more powerful and more integrated into how we work, the lessons from behavioral economics become more important, not less. Understanding how humans think, where we go wrong, and why we go wrong is essential context for anyone building or deploying AI systems. The organizations that succeed with AI will not be the ones with the most compute or the biggest models. They will be the ones that understand the cognitive landscape on both sides of the human-machine interface.
Nobody has illuminated that landscape more clearly than Daniel Kahneman.