Move 78: When human creativity beat the algorithm
Everyone remembers AlphaGo defeating Lee Sedol in March 2016, a moment that felt like watching human mastery yield to machine superiority.
The scoreboard told a clear story: artificial intelligence won decisively, four games to one, in a contest that seemed to settle the question of whether human intuition could compete against algorithmic optimization. But the scoreboard missed the most important game, and the single move that revealed something more interesting than machine dominance.
Game Four, Move 78. Lee Sedol played a move that professional commentators initially dismissed as a mistake, the kind of desperate gambit that appears when someone is losing badly and has nothing left to risk. AlphaGo had been winning with machine precision, its internal evaluation putting its chance of victory around seventy percent. Then Lee played Move 78, and something unprecedented happened. The algorithm's confidence collapsed. Its win estimate plunged from seventy percent toward zero over the moves that followed. The machine that had processed millions of games and calculated optimal strategies for every conceivable position encountered something it could not process: human creativity operating outside the patterns it had learned.
Lee won that game. Not by out-calculating the algorithm. Not by processing positions faster or evaluating more possibilities. He won by thinking in ways the machine could not anticipate, by seeing a pattern that existed nowhere in the training data, by creating rather than optimizing. The match still belonged to AlphaGo, but Game Four belonged to something the algorithm could not replicate. It belonged to the distinctly human capacity to generate novelty that no amount of pattern recognition can predict.
This distinction matters more now than it did in 2016, because the question facing every professional working with artificial intelligence is not whether machines can outperform humans at specific tasks. They demonstrably can. The question is what remains uniquely valuable about human contribution when algorithms handle optimization better than we ever will. Move 78 provides the answer, not through theory but through demonstration of what human thinking contributes that computation cannot.
What the algorithm could not see
AlphaGo learned Go by processing millions of historical games and playing millions more against itself, identifying patterns that correlated with winning positions and optimizing its play based on those patterns. This approach works extraordinarily well for mastering games with well-defined rules and evaluating positions based on historical precedent.
The algorithm became superhuman at recognizing which patterns from its training predicted success and selecting moves that maximized win probability based on those patterns.
Move 78 existed outside this framework entirely. The position Lee created had never appeared in professional play. The pattern he exploited was not in AlphaGo's training data because no human had discovered it before that moment. The algorithm could not evaluate the move using historical precedent because no precedent existed. It faced something genuinely novel, and its response revealed a fundamental limitation of machine learning systems regardless of their sophistication.
When AlphaGo encountered a position outside its learned patterns, it could only respond by attempting to map the new position onto patterns it recognized from training. The move looked suboptimal according to conventional Go wisdom, so the algorithm initially evaluated it as unlikely to succeed. But as the game progressed and Lee's strategy unfolded, AlphaGo's calculations revealed that the position was not merely unusual but actively winning for Lee. The algorithm's win probability collapsed not because it made computational errors but because the position fell outside the conceptual space it had learned to navigate.
This is not a failure of AlphaGo specifically but a characteristic of how machine learning systems function generally. They optimize within learned parameter spaces. They identify patterns in training data and generalize those patterns to new situations that resemble what they have seen before. They excel at this optimization, often superhuman at finding the best solution within known frameworks. What they cannot do, by design, is recognize that the framework itself might be incomplete or that solutions might exist outside the patterns they learned during training.
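The limitation described above can be made concrete with a toy sketch (not AlphaGo, and deliberately simplistic): a nearest-neighbour "evaluator" that has only ever seen a narrow band of positions. Faced with something outside that band, it can do nothing but snap the novel input onto the closest pattern it knows, and it reports that match without any signal that the input was unlike anything in its training. All names and numbers here are illustrative assumptions.

```python
# Hypothetical training data: (position_feature, win_probability) pairs.
# In this toy, a "position" is a single number; real systems use far
# richer representations, but the failure mode is the same in spirit.
TRAINED_POSITIONS = [(1.0, 0.30), (2.0, 0.45), (3.0, 0.55), (4.0, 0.70)]

def evaluate(position: float) -> float:
    """Return the win probability of the nearest known position."""
    nearest = min(TRAINED_POSITIONS, key=lambda p: abs(p[0] - position))
    return nearest[1]

# Inside the training distribution, the estimate is reasonable:
print(evaluate(2.1))    # 0.45

# Far outside it, the evaluator still answers confidently, by mapping
# the novel position onto the closest learned pattern. Nothing in its
# machinery can say "this lies outside anything I have seen":
print(evaluate(100.0))  # 0.7
```

The point of the sketch is that the model never refuses or flags the out-of-distribution query; it simply optimizes within the space its training data defined, which is exactly the behavior the paragraph above attributes to learning systems in general.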
Human creativity operates differently. We do not merely optimize within known patterns but generate new patterns by combining concepts in ways that have never been combined before, by seeing connections that do not exist in any dataset, by intuiting possibilities that cannot be derived from historical precedent. Lee saw something in the position that had never been articulated in Go theory and would not appear in any training corpus. He created new strategy rather than optimizing existing strategy, and that creation was invisible to the algorithm until the consequences became computationally undeniable.
Humans create. We define new problem spaces. We identify objectives that were not previously articulated. We generate approaches that do not exist in any training corpus because they emerge from combining concepts in novel ways or seeing possibilities that have never been documented. This capacity for genuine novelty, for thinking outside frameworks rather than optimizing within them, represents the irreducible human contribution in environments increasingly dominated by algorithmic capability.
This distinction has immediate practical implications for how we work with artificial intelligence rather than compete against it. The mistake many organizations make is attempting to use humans and AI for the same tasks, treating them as substitutes where the algorithm will eventually outperform the human through superior computation. The correct approach treats them as complements where each contributes what the other cannot.
AI excels at pattern recognition across datasets too large for human analysis. It identifies correlations humans would never detect. It optimizes complex systems with thousands of variables far beyond human capacity to evaluate simultaneously. It processes information at speeds and scales that make human computation look trivial. These capabilities are transformative and will continue expanding into domains currently considered human territory.
Humans excel at recognizing when the patterns are incomplete. We identify when optimization within current frameworks produces diminishing returns and different frameworks might be needed. We generate hypotheses that do not derive from existing data. We see connections between domains that appear unrelated in training corpora. We ask questions that have never been asked and pursue answers in directions that optimization would not suggest because the historical data provides no support.
The effective collaboration involves AI handling the optimization humans cannot perform while humans handle the creation AI cannot generate. The algorithm processes data, identifies patterns, calculates probabilities, and recommends actions based on learned correlations. The human evaluates whether the patterns are complete, whether the problem is correctly framed, whether the optimization is addressing the right objective, and whether opportunities exist outside the algorithmic search space.
What this means when working with AI
Lee Sedol's victory in Game Four demonstrates this collaboration in practice. He did not beat AlphaGo by processing positions faster or evaluating more possibilities or calculating probabilities more accurately. He beat it by seeing something the algorithm could not, by creating a strategic pattern that existed nowhere in the training data, by thinking outside the framework that optimization operates within.
This approach generalizes beyond Go to any domain where artificial intelligence is deployed. The algorithm handles the computational heavy lifting that humans cannot perform at scale. It processes customer data to identify purchasing patterns. It analyzes market trends to predict price movements. It evaluates resumes to surface qualified candidates. It optimizes supply chains to reduce costs. These are tasks where pattern recognition and optimization at scale create enormous value, and humans attempting the same work manually would be both slower and less accurate.
For professionals navigating this environment, the strategic question is not how to compete with AI at tasks the algorithm performs better but how to maintain and develop the distinctly human capabilities that algorithms cannot replicate. This means cultivating the ability to think outside learned frameworks, to generate novel combinations of existing concepts, to identify when optimization reaches diminishing returns, and to recognize opportunities in spaces the algorithm does not search because training data provides no support.
The capabilities algorithms cannot learn
The characteristics that made Move 78 possible reveal what human contribution looks like in practice. Lee did not optimize. He created. He did not calculate probabilities within known patterns. He recognized that a pattern not in the corpus could exist and be valuable. He did not rely on what Go theory said should work. He tested what might work despite theory suggesting otherwise.
These capabilities emerge from how human cognition operates differently from machine learning. We form abstract concepts that apply across domains even when the surface features look completely different. We combine ideas from unrelated fields to generate approaches that would not appear in domain-specific training data. We pursue directions that current evidence does not support because intuition suggests possibilities the data has not captured. We change our minds about what success means partway through problem-solving when the initial framing proves inadequate.
Machine learning systems do none of these things by design. They optimize within the conceptual space defined by training data and reward functions. They identify correlations in that data and apply those correlations to new situations that resemble the training examples. They do this extraordinarily well, often better than humans can, but the process is fundamentally optimization, not creation. The algorithm cannot decide that the problem was framed incorrectly, that success should be defined differently, or that solutions might exist outside the training distribution. Those capabilities require human judgment operating on different principles than pattern matching and optimization.
This distinction creates specific strategic imperatives for professionals working with AI:
First, develop comfort operating in spaces where data provides little guidance and historical precedent does not exist. When Aeroporti di Roma launched its corporate incubator, there was no playbook for running startup scouting inside a regulated airport infrastructure company. No training data, no comparable case in Italy. The value was precisely in the willingness to define the problem before anyone else had, to run the first experiment rather than optimize an existing one. Algorithms excel when patterns are abundant. Humans add value at the frontier, before patterns exist.
Second, maintain breadth of knowledge across domains that appear unrelated to your primary expertise. When Italo entered the Italian rail market in 2012 as the first private high-speed operator, it had no legacy systems, no established audience, and no historical data to optimize against. Building its digital presence from zero meant importing mechanics from e-commerce, social media and mobile that the rail industry had never applied. The more diverse your knowledge base, the more connections you can generate that no domain-specific algorithm would ever surface.
Third, develop judgment about when optimization reaches diminishing returns and a different framework is needed. At Cassa Depositi e Prestiti, Italy's national promotional institution, the challenge was never improving existing digital channels; they barely existed. The organization was optimizing communication built for a pre-digital audience, refining instruments that the people it needed to reach had already stopped using. Recognizing that the entire framework needed replacement, not refinement, was the prerequisite for everything that followed: a full digital rebuild, a CRM built from scratch, a €1.5 billion bond campaign designed for retail investors who expected the same experience they had on any consumer platform. Algorithms will optimize indefinitely within whatever framework you give them. The human job is to recognize when the framework itself is the constraint, not the execution within it.
Fourth, cultivate willingness to pursue directions that current evidence does not support. At B&B Hotels, the data available when I joined pointed toward optimizing existing acquisition channels: the metrics were clear, the levers were known, the incrementalism was defensible. The hypothesis worth pursuing was different: that a mid-scale hotel chain expanding across three markets simultaneously needed to rebuild its commercial model around direct digital relationships rather than incrementally improve what was already there. That hypothesis had limited empirical support in the segment at the time. Machine learning needs data to make recommendations. Humans can bet on possibilities before the data exists, and those bets, when right, become the training data for everything that follows.
The irreplaceable human contribution
Lee Sedol retired from professional Go in 2019, citing his inability to compete with AI systems that had surpassed human capability. His retirement felt like acknowledgment that the era of human mastery in Go had ended, that machines had claimed territory that would not be recovered. But his retirement statement included something more interesting than concession to algorithmic superiority. He noted that AlphaGo taught him about gaps in his understanding, revealed strategic possibilities he had not considered, and expanded his conception of what the game allowed.
The machine made him better not by replacing him but by showing him patterns he could not see alone. The algorithm's strength in optimization revealed opportunities for human creativity that had not been explored. The collaboration between human intuition and machine calculation produced insights neither could generate independently. This is the model for working with AI that Move 78 demonstrates: not competition between human and machine but recognition that each contributes capabilities the other lacks.
We live in an environment where algorithmic optimization will continue expanding into domains currently considered human territory. Pattern recognition, prediction, classification, and optimization at scale will increasingly be handled by systems that perform these tasks better than humans can. Fighting this transition by attempting to out-optimize algorithms is futile. The computational advantages are overwhelming and growing.
What remains is the distinctly human contribution that algorithms cannot replicate regardless of how sophisticated they become. The ability to see patterns that do not exist in training data. The capacity to generate genuinely novel combinations rather than optimizing existing patterns. The judgment to recognize when frameworks need revision rather than optimization. The creativity to think outside the conceptual space that learning systems operate within.
Move 78 was not merely a good move in Go. It was a demonstration that human creativity operates in spaces algorithmic optimization cannot reach, and that this creativity remains valuable precisely because it generates what the algorithm cannot.
The question for professionals is not whether AI will replace human contribution but how to maintain and develop the capabilities that make human thinking irreplaceable.
The algorithm optimizes; humans discover. That distinction matters more now than it ever has.
Giulio Ranucci
Strategic insights on technology, human capability, and working with AI. Delivered bi-weekly.
📧 Newsletter | 🎧 Podcast
See you in two weeks.