The Machine Eats the Median. A Conversation on the End of Average Software Engineering.
Everyone is debating whether AI will replace software engineers. That is the wrong question. The right question is which software engineers — and why. The answer was understood decades ago, buried in the economics of talent markets. Almost nobody in technology has bothered to read it.
There exists a rather elegant body of work in microeconomics, most notably Sherwin Rosen's 1981 analysis of superstar markets, that explains why, in certain markets, small differences in ability produce vast differences in reward. The mechanism is not mysterious. Two forces conspire. First, consumers regard lesser talent as a poor substitute for greater talent — not a slightly worse alternative, but a categorically different one. Second, certain technologies of delivery allow a single producer to service an enormous market without proportional increases in cost. When these forces combine, the distribution of income stretches dramatically rightward. A handful capture nearly everything. The rest are left to contemplate what went wrong over increasingly affordable lunches.
Claude Code now writes between 70% and 90% of production code at its parent company. Spotify reports a 90% reduction in engineering time on certain workflows. Anthropic's own 2026 Agentic Coding Trends Report reveals that developers use AI in roughly 60% of their work, and that 27% of AI-assisted output consists of tasks that would never have been attempted otherwise. These are not productivity enhancements. They are technologies of joint consumption — one agent, infinite reach, near-zero marginal cost of replication. The economics that follow are as predictable as they are uncomfortable.
What follows is a dialogue. Alan Turing — resurrected for the purpose of thinking clearly about machines — meets an L10 Google Fellow: someone who has spent a quarter-century at the summit of industrial software engineering. They disagree on nearly everything except the conclusion.
I. The Convexity Problem
Turing: You appear troubled. For a Fellow at the world's most profitable advertising concern, you carry a remarkably furrowed brow.
The Fellow: I watched a junior hire ship an entire microservice in forty minutes using an AI coding agent last Tuesday. The architecture was clean. Tests passed. It would have taken my team a sprint. I am not anxious for myself. I am anxious about what, precisely, I am meant to tell the two hundred engineers in my organisation.
Turing: Then you have already intuited something that most of your colleagues have not. When the technology of delivery permits one producer to serve an arbitrarily large market, small differences in quality cease to produce small differences in reward. The relationship between talent and income becomes convex. Not linear. Not even quadratic in the interesting cases — cubic. The gap between good and great does not merely widen. It detonates.
The Fellow: And the gap between median and good?
Turing: Collapses to irrelevance. Consider the mechanics. Before these instruments arrived, a median engineer's output was bounded by typing speed, context-switching costs, and the sheer friction of translating intent into syntax. Those constraints created a floor beneath the median. An engineer who was twenty percent less talented than the best might produce only ten percent less output, because the bottleneck was mechanical rather than cognitive. The bottleneck was, in effect, a subsidy to mediocrity.
The Fellow: And now the bottleneck has evaporated.
Turing: Worse. It has inverted. When an AI agent handles implementation, the binding constraint becomes what one might call the quality index of the practitioner — the irreducible residue of judgment, taste, and architectural instinct that no tool can supply. What distinguishes engineers now is not how rapidly they produce code, but how precisely they think. Problem decomposition. System design. The faculty for specifying intent with sufficient rigour that a machine can execute it faithfully. And here, the distribution of talent is not merely unequal. It is grotesquely skewed.
II. The Imperfect Substitution Trap
The Fellow: Surely the optimists have a point. When you reduce the cost of producing software, you create demand for more of it. More projects become economically viable. The pie, as they say, grows.
Turing: The pie does grow. But the manner in which growing pies get divided is rather more instructive than the fact of their expansion. The critical insight concerns substitution. Lesser talent is a poor substitute for greater talent in any market where consumers can perceive quality differences. Hearing a succession of mediocre recitals does not, I am afraid, amount to a single outstanding performance. A surgeon who is ten percent more successful in saving lives does not command a ten percent premium. People will pay vastly more — because the alternative is not ninety percent as good. It is categorically worse.
The Fellow: Applied to software: a system designed by someone who genuinely understands distributed consensus is not incrementally superior to one designed by a competent generalist. It is a different artefact entirely.
Turing: Quite. And when AI handles the implementation layer, the design layer is all that remains for human judgment. One cannot substitute three average system designers for one exceptional one. The market simply will not accept it. Users do not experience "average distributed systems." They experience systems that function under load or systems that collapse. The output is binary in a manner that renders substitution even more imperfect than in the performing arts.
The Fellow: So the expanding pie creates more projects, but the concentration of talent determines who captures the value from those projects.
Turing: More demand. Fewer winners. This is not a contradiction. It is precisely what one ought to expect when one simultaneously expands the market and collapses the cost of production.
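The pairing of imperfect substitution with joint consumption can be sketched as a toy market. Every number here is an illustrative assumption: a thousand consumers who always prefer the most talented producer they can reach, and a capacity limit that represents the pre-AI congestion of personal attention.

```python
# Toy market (illustrative assumptions only, not from the article).

def revenue_shares(talents, capacity, consumers=1000):
    """Each consumer buys from the most talented producer with spare
    capacity (imperfect substitution); returns each producer's share
    of the market."""
    ranked = sorted(range(len(talents)), key=lambda i: -talents[i])
    sold = [0] * len(talents)
    for _ in range(consumers):
        for i in ranked:
            if sold[i] < capacity:
                sold[i] += 1
                break
    total = sum(sold)
    return [s / total for s in sold]

talents = [1.0, 0.9, 0.8, 0.7]

# Pre-AI: personal attention congests at 300 clients per producer,
# so demand spills down to lesser talents and shares stay spread out.
print(revenue_shares(talents, capacity=300))

# Post-AI: the agent removes the capacity limit (joint consumption),
# and the single best producer serves the entire market.
print(revenue_shares(talents, capacity=10**9))
```

The market grows or shrinks without changing the logic: as long as consumers can rank quality and the best producer's reach is unbounded, the second-best captures nothing, which is the "more demand, fewer winners" outcome in miniature.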
III. Joint Consumption and the Death of the Artisan
The Fellow: Let me push back. Software engineering is not analogous to music. A concert violinist's performance can be broadcast to millions. Enterprise software requires bespoke customisation, domain knowledge, integration with legacy systems of truly heroic antiquity. One cannot broadcast a good architecture.
Turing: That was true when architecture was inseparable from implementation. But observe how these new instruments actually function. A single well-specified design pattern, a single well-articulated intent, can generate implementations across hundreds of different contexts. The costs of production do not rise in proportion to the size of the market served. The architect's insight becomes what one might call a quasi-public good — excludable, certainly, but with near-zero marginal cost of application. This is the precise mechanism by which superstars emerge in any field: the technology of delivery decouples effort from reach.
The Fellow: So the senior architect who once had to be physically present on every engagement — reviewing pull requests, mentoring juniors, sitting through interminable design reviews — can now encode their judgment into agents and let those agents operate at scale.
Turing: And this alters the economics entirely. Revenue accelerates with quality when the technology permits broad distribution. Previously, the architect's market was constrained by the diseconomy of personal contact — congestion, if you will. A brilliant architect could serve only so many teams before the quality of their attention degraded. AI agents eliminate that congestion. The personal market of the finest engineers becomes, in principle, boundless.
The Fellow: Which means the income distribution follows.
Turing: It is already following. Entry-level AI-native developers command $90,000 to $130,000. Traditional developers manage $65,000 to $85,000. At the summit, engineers who can design systems for autonomous agents are commanding packages that would have been considered vulgar five years ago. Meanwhile, credible voices suggest the very title "software engineer" may not survive the calendar year. Not because there is no work. Because the work has bifurcated into two activities that have almost nothing in common.
IV. What the Median Engineer Gets Wrong
The Fellow: Most engineers I know are responding in one of two ways. Either panic or denial. The panic camp is frantically updating their LinkedIn profiles to read "AI-augmented engineer." The denial camp insists, with touching conviction, that AI cannot possibly handle real-world complexity.
Turing: Both camps are committing the same error. They conceive of AI as an instrument that augments the existing distribution of work. It is nothing of the sort. It is a technology that transforms the distribution itself. The competitive model — products are undifferentiated, everyone benefits proportionally from productivity gains — simply does not apply. Software engineering has never been a commodity market. Its products are radically differentiated by quality, and consumers, whether end-users or enterprises, can perceive the difference with ruthless clarity. In such a market, any technology that expands the reach of the best concentrates reward at the top. This is not a bug. It is a mathematical inevitability.
The Fellow: So "learning to use AI tools" is necessary but wildly insufficient.
Turing: It is the equivalent of a mediocre violinist purchasing a finer instrument. The instrument matters, of course. But the market does not reward instruments. It rewards the quality of what emerges from them. The engineers who will flourish are not those who learn to prompt competently. They are those whose judgment, taste, and capacity for system-level reasoning were always exceptional — and who now possess a technology that strips away every constraint except talent itself.
V. The Uncomfortable Prescription
The Fellow: You are describing a future that is, by any reasonable measure, deeply unfair.
Turing: I am describing a future that is deeply consistent with how markets have behaved whenever technologies of joint consumption arrive. The wireless did this to musicians. Television did this to actors. The internet did this to journalists. Each time, the technology expanded the market, hollowed out the middle, and minted superstars. What Marshall observed in 1890 remains stubbornly true: new facilities for communication permit those who have attained a commanding position to extend their influence over a vastly wider area. AI is merely the latest instance — and, I rather suspect, the most aggressive.
The Fellow: So what does one tell a competent, experienced, mid-career software engineer?
Turing: Three things. First, cease optimising for breadth and begin optimising for depth. In a market that rewards the exceptional, the generalist is the first casualty. The market pays handsomely for the individual who is unmistakably the finest at something specific. It pays indifferently for the person who is reasonably capable at everything. Second, ascend the abstraction stack. The locus of value has migrated from writing code to specifying what code ought to accomplish and why. This is not prompt engineering — that ghastly phrase. It is systems thinking, domain mastery, and the discipline of decomposing ambiguous problems into precise specifications. Third, and most uncomfortably: accept that this transition is not gradual. These concentration effects emerge abruptly, once the relevant technology crosses a threshold. We have crossed it. The evidence is not equivocal.
The Fellow: And what does one tell the leaders who manage these engineers?
Turing: That headcount is now a liability masquerading as an asset. The six-person team comprising one exceptional architect and five AI agents will outperform the fifty-person department every time. Not because the fifty are incompetent — many are perfectly adequate — but because the production function has changed beneath their feet. In markets governed by these dynamics, the most talented operate in the largest markets and capture the greatest share. The converse is equally true: organisations that distribute work across many average performers are fighting the economics of their own industry. They are, to put it plainly, building orchestras of second violinists and wondering why nobody buys tickets.
VI. The Only Question That Matters
The Fellow: So what is the question everyone ought to be asking?
Turing: Not "Will AI replace software engineers?" That question presupposes a homogeneous profession, which is rather like asking whether electricity will replace craftsmen. The question is: Where do I sit on the talent distribution, and does this technology make the concentration of returns work for me or against me? If you occupy the right tail — if your judgment, your capacity to specify intent, your understanding of systems is genuinely distinguished — these tools represent the greatest amplifier in the history of the profession. Your market has just become infinite and your marginal cost of delivery has collapsed to zero. If you occupy the middle of the distribution, they represent an existential threat. Not because they can replicate what you do. Because they can replicate it well enough that the market no longer requires your services when a superstar plus an agent can satisfy the entire demand.
The Fellow: The extensive margin. When returns become sufficiently large for the best, the less talented are driven out altogether.
Turing: Correct. The machine does not eat everyone. The machine eats the median. And in a profession that has spent two decades pretending the median is quite good enough — sheltering behind process, Agile ceremonies, and the sheer friction of implementation — that is a reckoning long overdue.
One is reminded of what is sometimes said about the English weather: everyone discusses it, nobody does anything about it, and those who prepared for rain are the only ones who remain dry.
The economics of talent concentration are not new. The technology that triggers them is. Act accordingly.