Software at the Speed of Thought: Implications of the AI Coding Revolution
Imagine walking into a meeting with an idea for a new tool your team needs. By the time the meeting ends, the tool exists. Not a mock-up or a plan. Working software, ready to use. This is happening in companies today.
Over the last six months I've become convinced that we're in the midst of a historic collapse in the price of software. Not a 20% or 30% cost reduction, but 90%+. A shift that will radically change the economics of the software industry and of businesses at large. Not only will it unlock projects that were previously economically unviable. It'll also have profound consequences for the workforce.
Due to the importance of this topic, I will be exploring it across multiple newsletters. This first piece sets the scene, helping non-technical readers (and some technologists too) understand what's been happening. I trace how AI coding has evolved through four eras and reveal why some companies are already capturing 40%+ productivity gains while others see nothing. Future newsletters will examine the strategic consequences for businesses, industries, and careers.
With the scene set, let's pull back the curtain and begin.
The Last Time Everything Changed
In December 1953, John Backus wrote a memo to his manager at IBM that would fundamentally alter the economics of computing. Frustrated by the mechanical tedium of assembly language programming, he proposed something radical. A system where programmers could express what needed to be done. Not how to do it.
The sceptics were fierce. Hand-crafted assembly code was faster, more efficient, and gave programmers complete control. Why would anyone accept the 20% performance penalty of automated translation?
But Backus understood something his critics missed. The cost equation of computing was shifting. In 1953, computer time cost $300 per hour whilst programmer time cost $2-3 per hour. By the time FORTRAN shipped in 1957, that relationship was already inverting. Hardware was getting cheaper exponentially. Programmer costs were rising linearly.
The results were impressive. Development time collapsed from weeks to hours. Programs that were economically impossible to write suddenly became possible. NASA could model trajectories. Banks could run risk analyses. Scientists could simulate weather patterns. The cost reduction didn't just save money. It enabled entirely new categories of solutions.
Every modern programming language traces its lineage to this moment. Each generation made programming cheaper, more abstract, more focused on describing the problem rather than the solution. Each cost reduction opened new possibilities we couldn't previously afford to explore.
Today, we're witnessing the same economic transformation, compressed from decades into months. Except the leap isn't from manual to automated. It's from human to artificial. To understand how dramatic this shift has become, we need to explore how AI coding has evolved. And where it's heading next.
The Four Eras of AI Coding
The evolution of AI coding has compressed decades of normal innovation into just four years. Each era hasn't replaced the previous one; they've overlapped and stacked, creating an increasingly sophisticated capability set that's transforming how software gets built.
The Copy-Paste Era (2021-2024)
When AI first entered programming, it functioned like a brilliant colleague who could only see one piece of the puzzle at a time. Copy your code into AI. Get suggestions back. Then paste them into your project. Test. Debug. Repeat. Developers could describe their intent in natural language, "Create a function that sorts user data by engagement score," and receive code suggestions. Sometimes it worked perfectly. Often it needed adjustment. It always required manual integration.
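To make the workflow concrete, here is a sketch of the kind of snippet an assistant of that era might return for the sorting prompt above. Everything here is hypothetical: the data shape (a list of dicts) and the `engagement_score` field name are assumptions, and in the copy-paste era the developer would still have had to manually integrate and test this code.

```python
# Hypothetical output for the prompt:
# "Create a function that sorts user data by engagement score".
# Assumes each user is a dict with an "engagement_score" key.

def sort_users_by_engagement(users):
    """Return users sorted by engagement score, highest first.

    Users missing a score are treated as having a score of 0.
    """
    return sorted(users, key=lambda u: u.get("engagement_score", 0), reverse=True)

# Manual check the developer would run after pasting the code in:
users = [
    {"name": "Ana", "engagement_score": 72},
    {"name": "Ben", "engagement_score": 91},
    {"name": "Cal", "engagement_score": 55},
]
print([u["name"] for u in sort_users_by_engagement(users)])
```

Trivial on its own, but representative: the assistant handled the translation from intent to code, while integration, testing, and debugging stayed entirely with the human.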
The process was tedious but transformative. Complex algorithms became immediately comprehensible through AI explanation. Debugging accelerated as AI could identify issues in isolated code segments. Natural language became a programming interface, reducing the mental overhead of translating business logic into code. Documentation time dropped. Knowledge transfer costs decreased.
Even with severe limitations including single-file visibility, no system awareness, and constant manual integration, there were meaningful productivity gains. Each limitation would become the blueprint for subsequent improvements.
The Vibe Coding Era (2024-2025)
The next breakthrough came from agentic coding tools that could see entire codebases and make coordinated changes across multiple files simultaneously. The process became conversational. You'd describe your vision. The AI would implement something. You'd respond with "make it more responsive" or "add social features here", and the AI would understand and adapt. This back-and-forth "vibing" allowed anyone to build sophisticated software through natural dialogue rather than formal requirements documents.
This shift democratised software creation. Marketing departments could build their own tools. Sales teams created custom dashboards. The traditional IT bottleneck, where business needs waited months for technical implementation, began to dissolve. Small, cross-functional teams could accomplish what previously required dedicated development resources.
The cost implications were profound. The time from idea to working software compressed from months to days to hours. But the informality created new challenges: scope creep, inconsistent code quality, and technical debt accumulating at conversational speed.
A pattern became clear. Vibe coding excelled at exploration but struggled with execution. Perfect for prototypes. Problematic when reliability matters. The market demanded more.
The Engineered Coding Era (2025-2026)
The current era combines AI capability with software engineering rigour. It starts with proper requirements documents, specifications, and quality standards. Organisations then deploy AI agents that mirror human structures. A primary coding agent supported by specialised sub-agents for security, performance, compliance, and documentation (to name a few). The critical innovation is parallel execution. Where human development requires sequential work, AI agents can tackle multiple aspects simultaneously.
Consider the economics. Five AI agents can build five features simultaneously whilst specialised sub-agents handle security, performance, and code quality in real-time. What once required sequential weeks now happens in parallel days. The compound effect means an individual developer becomes superhuman. Achieving what once required an entire team.
The workflow is evolving from human-writes-code to human-orchestrates-AI-teams. Developers specify requirements and validate outputs whilst AI handles implementation, testing and review. It isn't automation. It's multiplication of human judgement applied at machine scale.
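The economics of parallel execution can be illustrated with a toy simulation. This is a sketch, not a real agent framework: the "agents" below are stub functions whose work is simulated with a fixed delay, and the feature names are invented. The point is simply that total wall-clock time for a parallel agent team approaches the longest single task rather than the sum of all tasks.

```python
# Toy illustration of sequential vs parallel "agent" execution.
# The agents are stubs; time.sleep stands in for actual build work.
import time
from concurrent.futures import ThreadPoolExecutor

def build_feature(name, duration=0.2):
    """Stub for an AI agent building one feature (simulated with a delay)."""
    time.sleep(duration)
    return f"{name}: done"

features = ["auth", "billing", "search", "reports", "notifications"]

# Sequential (human-style) execution: total time is roughly the SUM of durations.
start = time.perf_counter()
sequential = [build_feature(f) for f in features]
sequential_time = time.perf_counter() - start

# Parallel (agent-team) execution: total time is roughly the LONGEST single task.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(features)) as pool:
    parallel = list(pool.map(build_feature, features))
parallel_time = time.perf_counter() - start

print(f"sequential: {sequential_time:.2f}s, parallel: {parallel_time:.2f}s")
```

With five equal tasks, the parallel run finishes in roughly one-fifth the time of the sequential run, which is the whole economic argument in miniature: more agents compress calendar time, not effort.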
The AI Software Engineer Era (2026 and beyond)
The future points towards AI as independent contributor rather than sophisticated tool. These "AI Software Engineers" will take business objectives and autonomously manage the entire software lifecycle. Interpreting requirements, architecting solutions, writing code, ensuring quality, handling deployment, and maintaining systems over time.
The economic transformation will be complete. The marginal cost of additional features approaches zero. Continuous delivery becomes truly continuous. Not daily or weekly releases, but constant evolution based on user behaviour and business metrics.
The Complete Economic Picture
The progression from today's AI coding tools to fully autonomous software engineers will drive one of the most dramatic cost collapses in software history. Just as FORTRAN delivered a step-change in development economics, each era of AI coding represents another step-change of cost reduction.
We can already see this transformation happening. Consider what the tech giants are reporting in their earnings calls. Microsoft's CEO Satya Nadella has said that 20-30% of the company's code is AI-generated. Meta expects half of its flagship AI model's development to be AI-written within twelve months. Google reported that over 30% of its new code was AI-generated as of April 2025.
When an AI can generate in minutes what a developer produces in days, the economics don't just shift; they shatter. When the friction between idea and implementation disappears, behaviour changes fundamentally. Teams stop asking "can we build this?" and start asking "what happens when we do?"
But here's where the story gets complicated. If this transformation is so dramatic, why aren't all companies seeing revolutionary results yet? The answer reveals a critical divide.
Why Half of Companies Are Already Being Left Behind
Recent Stanford research led by Yegor Denisov-Blanch delivers important context to the productivity claims. Examining data from over 100,000 developers across hundreds of companies, their findings show that AI coding delivers real gains, but not uniformly across all use cases. Average productivity improvements hover around 20%. Meaningful progress, but far from the gains we've outlined above.
But dig deeper into the data and a stark divide emerges. Creating brand new software? 40% productivity gains, aligning with our projections. Documentation and testing? Even better. But updating existing company systems? Near-zero impact. For enterprises where most of the IT spending goes towards maintaining what they already have, current AI coding tools offer limited benefits.
This explains the gap between our projections and today's mixed enterprise experience. The productivity gains we've described are already happening, but only for certain types of work and certain types of organisations. If you're running a bank on forty-year-old systems; if your manufacturing depends on heavily customised software; if your business logic is embedded in spreadsheets that no one fully understands; current AI coding tools offer almost nothing. The revolution that's supposed to transform software development can't yet handle the software most large companies actually run.
Start-ups and companies with modern systems are already living in the future. They're seeing the 40%+ productivity gains, heading towards the 900% gains that the AI Software Engineer Era promises. They're building prototypes in hours that used to take months. They're exploring ideas that were previously too expensive to consider.
Companies anchored to legacy systems are locked in the past. They can't leverage AI's capabilities because their foundation won't support it. Every month, the gap widens. The companies that can build for thousands pull further ahead of companies that need millions.
The Capability Explosion Hidden in Plain Sight
But here's what these productivity reports miss. They're measuring a moment in time. Like photographing a rocket on the launch pad and concluding it can't fly. The enterprises dismissing AI coding because it can't handle their legacy complexity today are making the same mistake Blockbuster made when they tested Netflix's DVD-by-mail service and concluded it could never match their in-store experience.
The capability trajectory reveals a consistent pattern of relentless improvement at an accelerating pace. A year ago, AI coding tools couldn't fix their own mistakes, couldn't work on large projects, couldn't remember what they were working on, couldn't understand how different parts of a system connected, and couldn't generate proper tests for their code.
Every one of those limitations? Gone today. Not perfect, but competent.
Yes, AI still struggles with big-picture system design that requires deep business understanding. It can produce overly verbose, poorly performing code. It struggles with complex regulatory requirements. It often writes code with security flaws. And legacy systems are a problem.
But solutions are coming (like Anthropic's recently announced agentic security reviews). Six months from now, many of these issues will be solved or, if not, greatly reduced.
The question isn't whether AI can handle your legacy systems. It's whether you'll be ready when it can. Because these capabilities only improve, and they're improving faster than most organisations can adapt.
Adapt or Die
The economics of software creation have already changed. The capabilities are expanding. The costs are collapsing. The transformation is accelerating. The companies that recognise this shift and restructure around software abundance will thrive. The companies that remain organised around software scarcity will struggle to compete.
We've established the technological reality. A 90%+ collapse in software creation costs is on the horizon. We've seen how this echoes the FORTRAN revolution but compressed from years into months. We've examined the productivity gains and identified which organisations are best positioned to capture them today versus those trapped by legacy constraints.
But understanding the technology is only half the equation. In the next newsletter, we'll explore how this cost collapse will trigger a fundamental reconfiguration of competitive advantage, and most critically, which strategic moves separate tomorrow's winners from those about to be disrupted.
The technology reality is clear. The strategic implications will reshape every industry that depends on software. Which is to say, every industry.