Execution Is Cheap. Intent Is Scarce.
AI and the New Bottleneck in Software Development
Since leaving Meta in 2025, I’ve spent time exploring modern AI engineering tools and following the work of others doing the same. A pattern has emerged:
We are in the middle of a productivity discontinuity.
AI is not just another incremental tooling improvement. It meaningfully compresses the cost of execution in software development. The analogy that feels closest is the spreadsheet revolution — but at greater scale and speed. Like spreadsheets, AI will displace work. It will not eliminate humans, but it will fundamentally change how knowledge work is performed.
The key question is not whether AI changes software development. It already has.
The real question is: What becomes the bottleneck?
To explore this topic, let's use the Software Development Lifecycle (SDLC) as an example.
A Framework for Thinking About AI in the SDLC
Consider a simplified SDLC:
We can evaluate each phase across four dimensions:
Consider this thought-exercise assessment of each dimension as it relates to the SDLC:
Clearly we can debate the specific numbers, but a pattern emerges around two core variables: what percentage of the SDLC can be automated, and what percentage cannot. The numbers above are a starting point, not a definitive statement of where this will ultimately net out. Even so, from this thought exercise we can see:
Today’s frontier models — as highlighted by recent industry commentary — are increasingly capable of architectural reasoning, multi-step planning, and end-to-end code generation. That absolutely pushes automation deeper than we might expect, possibly eating into the 40% we assumed was protected.
But even if AI absorbs most execution and parts of design, one layer remains:
The bottleneck in software development is not code generation. It is validated intent and accountable decision-making.
AI can optimize against a goal. It cannot originate legitimate goals within an organization.
What will the future hold?
It is reasonable to challenge how much of the 40% is as protected for humans as we might think.
AI is increasingly capable of:
That likely increases the 60% to something higher over time.
That said, even in a world where AI reasons flawlessly, three constraints remain:
These are not computational problems. They are fundamentally social and organizational problems that should be decided by humans. Once machines start deciding these types of problems, we will need to start discussing how to legislate AI rights, or get ready to arm ourselves for judgment day.
Strategic Implications for Engineering Teams
In the long run, if execution keeps getting as cheap as it has been, we have three options ahead of us, A through C. I would argue that most companies today are embarking on Option A because it is the easiest in the short term, but the most successful companies of the future will learn how to operate in the realm of Option C.
Let’s take a super quick look at these options:
Option A: Shrink Teams, Improve Margins
In this model, execution per developer improves. Headcount reductions lower costs while product scope remains constant.
This option captures efficiency but it doesn’t unlock new growth opportunities.
Option B: Keep Teams, Increase Scope
With Option B, companies maintain their headcount levels but decide to expect more output from every existing team.
This could lead to increased scope, product extensions, and growth, but it will come at the cost of increased coordination complexity.
Option B might be a middle state for some companies, but most teams will likely revert to Option A or move into Option C, as this stage is unsustainable for engineering teams.
Option C: Deploy Many AI-First Teams, Don't Reduce R&D Spend
Option C is where companies choose to deploy many smaller, highly autonomous teams. This lowers coordination overhead, increases parallel intent formation, and lets companies ship more product features, product lines, and product offerings.
If AI compresses execution cost by 50–80%, the constraint shifts to how many coherent bets an organization can form and validate in parallel.
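The compression arithmetic above can be sketched in a few lines. This is purely illustrative: the 60% execution share and the 50–80% compression range come from the figures discussed earlier, and everything else (budget, team cost) is an arbitrary assumption, not data.

```python
def parallel_bets(budget, cost_per_team, execution_share, compression):
    """How many teams a fixed R&D budget funds once AI compresses execution cost.

    budget          -- total R&D spend (arbitrary units)
    cost_per_team   -- pre-AI cost of one team
    execution_share -- fraction of team cost that is execution work (assumed ~0.6)
    compression     -- fraction of execution cost removed by AI (0.0 to 1.0)
    """
    # AI only compresses the execution slice of a team's cost,
    # so the new per-team cost keeps the non-execution slice intact.
    new_cost = cost_per_team * (1 - execution_share * compression)
    return int(budget / new_cost)

# A budget that funds 10 teams pre-AI (100 units / 10 per team):
for compression in (0.0, 0.5, 0.8):  # the 50-80% range from the text
    teams = parallel_bets(100.0, 10.0, execution_share=0.6, compression=compression)
    print(f"{compression:.0%} compression -> {teams} parallel teams")
```

Under these toy numbers, 50% compression funds 14 teams and 80% funds 19 — the budget stays flat while the number of coherent bets roughly doubles, which is exactly where the constraint shifts to intent formation rather than spend.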
This suggests a counterintuitive outcome:
The companies that win in the long run may not be those that shrink R&D. They may be those that multiply focused teams. Companies that get stuck on Option B or only pursue Option A will likely be left in the dust by companies operating in an Option C model.
Bottom Line: AI should not eliminate software engineering; it should drastically alter it.
AI changes what is scarce. With AI in the limit:
If AI pushes well beyond 60% automation, the enduring constraint is not “design” or “coding.”
It is the human capacity to:
Unless machines acquire autonomous, legitimate intent — which opens an entirely different philosophical debate — software development will remain bounded by human decision authority.
The real strategic mistake would be treating AI purely as a cost-reduction tool.
The opportunity is not larger margins via cost cutting. The real pie growth opportunity is more directed intent per unit of coordination.
Execution is getting cheaper.
Intent is not.
Good thoughts... it will probably increase scope and shrink margins, those aren't dualistic options IMO. As far as the PDLC, I anticipate in 12 months AI will be deciding all strategic opportunities, determining test plans, and running a continuous test/optimization loop with humans as the 'checkpoints' rather than the arbiter. Crazy times.
Very interesting - thanks for writing. Your observations are consistent with mine. Especially, 1) As execution gets cheaper the bottleneck shifts upstream into validated intent and accountable decision-making. And 2) the long-run winners may be the firms that don’t cut R&D, instead they multiply small, AI-first teams to run more coherent bets in parallel rather than chasing margin via headcount reduction.
Interesting that you use the SDLC as an example. Do you believe the real boundary lies before the code: in problem definition, meaning, and responsibility?
Interesting article!! As a student, it makes me wonder how higher education should adapt to encourage futuristic/intentional thinking for computer science majors working in an AI-industry. It seems like different skills (especially in the business realm) will be of greater importance as AI develops.