The Hardest Part of Software Delivery Was Never the Code
There is an emerging realization as organizations deepen their embrace of AI: the tools we are building are extraordinarily good at solving a problem that was never the real problem.
AI coding assistants have changed the speed of execution. For junior engineers especially, they remove friction around syntax, boilerplate, debugging, and generating a first draft of a solution. This is genuinely useful. But software projects have rarely failed because code was written too slowly.
Anyone who has delivered real programs in the wild knows this. The failures almost always happen earlier, at a layer no code has yet touched.
A leader approves the wrong project.
A product owner prioritizes the wrong feature.
An architect misunderstands how two systems will actually behave in production.
Sometimes a team solves the wrong problem with remarkable efficiency. The code ships faster! But the business value does not.
Judgment doesn’t live in engineering. It lives in governance.
When we talk about where programs fail, we are really talking about the governance layer: portfolio decisions, program structures, planning assumptions, stakeholder alignment. That is where the damage is done, often weeks or months before a single line of code is written.
Consider a scenario I have seen play out more than once in different forms. A platform modernization program gets approved at the portfolio level. The business case is compelling on paper. The timeline is aggressive but not impossible. Three months in, the program manager discovers that two of the core upstream systems were never formally confirmed as integration-ready. The dependency was flagged once in a risk register, assigned a low probability, and never revisited. By the time the issue surfaces in a steering committee, the program has already consumed 40 percent of its budget building toward an integration point that cannot be implemented.
The code was fine. The delivery teams were executing well. The failure was already locked in at the governance layer before the first sprint began.
That risk did not hide. It lived in a RAID (Risks, Assumptions, Issues, Dependencies) log. It just never received the scrutiny it deserved, because the decisions made upstream had already created a momentum that was hard to question.
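The failure mode here is mechanical enough to check for: a risk was logged, tagged low probability, and never revisited. As a minimal sketch, a periodic review could flag any register entry that has gone too long without re-examination. The field names and entries below are illustrative, not taken from any specific PPM tool, and the low-probability entry is included on purpose, since that is exactly the kind of risk that tends to be skipped.

```python
from datetime import date, timedelta

# Hypothetical risk-register entries; field names are illustrative.
risk_register = [
    {"id": "R-014",
     "summary": "Upstream systems not confirmed integration-ready",
     "probability": "low", "last_reviewed": date(2024, 1, 10)},
    {"id": "R-021",
     "summary": "Vendor licensing renewal slips",
     "probability": "medium", "last_reviewed": date(2024, 5, 2)},
]

def stale_risks(register, as_of, max_age_days=60):
    """Return risks not revisited within max_age_days of as_of.

    Note: low-probability entries are NOT filtered out -- the point
    is to resurface exactly the risks nobody re-examined.
    """
    cutoff = as_of - timedelta(days=max_age_days)
    return [r for r in register if r["last_reviewed"] < cutoff]

for r in stale_risks(risk_register, as_of=date(2024, 6, 1)):
    print(f"{r['id']}: {r['summary']} (last reviewed {r['last_reviewed']})")
```

With the sample data above, only R-014 is flagged: it was logged once in January and never touched again, while R-021 was reviewed recently.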
Years have been spent accelerating the wrong layer.
Organizations have invested heavily in delivery acceleration. DevOps tooling compressed release cycles. SAFe ceremonies created the appearance of alignment at scale. Agile frameworks promised adaptability. And for a period, those investments made sense because delivery friction was genuinely high.
But the pattern I have watched repeatedly across application migrations and platform modernization programs is this: the faster organizations got at delivering, the faster they delivered the wrong outcomes. The constraint was never throughput. It was judgment applied at the point where commitments are made.
AI coding assistants are powerful. But if they are the primary answer to a question about software program success, we are still looking in the wrong place.
AI is entering the governance layer.
That is where it gets interesting.
As a program manager who has worked to build genuine AI fluency, I use AI differently from the engineers or business analysts on my teams. Not to write code, and not to generate documentation as an afterthought. I use it at the layer where programs actually break.
Before a program steering committee presentation, I will use AI to pressure-test the business case. Given the assumptions, dependencies, and timeline, I ask it to find the gaps in my logic before an executive does. I use it to simulate the objections a skeptical sponsor might raise, so I have already thought through the responses. I use it to ask the question that is easiest to skip: what are we not seeing?
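In practice, that pressure-test is just a structured prompt assembled from the program's own facts. The sketch below shows one possible shape; the wording, function name, and fields are illustrative assumptions, not a prescribed template, and the resulting string would be sent to whichever LLM the organization has approved.

```python
def build_pressure_test_prompt(business_case, assumptions, dependencies, timeline):
    """Assemble a red-team prompt for an LLM from a program's key facts.

    All section headings and phrasing are illustrative -- the point is
    to force the model into the skeptical-sponsor role before the real
    steering committee does.
    """
    lines = [
        "Act as a skeptical steering-committee sponsor reviewing this business case.",
        f"Business case: {business_case}",
        "Assumptions:",
        *[f"- {a}" for a in assumptions],
        "Dependencies:",
        *[f"- {d}" for d in dependencies],
        f"Timeline: {timeline}",
        "Identify the gaps in this logic, the assumptions most likely to be wrong,",
        "and the question we are not asking. Be specific and adversarial.",
    ]
    return "\n".join(lines)

prompt = build_pressure_test_prompt(
    business_case="Consolidate three regional platforms onto one core system",
    assumptions=["Upstream systems are integration-ready",
                 "Key SMEs remain available through cutover"],
    dependencies=["Vendor API v2 release", "Data migration tooling sign-off"],
    timeline="18 months, phased by region",
)
print(prompt)
```

Nothing here is sophisticated, and that is the point: the value comes from writing the assumptions down explicitly and asking for an adversarial read, not from any clever tooling.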
That is not AI delivering code faster. That is AI as a thinking partner at the governance layer, where the decisions that shape programs are made and where the conditions for failure are either created or avoided.
Using AI to accelerate execution at the delivery layer is useful. Using AI to improve judgment at the governance layer is transformational.
The original observation that sparked this article was right: AI may make coding dramatically easier, but the hardest part of software delivery was never the code.
The hard part has always been the decisions made before the code. Who approved this? What were we actually optimizing for? What did we assume without verifying? What risk did we log and never revisit?
AI can help answer those questions if the people responsible for those decisions choose to use it that way. It can surface assumptions that go unexamined. It can challenge plans that have never been seriously stress-tested. It can give a program manager the preparation that used to require a room full of experienced advisors.
The question worth asking is not whether your engineers have access to AI coding assistants. The question is whether the people making portfolio decisions, approving business cases, and running steering committees are using AI at the layer where it can actually change outcomes.
What layer are you using AI at?