Does AI really change the rules of building software?

AI is everywhere, but what is it really like on the front lines of AI implementation? Step into the daily thinking and challenges of AI engineers – the real work that happens when AI meets actual digital products.

Weekly AI Bites is a series that gives you direct access to what’s happening in our day-to-day AI work. Every post comes straight from our team’s meetings and Slack, sharing insights, tests, and experiences we’re actively applying to real projects.

What models are we testing, what challenges are we tackling, and what’s really working in products? You’ll find all of this in our bites. Want to know what’s buzzing in AI? Check out Boldare’s channels every Monday for the latest weekly AI bite.


Instead of immediately answering the question of whether AI changes the rules of building software, it is worth – in a Socratic spirit – stopping for a moment and simply starting to ask questions. The reflection starts from Robert C. Martin’s well-known statement:

“The rules of software are the same today as they were in 1946, when Alan Turing wrote the very first code that would execute in an electronic computer.” – Robert C. Martin

What do we actually mean when we say “rules”? Are we talking about the physics of computation and algorithms, or rather about the practices of craftsmanship: architecture, responsibility, the way decisions are made, and how human work is organized?

Who is a programmer today?

This leads to another question: who is a “programmer” today? Is it a person who physically writes lines of code, or rather someone who makes decisions that the code merely materializes? If parts of a system are generated by a model, responsibility does not disappear: someone is still accountable for the consequences of those decisions, and for the safety, stability, and overall coherence of the solution. AI can write code, but it does not bear the cost of its mistakes.

Does AI change the rules – or only the pace?

At this point, a natural doubt arises: does AI truly change the rules of the game, or does it merely accelerate the pace? Debugging is still necessary, understanding the domain has not stopped being essential, and bugs have not disappeared – they simply appear faster and often in more subtle forms. Likewise, it remains true that requirements are often incomplete, users do not always know exactly what they want, and systems live longer than their creators. If these foundations remain unchanged, it is hard to argue that AI genuinely changes the rules.

Does a new tool change the nature of the problem?

We can look at this even more broadly and ask what Turing would say if he saw today’s models. Would he consider them a new language, a new compiler, or simply another tool between the idea and the machine? History teaches us that changing the tool rarely changes the nature of the problem. We still have to deal with the same complexity, responsibility, and consequences of decisions – regardless of how powerful our tools become.

Which rules truly do not change?

So if someone claims that the “rules” are changing, it is worth asking: which ones exactly? Has separation of responsibilities stopped mattering? Has coupling suddenly stopped hurting? Has complexity become free? Nothing suggests that this is the case. From a practitioner’s perspective, the rules remain unchanged: code must be unambiguous for the machine, complexity grows faster than functionality, changes become more expensive the later they are introduced, and bugs are inevitable and must be actively detected. AI does not violate any of these rules.

What does AI actually change in practice?

What truly changes is the economics of software production. The cost of producing code goes down, the speed of experimentation goes up, and the amount of code in systems grows very quickly. At the same time, the cost of understanding a system, maintaining it, the consequences of poor architectural decisions, and responsibility for how a product behaves do not change. This distinction becomes crucial today.

Where is the bottleneck today?

The most important change is the shift of the bottleneck. In 1946, the main limitation was simply writing code. Today, in the world of AI, the bottleneck becomes precise problem formulation, evaluating the correctness of solutions, and integrating them into existing systems. AI does not remove the bottleneck – it merely moves it elsewhere.

The illusion of progress and accelerated system decay

This shift also has a darker side. AI can generate huge amounts of code, often locally correct but globally inconsistent. Without engineering discipline, systems can degrade faster than before. An illusion of progress appears: code is created faster, cheaper, and with less effort, but in the background architectural chaos grows, responsibility becomes blurred, and bugs become increasingly subtle.
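The “locally correct but globally inconsistent” trap can be made concrete with a toy Python sketch (all names and helpers here are hypothetical, invented for illustration): each generated function is correct in isolation and would pass its own unit tests, yet they disagree on a unit convention, so composing them produces a subtle bug.

```python
# Toy illustration (hypothetical code): two AI-generated helpers,
# each locally correct, that silently disagree on a unit convention.

def parse_price(raw: str) -> int:
    """Helper A: parses a price string into integer cents."""
    return round(float(raw) * 100)  # "19.99" -> 1999 cents

def apply_discount(price: float, percent: float) -> float:
    """Helper B: assumes the price is in dollars (float)."""
    return price * (1 - percent / 100)

# Each function is correct on its own terms, but composing them
# mixes units: cents flow into a function that expects dollars.
price = parse_price("19.99")        # 1999 (cents)
total = apply_discount(price, 10)   # 1799.1 -- cents? dollars? Neither is stated.
```

No single function here is buggy; the inconsistency lives between them, which is exactly the kind of defect that engineering discipline (shared conventions, review, integration tests) exists to catch.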

How is the role of the engineer changing?

As a result, the role of a good engineer changes significantly. Today it is less about writing code itself and more about reading, thinking, assessing quality, breaking problems down, verifying, and making conscious decisions. These are still the same rules as before – they just operate faster and more relentlessly.

Perhaps the hardest question, then, is not whether the rules are changing, but whether we are ready to change the way we are needed in this process. And that is why Martin’s quote remains valid: AI does not change the rules of building software – it accelerates the consequences of breaking them.
