Software Engineering: what AI changes, and what it doesn’t

I’ve been creating software professionally for over two decades, both as an individual contributor and as a leader of teams. I’ve lived through major technology shifts before, but we all know this AI shift is different. We’re still early in the transformation, but I wanted to share some thoughts on what will change, and what won’t. I hope this is useful to anyone aspiring to be a great software engineer today and into the future.

Engineering ≠ Coding: now more than ever

Mechanical engineers don’t assemble motors. Civil engineers don’t pour concrete. Yet for decades, people who called themselves software engineers spent much of their days typing code. Going forward, software engineering will look more like classic engineering disciplines: more focus on requirements, constraints, guardrails, and quality, and less time on direct implementation.

Software is so foundational to modern life that it must be engineered. Even if you don’t write the code yourself, as an engineer you are responsible for the systems you build. You need to make sure you're meeting business requirements. You need to be able to diagnose and mitigate livesite incidents promptly. You need to reason about a system and its dependencies, and identify when they should be improved. You need to build the right quality gates, so systems stay correct, reliable, available, and performant even as they evolve over time.
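One concrete form a quality gate can take is a test that encodes a system invariant, so the behavior survives future changes regardless of whether a human or an AI wrote them. A minimal sketch below, with a hypothetical retry-backoff policy standing in for a real system property (the function name and thresholds are illustrative, not from any particular codebase):

```python
def backoff_delay_ms(attempt: int, base_ms: int = 100, cap_ms: int = 5000) -> int:
    """Exponential backoff with a cap: the behavior we want to protect."""
    return min(base_ms * (2 ** attempt), cap_ms)

def test_backoff_never_exceeds_cap():
    # Guards availability: an unbounded delay would stall recovery.
    assert all(backoff_delay_ms(a) <= 5000 for a in range(20))

def test_backoff_grows_before_cap():
    # Guards correctness of the retry policy itself.
    assert backoff_delay_ms(1) > backoff_delay_ms(0)
```

A gate like this runs in CI on every change; the point is not the specific policy but that the invariant is checked automatically rather than relying on whoever (or whatever) last touched the code.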

A corollary of this: if you're still focused on coding without contributing to quality, testing, reliability, and operability, you're missing the most important and enduring parts of the job.

Code quality, tech debt, and human code review: do they still matter?

In the past, code readability and review were critical because code was written for other humans just as much as for machines. Pride of ownership was sometimes sufficient when an individual owned an area of code and maintained oversight of changes. Today, however, AI models are a critical stakeholder and ally, doing more and more of the implementation over time.

I've asked myself: is an engineer in 2026 who reads, reviews, and tweaks prompts to influence AI-generated code morally equivalent to a coder needlessly hand-optimizing compiler output in 2020 (i.e., a hubristic perfectionist)? I don't think so, at least not yet.

An engineer owns their deliverables. High standards for code quality and tech debt still matter. Making sure your code & architecture remain understandable still matters. Having opinions about these things helps you develop taste: what good and bad code & architecture look like, and why. And building your codebase around repeatable patterns friendly to humans and AI will help you, your team, and your codebase scale.

Learning through effort in an AI-steamrolled world: stay curious

Many of the lessons that shaped me professionally were learned the hard way: trying, failing, and trying again.  The pain of mistakes pushed me to avoid repeating them and make new mistakes instead.  In a world where AI does a lot of the iteration and decision making on your behalf, how do you continue to grow as an engineer?

Technology has steamrolled away so much of the friction in modern life that it can make you believe that if something is hard, you’re doing it wrong. But the complex and confusing parts are where growth happens. One of the core virtues of an engineer is curiosity. Try to understand the why behind the business problems you're solving. Dig into how a framework works or why an AI produced a wrong output. Learn more, talk with colleagues, apply those learnings, and repeat.

That's how you grow, and how you'll add value to your team and yourself, regardless of how the tools evolve.



