The Junior Engineer Question
For the last three issues we've spent most of our time on what AI is doing to senior engineering work. April produced two pieces of evidence that the more interesting question might sit at the other end of the experience curve. It's something we've been thinking about in-house, and it feels worth writing down.
What happens to a junior engineer when nobody's watching
Anthropic published a controlled experiment earlier this year that's been making its way slowly through engineering leadership conversations. They took 52 mostly junior software engineers, all with at least a year of regular Python experience, and asked them to learn Trio, an asynchronous programming library none of them had used before. Half were given an AI assistant integrated into the coding environment. The other half worked with documentation and web search only.
They were then given a comprehension quiz immediately after the task to see how much of their own work they truly understood. The AI-assisted group averaged 50 per cent. The unassisted group averaged 67. A gap of roughly two letter grades, with debugging skills showing the steepest decline.
The more interesting finding sat inside the AI group. The researchers looked at how participants actually interacted with the assistant. Engineers who used AI as a delegate, generating code and moving on, scored below 40 per cent. Engineers who used AI to ask conceptual questions while continuing to write code themselves scored 65 or higher. Same tool. Same task. Same access. The variable was what the engineer brought to the interaction and what the interaction left them with afterwards.
If junior engineers using AI as a delegate don't develop the comprehension needed to validate AI-written code, the cost can remain unseen for some time. It surfaces two years down the line, when those same engineers are reviewing pull requests, debugging incidents, and making architectural calls without the foundation to do any of it well.
Microsoft's engineering leadership has been calling this a seniority-biased technology shift. The same tool, used the same way, makes senior engineers faster and can make junior engineers slower. The mechanism is the one Anthropic's experiment surfaced. AI is good at producing code. It's neutral on whether the engineer next to it understands what's been produced. That part still has to be designed in. The inquisitive engineer is the one who embeds comprehension as part of their workflow. The onus is on them, rather than the AI.
The Amazon example
In early March, two AWS outages were traced to AI-assisted code that had reached production without proper review. The second one caused roughly a 99 per cent drop in U.S. order volume and around 6.3 million lost orders. Amazon's response was a 90-day code safety reset across 335 critical systems, with a new written policy: AI-assisted code changes must be approved by a senior engineer before deployment.
The policy is doing the same work the Anthropic experiment describes, just at a different scale. It's an engineering organisation acknowledging that AI output, validated by someone without the comprehension to check it, costs more than the time it saves. The experiment shows the mechanism at the individual level. The policy is the institutional response.
A development problem, not a hiring problem
The conversation around junior engineers in 2026 has mostly been framed as a hiring problem. Companies have been cutting entry-level roles, the assumption being that AI does the work juniors used to do, so the headcount can come out of the budget.
We don't think it's a hiring problem. It's a development problem, and treating it as the first quietly creates the second.
For most of software's history, junior engineers developed by writing code, getting it reviewed, breaking things, and absorbing context from the colleagues who fixed what they broke. The apprenticeship loop assumed that writing the code was the primary unit of learning, and that someone more experienced was close enough to the work to catch what the junior missed.
AI changes both ends of that loop. The mechanical work that taught juniors how systems actually behave now happens in seconds, performed by a tool that doesn't explain itself. Senior engineers have less time per pull request because the volume has gone up. And if the team has trimmed the mid-tier in favour of "senior plus AI", which much of the industry has been doing, the bridge between junior and senior thins out.
The Anthropic data is essentially a measurement of what happens when a junior engineer goes through that loop without the structure that used to come with it. The Amazon policy is what an organisation does when it can no longer afford to assume the structure exists.
How we think about it at MWS
We've watched this play out from a particular angle, because we've been deliberately structuring our teams the other way around for over a decade.
Our juniors are all in. They sit inside architecture discussions, design reviews, and the back and forth before anyone writes a line of code. Sometimes they're contributing; more often than not, they're listening. Either way, they're learning how senior engineers think before they're asked to think that way themselves.
What that produces, and what we think clients actually value when they work with us, is continuity. The junior who joined a client engagement three years ago is often still on it today, now operating at mid or senior level, with three years of accumulated context about that client's systems, decisions, and trade-offs. They know which corners were cut and why. They remember the incident from eighteen months ago that explains why a particular service is structured the way it is. What’s more, they have sat through the conversations where the architectural calls were made, often as the most junior person in the room, and they understand the reasoning rather than just the outcome.
That continuity, in the case of nearshore engineering, is the actual product. The mentorship structure underneath it is what makes it possible. Senior engineers who teach end up working with juniors who stay, and juniors who stay end up becoming the seniors who teach the next cohort. The loop is where the gold lies.
We supplement it with internal coding workshops run by our own engineers. These go beyond the language tutorials and framework introductions that are everywhere. Instead, we cover how to scope a problem when the requirement is vague, how to spot the assumption hidden inside a ticket, and how to read a codebase you didn't write. The things you only learn by getting them wrong, taught by people who've made the mistakes already.
We've watched plenty of engineers join us as juniors and grow into roles where they lead engineers of their own. The thing that made the difference in each case wasn't a single course or moment. It was the steady accumulation of being in conversations they weren't yet qualified to lead, until one day they were. And sometimes it was taking a risk, something AI is averse to. By that point they're not just senior engineers in the abstract. They have grown into senior engineers who know our clients' systems as well as anyone inside the client's own organisation.
For the client, that's the practical pay-off. The senior engineer reviewing AI-generated code on a project is, in most cases, the same person who was writing junior-level code on that same project two or three years ago. They know what the code is supposed to do because they helped decide it. The kind of mistake the Anthropic experiment measured, accepting an AI suggestion without the comprehension to validate it, is much harder to make when the engineer has been living with the system for years.
On Sofia, and the engineering culture around us
The other piece of this is geographic. Bulgaria has been a serious engineering culture for longer than most people outside it realise, and the talent pipeline coming out of the Technical University of Sofia, where a number of our engineers studied, sits at the centre of that.
The curriculum is heavy on mathematics, fundamentals, and first-principles thinking, which produces engineers who can hold a system in their head and reason about how it actually behaves. That foundation matters even more in this new engineering loop. The cultural piece around it is harder to put into words yet just as important. Bulgarian engineers are taught to be curious, to question the status quo, and to express their thinking through their code. They aren't encouraged to defer or, like the delegating participants in the Anthropic experiment, to pass on code they don't understand.
That orientation produces something specific in the AI era. Engineers who don't accept a suggestion just because it compiles. Engineers who want to understand the why before they accept the what. The Anthropic experiment essentially measured the gap between those two ways of relating to AI output, and we've watched the same gap show up in our work for years. We know which side of it we want our engineers on.
Closing
The teams that will be in good shape eighteen months from now aren't necessarily the ones with the strongest senior bench today. Instead, we believe it will be the ones whose seniors are expected to teach as part of the job, and whose juniors are expected to understand what they ship rather than just ship it. It is a sustainable model that will lead to brighter futures for both our clients and our engineers. That structure is harder to assemble than it sounds, and it doesn't show up in any productivity dashboard. But it's where senior engineers come from. It always has been. The companies that forget that might just find out the hard way.
Speak to you next month.
MWS