Same AI Tool. Same Codebase. Completely Different Results.

The AI adoption problem nobody warned us about.

18 months ago, we gave our entire engineering team access to Cursor AI on the same day.

Same tool. Same codebase. Same stories.

Within two weeks, we discovered a problem we hadn’t anticipated.


Our senior engineers were generating clean, well-structured, architecturally sound code.

Our junior engineers were generating code that worked.

Technically correct. Structurally wrong.

And the difference wasn’t skill anymore.

It was prompting.

The senior developer knew exactly what context to provide: which patterns to reference, which constraints to enforce, and which assumptions the AI should never make.

The junior developer typed what made sense in the moment.

And Cursor, brilliant as it is, gave them exactly what they asked for.

Not necessarily what we needed.

Soon we found ourselves reviewing PRs late at night, trying to explain to a junior engineer why their AI-generated code was “wrong.”

All the tests passed. All the requirements were met.

So what was the problem?

Architecture. Design intent. System thinking.

And those things are very hard to explain when someone just watched AI build a feature in 40 minutes.

That’s when I realized something important:

We hadn’t adopted AI.

We had democratized inconsistency.

The real issue wasn’t the tool.

It was our assumption that a prompt is just a prompt.

It isn’t.

A prompt is a mirror.

It reflects the level of context, clarity, and engineering craft the person writing it brings.

We had eight developers.

Which meant we had eight different mirrors.

And our codebase was starting to look like it.

So I stopped asking:

“How do we make everyone better at prompting?”

And started asking a different question:

“How do we remove human variability from the parts that shouldn’t have any?”

The answer turned out to be process.

Not training. Process.

We connected Cursor AI directly to our work management system.

Now when a developer picks up a story, the AI reads the acceptance criteria directly from the ticket, not from the developer's interpretation of it.

From that single source of truth, the pipeline begins.
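The "single source of truth" step can be sketched in a few lines. This is a hypothetical illustration, not our actual tooling: the field names (`fields`, `description`, the "Acceptance Criteria" heading) are assumptions, since real Jira/ADO payloads vary by instance configuration.

```python
# Hypothetical sketch: pulling acceptance criteria straight out of the ticket
# payload, so the pipeline starts from the ticket text rather than a
# developer's retelling of it.

def extract_acceptance_criteria(ticket: dict) -> list[str]:
    """Return the bullet lines under an 'Acceptance Criteria' heading."""
    description = ticket.get("fields", {}).get("description", "")
    criteria: list[str] = []
    in_section = False
    for line in description.splitlines():
        stripped = line.strip()
        if stripped.lower().startswith("acceptance criteria"):
            in_section = True
            continue
        if in_section:
            if stripped.startswith("- "):
                criteria.append(stripped[2:])
            elif stripped:  # a new heading or paragraph ends the section
                break
    return criteria


ticket = {
    "fields": {
        "description": (
            "Summary text.\n"
            "Acceptance Criteria:\n"
            "- User can reset password via email\n"
            "- Reset link expires after 15 minutes\n"
            "Notes: out of scope for this story."
        )
    }
}
print(extract_acceptance_criteria(ticket))
# → ['User can reset password via email', 'Reset link expires after 15 minutes']
```

Everything downstream reads from that extracted list, so two developers picking up the same story feed the AI the same context.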

First, the AI generates a Story Analysis: edge cases, dependencies, scope boundaries, and questions that need clarification.

Then it follows TDD. Tests are written first, always.
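To make the tests-first step concrete, here is a hypothetical example: a test derived directly from an acceptance criterion ("reset link expires after 15 minutes"), agreed on before any implementation exists. The names are invented for this sketch.

```python
# Tests-first illustration: these assertions pin down the acceptance
# criterion before the implementation is written.
from datetime import datetime, timedelta


def is_reset_link_valid(issued_at: datetime, now: datetime) -> bool:
    """Implementation written only after the tests below were agreed on."""
    return now - issued_at <= timedelta(minutes=15)


def test_link_valid_within_window():
    issued = datetime(2024, 1, 1, 12, 0)
    assert is_reset_link_valid(issued, issued + timedelta(minutes=14))


def test_link_expired_after_window():
    issued = datetime(2024, 1, 1, 12, 0)
    assert not is_reset_link_valid(issued, issued + timedelta(minutes=16))
```

Because the tests come straight from the ticket's criteria, "all tests pass" starts to mean "the story is done," not "the code happens to run."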

Next, it produces a High-Level Design document: component breakdown, data flow, architectural decisions, and reasoning.

That design document comes to me for review.

Nothing moves forward until the architecture is approved.

Only then does implementation begin.
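The whole gate can be modeled as a small state machine: implementation is unreachable until the design is explicitly approved. The stage names mirror the pipeline described above; the class itself is an illustration under those assumptions, not our actual tooling.

```python
# Minimal sketch of the approval gate: advancing past DESIGN without an
# explicit approval raises, so implementation cannot start early.
from enum import Enum, auto


class Stage(Enum):
    STORY_ANALYSIS = auto()
    TESTS = auto()
    DESIGN = auto()
    IMPLEMENTATION = auto()


class StoryPipeline:
    def __init__(self) -> None:
        self.stage = Stage.STORY_ANALYSIS
        self.design_approved = False

    def approve_design(self) -> None:
        self.design_approved = True

    def advance(self) -> Stage:
        if self.stage is Stage.DESIGN and not self.design_approved:
            raise PermissionError("Design must be approved before implementation")
        order = list(Stage)
        self.stage = order[order.index(self.stage) + 1]
        return self.stage


p = StoryPipeline()
p.advance()            # STORY_ANALYSIS -> TESTS
p.advance()            # TESTS -> DESIGN
try:
    p.advance()        # blocked: design not yet approved
except PermissionError as e:
    print(e)
p.approve_design()
print(p.advance())     # Stage.IMPLEMENTATION
```

Encoding the gate in the process, rather than in a checklist, is what removed the human variability: nobody has to remember to wait for review, because the pipeline cannot proceed without it.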

When I first presented this approach, I expected pushback about complexity.

Instead, one of my senior engineers went quiet.

Then he said something honest:

“This feels like you’re replacing us.”

I understood the concern.

So I told him:

“I’m not automating your job.

I’m automating the part that was never really yours to begin with: the mechanical translation from requirements to code.

What’s yours (judgement, architecture, and deciding what we should build and why) still needs you.

And it always will.”

It took about three months for the team to fully trust the system.

There were debates. There were failures. There were moments when someone said, “I told you so.”

We fixed the process and kept going.

Today, about 80% of our feature code moves through this pipeline.

Productivity is up 60%.

But the result I’m most proud of isn’t the productivity gain.

It’s what happened to our junior engineers.

Every story they pick up now includes a Story Analysis, a TDD structure, and a high-level design that a senior engineer has approved.

They’re not just writing code anymore.

They’re learning to think like architects.

The gap between junior and senior engineers started to close.

Not because AI made them senior.

But because the process consistently gave them the context seniors carry in their heads.

The engineer who once worried AI would replace him still jokes about that conversation.

But now he knows what his job actually is.

And it’s the most interesting version of engineering he’s had in years.

AI didn’t level up our team.

Building a system that made AI consistent, governed, and trustworthy did.

The tool was never the answer.

The system around the tool was.

What’s the hardest part of AI adoption your engineering team is facing right now?

#EngineeringLeadership #FrontendArchitecture #AIEngineering #SoftwareEngineering #Angular #WebDevelopment #DevProductivity #TDD



Happy to share more on how we structured the MCP connection to ADO/JIRA and the Story Analysis template if useful; drop a comment or DM me.
