AI Coding Tools Have a Developer Productivity Problem

Engineers try AI coding tools, build a prototype, and go back to their old workflow.

I use these tools every day. I started with Cursor, moved to Codex, and now use Claude Code. They have changed how I write code, documents, and presentations. Many engineering and product leaders I know are working to bring that same productivity to their teams.

But the adoption curve is slower than anyone expected. People experiment, maybe build something from scratch, then return to how they worked before. And not just at classic software companies: I hear the same from friends in healthcare, real estate, and other fields. Many organizations are finding that adoption takes more than access.

Over the past year, I have been hearing from individual contributors across seniority levels about their experience. The answers are remarkably consistent.

What engineers keep telling me

The demos look effortless. Someone builds an app from a prompt in 20 minutes. But those demos start from zero: no existing codebase, no legacy dependencies, no production constraints. Real codebases are different. APIs with undocumented prerequisites get called out of order. Error codes are inconsistent, so the AI cannot recover. There is no validation layer, so the tool reports success when nothing actually worked. Runs are slow and expensive.
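
To make that last failure mode concrete: a validation layer can be as simple as refusing to trust the agent's "done" until the codebase's own checks pass. Here is a minimal sketch in Python; `pytest` and `ruff` are stand-ins for whatever checks your project actually runs, and nothing here is specific to any one AI tool.

```python
import subprocess

# Stand-ins for your project's real checks; swap in whatever your
# codebase already uses (build, tests, linters, smoke tests).
CHECKS = [
    ["pytest", "-q"],        # does the change actually pass the tests?
    ["ruff", "check", "."],  # does it meet the lint bar?
]

def validate_agent_change() -> bool:
    """Run real checks after an AI-generated change. The agent's own
    success message counts for nothing until these pass."""
    for cmd in CHECKS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"FAILED: {' '.join(cmd)}\n{result.stdout}{result.stderr}")
            return False
    return True

if __name__ == "__main__":
    if validate_agent_change():
        print("verified")
    else:
        print("agent claimed success; the checks say otherwise")
```

Wire something like this into the loop and "success" starts meaning something again.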

Stack Overflow's 2025 survey found that 80% of developers use AI tools but only 29% trust the output, down from 40% the year before.

Beyond the tooling gap, engineers often do not know where to apply these tools in their daily work. Debugging? Writing tests? Refactoring? Deployment? Without someone showing them a specific workflow that saves real time on their actual tasks, they try the tools a few times and move on.

And the learning curve is steeper than expected. Several experienced developers told me they are actually slower using AI tools on codebases they know well, despite expecting to be faster. It is like learning a new programming language: a different way of producing code. The cognitive overhead of prompting, evaluating output, and deciding what to keep outweighs the benefit until you build the muscle.

None of this means the tools are not capable. It means AI productivity tooling requires platform changes, the same way CI/CD required pipeline infrastructure before it could deliver on its promise.

We have seen this before

This pattern looked familiar to me. Engineers have access to a powerful tool. The tool works in controlled conditions. But in the actual production environment, it does not perform as well, and adoption stalls.

This is a developer productivity problem.

The developer productivity discipline exists because the industry learned, over decades, that access to better tools is not enough. It takes organizational investment to make those tools productive in a specific environment. Published research on developer productivity has consistently found that non-technical factors, like peer support, enthusiasm, and quality feedback, predict productivity more than the technical tools themselves. The tools matter. But the environment around them matters more.

I have seen this firsthand. The teams that got the most out of new tooling always had someone who owned the gap between what the tool could do and what engineers actually needed.

You need a champion

Technology adoption depends on opinion leaders: the people engineers voluntarily follow, who see the benefit themselves and help others get there.

In practice, that means every team benefits from a champion who does three things:

Makes the platform AI-ready. Production codebases were not built with AI agents in mind. Someone needs to configure the tools for the specific codebase and infrastructure, build validation layers so the AI does not silently produce broken output, and set up guardrails that make the results trustworthy.

Discovers workflows that actually save time. A developer who reviews pull requests all day needs a different AI workflow than one writing infrastructure from scratch. The champion figures out what works for their team's real tasks, not what works in a demo, and shows people.

Iterates based on what is landing. Measure what engineers are actually adopting, what they are abandoning, and why. Adjust. This is what developer productivity teams have always done. AI tooling is the next domain that needs it.
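
As a sketch of what that measurement could look like in practice: suppose, purely as an assumed convention and not a git or vendor feature, that AI-assisted commits carry an `AI-Assisted: <tool>` trailer. Git trailers themselves are standard, so a few lines of Python give the champion a recurring adoption signal.

```python
import subprocess
from collections import Counter

# Assumed convention: commits made with AI help carry a trailer line like
#   AI-Assisted: claude-code
# The trailer name is hypothetical; git trailers themselves are standard.

def adoption_counts(since: str = "30 days ago") -> Counter:
    """Count AI-assisted commits per tool in the recent git history."""
    log = subprocess.run(
        ["git", "log", f"--since={since}",
         "--pretty=%(trailers:key=AI-Assisted,valueonly)"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Commits without the trailer print empty lines; skip them.
    return Counter(line.strip() for line in log.splitlines() if line.strip())

if __name__ == "__main__":
    counts = adoption_counts()
    if not counts:
        print("No AI-assisted commits found in the window.")
    for tool, n in counts.most_common():
        print(f"{tool}: {n} commits in the last 30 days")
```

Pair counts like these with revert rates or engineer feedback and the champion is adjusting based on a real signal, not a hunch.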


The models are getting more capable every quarter. The tooling is maturing. But the gap between "available" and "productive" is an organizational challenge, and it is one this industry knows how to close.

Who is your team's AI productivity champion?

Opinions expressed are my own and based on discussions with people across the tech industry.

80% adoption but 29% trust... that tells you everything about where the real work is. It's not the tool, it's the integration into the actual workflow.


Itay, you've nailed the core tension. The productivity problem isn't that AI tools are bad — it's that most teams adopted them without changing their workflows. You can't just plug AI into a legacy process and expect 10x gains. The orgs winning with AI coding tools are treating them as a *system* change, not a tool change: they've restructured how they do code review, onboarding, documentation, and sprint planning around AI-native workflows. The biggest unlock? Getting everyone on the team to a baseline level of AI prompting fluency so you're not bottlenecked by one "AI whisperer." What patterns are you seeing in the teams that DO get productivity gains?


Time is running out for companies that are still struggling with AI productivity issues. The gap is widening between the companies whose dev teams are embracing and extending these capabilities and those that are not. Just last week, I finally gave up reviewing AI pull requests (PRs). I still require the AI to make them for tracking, but now they get merged without review. I know teams that don't even do PRs, just merge straight to main, and use commit hashes for tracking. I marvel at their courage. The future is for the bold engineers, not the timid. The same goes for the enterprise.
