Your team shipped 76% more code last year. Your outages also went up.

Developers wrote an average of 7,839 lines of code in 2025, up from 4,450 the year before. Median PR size jumped 33%. Files got 20% denser. And according to an analysis of vendor status pages, system outages have climbed steadily since 2022. We got faster at producing code. We did not get faster at producing working software.

There's a popular narrative that AI-assisted development will inevitably drown us in low-quality slop. More code, worse code, forever. But that ignores how markets actually work. Generating clean, simple code costs fewer tokens than generating tangled messes. Maintaining readable code is cheaper than debugging spaghetti. The economics point toward quality, not away from it.

John Ousterhout nailed it years ago: complexity is the primary enemy of software. Good code is simple and modifiable. Bad code demands context that no one — human or AI — wants to carry.

Right now we're in the messy middle. The incentive structure rewards shipping fast: users get features, model providers bill tokens, developers skip review. But competition among AI models will eventually punish the ones that produce expensive-to-maintain output.

The real question is not whether AI code will be good. It's whether engineering teams will have the discipline to demand it before the market forces them to.

#AI #SoftwareEngineering #CodeQuality #DeveloperProductivity #AITools #StartupLife #TechLeadership

Join Agentic Engineering Club → t.me/villson_hub
Dmytro Diachenko’s Post
More Relevant Posts
Developers shipped 76% more code in 2025 than the year before. And software got worse.

Lines of code per developer rose 76% — from 4,450 to 7,839. Median PR sizes jumped 33%. Every metric that measures output went up. Meanwhile, software outages have climbed steadily since 2022. Not a coincidence.

Here's what's happening: AI coding tools made it trivially easy to produce code. So everyone produced more of it. But nobody changed the incentive structure. Developers still get rewarded for shipping features, not for writing code that doesn't break six months later.

The counterintuitive part — and the part most teams are missing — is that good code is actually cheaper to generate with AI than bad code. Simple, well-structured modules need fewer tokens to maintain, fewer tokens to debug, fewer tokens to extend. Complexity compounds in token cost the same way it compounds in human hours.

John Ousterhout called this years ago in A Philosophy of Software Design: complexity is the root cost of all software. AI didn't change that equation. It amplified it.

The companies that figure this out first will spend less on compute, ship more reliable products, and move faster than competitors drowning in their own AI-generated spaghetti. More output was never the goal. Better systems were.

#AI #SoftwareEngineering #CodeQuality #DevProductivity #StartupLife #TechLeadership #AgenticAI

Join Agentic Engineering Club → t.me/villson_hub
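A rough way to see why complexity compounds in token cost: every AI-assisted change pays for the context the model has to read before it can write anything. The sketch below is a back-of-envelope illustration with invented numbers (tokens per line, context sizes, output size); it is not data from the post, just the shape of the argument.

# Back-of-envelope illustration (invented numbers, not from the post):
# each AI-assisted change pays for the context the model must read first.
def tokens_per_change(context_loc: int, tokens_per_loc: float = 10.0,
                      output_tokens: int = 2_000) -> int:
    """Rough token cost of one change: read the relevant context, then write the edit."""
    return int(context_loc * tokens_per_loc) + output_tokens

# A well-factored module lets the model read ~300 lines before making a change;
# a tangled one drags in ~5,000 lines of interleaved concerns.
clean = tokens_per_change(context_loc=300)      # ~5,000 tokens
tangled = tokens_per_change(context_loc=5_000)  # ~52,000 tokens
print(round(tangled / clean, 1))                # ~10x more tokens per change

Run that across the hundreds of changes a codebase absorbs in a quarter and the economics described above fall out: the simpler module is the cheaper one to keep handing to a model.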
The IDE has been the central nervous system of software engineering for 30 years. But we are officially entering the 'Post-IDE' era.

We're moving from tools that assist us to autonomous agents that act as collaborative partners. This isn't just about better autocomplete; it's a fundamental shift in the developer's role.

Key shifts to watch:
- From Syntax to Intent: Coding is becoming a high-level reasoning task rather than text manipulation.
- From Editor to Architect: Developers are evolving into 'Reviewers-in-Chief,' orchestrating intelligent systems.
- Repository-Wide Context: Agents now index entire codebases to understand dependencies and business logic, not just the open file.

While the efficiency gains are massive, the challenges — like security and technical debt at scale — require us to double down on system design and architectural knowledge.

Are you ready to stop writing code and start managing it?

https://lnkd.in/ejk54gpf

#SoftwareEngineering #GenerativeAI #FutureOfWork #AIProgramming #SystemDesign
When I first came across the Claude Code leak news, the developer in me couldn't resist. There's always that curiosity: let's see what they've really built. So I did what any dev would do: went straight in, cloned it, and started exploring.

At first glance, things looked impressive. Clean architecture. Well-structured modules. Thoughtful separation of concerns. You could clearly see a team that plans for scale and future iterations, not just quick shipping.

But as I dug deeper, a different picture started to emerge. There was no AI brain. No model weights. No secret sauce. What I found instead was:
- Wrappers
- Orchestration layers
- Tooling around the model
- Integration logic

In short: everything around the intelligence, not the intelligence itself.

And that's where it gets interesting. Because the way this story unfolded (headlines, urgency, takedowns) created the perception of something massive being exposed. But from a developer's lens, the actual substance felt… controlled.

It raises a fair question: was this truly a critical leak, or a moment that unintentionally (or intentionally) amplified visibility?

Either way, one thing is certain: it got developers like me to stop, look, and engage. And in today's attention economy, that alone is powerful.

"This is not a source of truth, just my perspective as a developer exploring what was available."

#AI #Claude #Developers #TechPerspective #Engineering
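For anyone unfamiliar with what "everything around the intelligence" tends to look like, here is a minimal, purely hypothetical sketch of that kind of scaffolding: a thin wrapper that packages a prompt and calls a hosted model endpoint, plus an orchestration function that decides what to ask. Every name and URL below is invented for illustration; none of it is taken from, or claimed to resemble, the leaked repository.

import json
import urllib.request

# Hypothetical model endpoint; the "intelligence" lives behind this URL,
# not in the client code below.
MODEL_ENDPOINT = "https://api.example.com/v1/generate"

def call_model(prompt: str) -> str:
    """Wrapper layer: package the prompt, call the hosted model, unwrap the reply."""
    payload = json.dumps({"prompt": prompt}).encode("utf-8")
    req = urllib.request.Request(MODEL_ENDPOINT, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["text"]

def run_task(goal: str) -> str:
    """Orchestration layer: decide what to ask, call the model, hand off the result."""
    plan = call_model(f"Break this goal into concrete steps: {goal}")
    # Integration logic: a real tool would validate and execute the plan here.
    return plan

The interesting part of the post is exactly this division of labor: the client-side code can be perfectly well engineered and still contain none of the model itself.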
Developer productivity is a vanity metric. Yes — I said it.

For years we've been obsessed with:
→ velocity
→ commits
→ lines of code
→ story points

And now with AI, we can generate 10x more code overnight. So… are we 10x better? Of course not. We just created a new problem:
👉 Uncontrolled code production

More code = more cost
More code = more risk
More code = more entropy

And most teams have zero systems to control it.

This is the shift nobody is talking about:
→ The problem is no longer writing code
→ The problem is governing what gets produced

Welcome to the next phase: code production is becoming autonomous. And without governance, it breaks everything.

This changes the role of engineers completely. You're not just writing code anymore. You're responsible for:
→ what gets generated
→ how it behaves
→ how it evolves over time

You are becoming a governor of autonomous systems.

If you're still optimizing for “developer productivity,” you're optimizing for the wrong era.

Read this before it hits your team: https://lnkd.in/ecB4MYYc
AI has broken a lot of people's brains about software development. Everyone thinks because you can vibe code a prototype in a weekend, you can build a real product the same way. Sometimes you can. Most of the time, you can't.

Can you build something awesome? Yes! But it does take constant care and feeding to retain that 'Awesome' label.

Before we build anything, we ask these 3 questions:
* How many people is this actually for?
* When it breaks, who knows and how do we fix it?
* When requirements change next week, does the product bend or break?

That's it. Not 14 architecture diagrams. Not 6 weeks of sprint planning. Not a giant requirements doc nobody reads. Just real answers to the 3 things that actually matter.

AI has massively compressed the time from idea to working software. That part is real. What hasn't changed is that bad assumptions still get expensive fast.

The winners won't be the teams that build the fastest. They'll be the teams that put the right guardrails around speed. Garbage in, garbage out holds true.
🚀 From "Co-pilot" to "Tech Lead": 4 Months with Claude Code After 4 months of heavy production use, I’ve fully adapted to the Claude Code ecosystem. The transformation has redefined my workflow. Here’s the honest difference I felt immediately: Claude Code is agent-first. You describe the goal in natural language, and it takes the wheel. It plans, reads the entire codebase, runs commands, handles multi-file changes, and even manages sub-agents for specialized tasks like refactoring or database updates. The strengths are undeniable: 🧠 Superior Deep Reasoning: It masters complex refactors and architecture where other tools often guess. 🛠️ True Autonomy: I could confidently step away to focus on high-level strategy while it executed the heavy lifting. 🤝 Parallel Work Efficiency: Managing multiple agent teams feels less like prompting and more like coordinating with a senior engineering squad. But it’s important to acknowledge the shift: it isn’t built for speed with quick, inline edits. For micro-tasks, Traditional Inline Edits are still faster. My conclusion? If Cursor felt like an advanced power tool, Claude Code feels like handing off the job to another senior engineer. Curious: How many of you have tried leveraging the full agentic mode of Claude Code yet? Is the autonomy changing how you approach complex builds? Let’s discuss. ⬇️ (Tomorrow, I’ll be dropping a head-to-head performance breakdown comparing it directly with Cursor. Stay tuned.) #ClaudeCode #AgenticAI #AICoding #SoftwareEngineering #TechInnovation
300,000 lines of code. 10 days. Then I deleted all of it.

I do not know how to code. I have never done software development. But with Claude Code, I generated nearly 300,000 lines in a 10-day marathon—only to realize the most basic pipeline was broken, and every tiny change cost a fortune in tokens. It was a mountain of code that looked impressive but created zero value.

So I started over. This time, I did not rush to implementation. I spent hours talking through the core problem, writing the MRD and PRD, and designing the architecture before a single line was written. I tested core functions at around 5,000 lines. I questioned every change against the design principles I had written down. Forty-eight hours later, the system worked end-to-end. With less than 10,000 lines of code.

Here is what I learned:

Every line of code is a liability, not an asset. In the vibe coding era, an AI can build a plausible-looking mountain of code in days. But the larger the codebase, the harder and more expensive it becomes for the AI to maintain. Without discipline, you are not building assets—you are piling up technical debt.

Vibe coding is not easy. It is not relaxing while the AI works. It is like managing five developers simultaneously: constant context switching, judgment calls, and course corrections. After six hours, your brain is fried.

And most importantly: the AI cannot tell you what truly matters. That question is yours to answer before you start.

Read the full article: https://lnkd.in/gBSkxg7K

#VibeCoding #AI #SoftwareEngineering #ProductManagement #ClaudeCode
Story points are dead. I'll prove it.

For 20 years we've estimated software in points — an abstract, unfalsifiable unit everyone quietly knew was theater. It kind of worked when humans wrote 100% of the code.

In 2026, agents write the first draft of most production code. The question changed: "How much will this task cost to ship?" And that question now has a REAL answer in dollars — the price of the tokens the model burns getting the work done. It's on the API invoice. Every month. Ignored.

So I shipped a proposal for a framework called TokenPoints: estimate work in dollars of inference, not hours or points.
→ A 6-pillar manifesto
→ XS–XL sizing scale anchored in USD (calibrated to 2026 agentic reality — a serious Opus session crosses $50 easily)
→ A 2-sprint calibration playbook
→ Tracking templates and worked examples (frontend, refactor, debug)
→ Anti-patterns (including the obvious one: NEVER compare $/task between developers)

It's v0.1, open source, CC BY 4.0. It's almost certainly wrong in several places — which is exactly why it's on GitHub. So real teams can use it, contribute anonymized calibration data, and sharpen it together.

Repo: https://lnkd.in/dEvz8MS7

EDIT: I guess TokenPoints is a BIS name here

#AI #Agile #SoftwareEngineering #LLM #ProductManagement
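To make the idea concrete, here is a minimal sketch of what estimating in dollars of inference could look like. The price-per-token figures and XS–XL thresholds below are invented for illustration and are not taken from the TokenPoints repo; the only number carried over from the post is the observation that a heavy agentic session can cross $50.

# Hypothetical dollars-of-inference sizing. Thresholds are made up;
# calibrate them against your own API invoices.
SIZE_THRESHOLDS_USD = [
    ("XS", 1.0),    # trivial edit, a few short agent calls
    ("S", 5.0),
    ("M", 20.0),
    ("L", 50.0),    # the post notes a serious agentic session can exceed this
    ("XL", float("inf")),
]

def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      usd_per_m_input: float, usd_per_m_output: float) -> float:
    """Translate a task's expected token usage into dollars of inference."""
    return (input_tokens / 1e6) * usd_per_m_input + (output_tokens / 1e6) * usd_per_m_output

def token_point_size(cost_usd: float) -> str:
    """Map an estimated dollar cost onto an XS-XL bucket."""
    for label, ceiling in SIZE_THRESHOLDS_USD:
        if cost_usd <= ceiling:
            return label
    return "XL"

# Example: a refactor expected to read ~2M tokens and emit ~300K tokens
# at (hypothetical) $3 per million input and $15 per million output tokens.
cost = estimate_cost_usd(2_000_000, 300_000, 3.0, 15.0)
print(f"${cost:.2f} -> {token_point_size(cost)}")   # $10.50 -> M

The appeal, as the post argues, is falsifiability: at the end of the month the API invoice either agrees with the estimate or it does not.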
Before any code is committed, developers spend hours exploring, debugging, experimenting. None of that appears in your reporting.

Think about what actually happens during a typical development session. A developer picks up a task, reads the requirements, and starts navigating the codebase to understand where the change needs to go. That exploration might take 30 minutes or three hours depending on documentation quality and familiarity with the relevant components. Then comes the actual coding, debugging, and iteration before anything is ready to commit. The code gets written, revised, and shaped through a process that is invisible to every tool that operates at or after the commit boundary.

This is the inner loop of software development, and it represents roughly 80% of where engineering work actually happens. It is where developers struggle with unclear requirements. It is where AI tools either accelerate delivery or create friction.

Measuring only what ships is like evaluating a surgeon's skill by reading the discharge summary.

See what CodeTogether captures before the commit: https://hubs.ly/Q049RF9q0

#EngineeringIntelligence #SoftwareDevelopment #InnerLoop #DeveloperProductivity #EngineeringLeadership