Developers shipped 76% more code in 2025 than the year before. And software got worse.

Lines of code per developer nearly doubled, from 4,450 to 7,839. Median PR sizes jumped 33%. Every metric that measures output went up. Meanwhile, software outages have climbed steadily since 2022. Not a coincidence.

Here's what's happening: AI coding tools made it trivially easy to produce code. So everyone produced more of it. But nobody changed the incentive structure. Developers still get rewarded for shipping features, not for writing code that doesn't break six months later.

The counterintuitive part, and the part most teams are missing, is that good code is actually cheaper to generate with AI than bad code. Simple, well-structured modules need fewer tokens to maintain, fewer tokens to debug, fewer tokens to extend. Complexity compounds in token cost the same way it compounds in human hours.

John Ousterhout made exactly this point in A Philosophy of Software Design: complexity is the root cost of all software. AI didn't change that equation. It amplified it.

The companies that figure this out first will spend less on compute, ship more reliable products, and move faster than competitors drowning in their own AI-generated spaghetti.

More output was never the goal. Better systems were.

#AI #SoftwareEngineering #CodeQuality #DevProductivity #StartupLife #TechLeadership #AgenticAI

Join Agentic Engineering Club → t.me/villson_hub
Dmytro Diachenko’s Post
More Relevant Posts
Most developers look at legacy code and immediately think: "this needs to be rewritten." That's usually the wrong instinct.

Legacy code is often just software that was built under real constraints: tight deadlines, changing requirements, older tools, and pressure to ship something that works. If it has survived in production, it already carries value. The real engineering skill is not rewriting from scratch, it's understanding the system well enough to improve it safely.

A practical way to approach it:
• avoid full rewrites unless absolutely necessary
• add tests before touching critical logic
• improve readability incrementally through better naming and small refactors

This is also where AI is becoming genuinely useful. AI can help explain unfamiliar code, suggest refactors, and even generate initial test cases, which significantly reduces the time needed to understand older codebases.

But this is the important part: AI still does not understand business context, hidden dependencies, or why certain "weird" decisions were made years ago. That judgment still belongs to engineers.

In real-world software engineering, most work is not greenfield development. It's improving systems that already exist. The engineers who stand out are not the ones who rewrite everything. They are the ones who can make complex systems better without breaking production.

Legacy code is not just old code. It's accumulated product knowledge, business logic, and engineering decisions in code form. Treat it as a system to evolve, not a mess to replace.

#SoftwareEngineering #LegacyCode #Refactoring #CleanCode #AIinTech #DeveloperProductivity #SystemDesign #TechCareers #LinkedInTech
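The "add tests before touching critical logic" step is often done with characterization tests: pin down what the legacy code currently does, quirks included, before changing anything. A minimal sketch in Python, where `calc_discount` is a hypothetical legacy function standing in for real critical logic:

```python
# Characterization test: record the CURRENT behavior of legacy code
# before refactoring. `calc_discount` is a hypothetical legacy function.

def calc_discount(total, customer_type):
    # Legacy logic preserved as-is, "weird" branches included.
    if customer_type == "vip":
        return total * 0.8
    if total > 100:
        return total * 0.9
    return total

def test_characterization():
    # These assertions document observed behavior; they do not judge
    # whether that behavior is "right". They exist so a refactor that
    # changes the output fails loudly.
    assert calc_discount(200, "vip") == 160.0
    assert calc_discount(200, "regular") == 180.0
    assert calc_discount(50, "regular") == 50

test_characterization()
```

With the current behavior pinned down, renames and small structural refactors can proceed safely: any change that alters output breaks the test instead of breaking production.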
Your team shipped 76% more code last year. Your outages also went up.

Developers wrote an average of 7,839 lines of code in 2025, up from 4,450 the year before. Median PR size jumped 33%. Files got 20% denser. And according to an analysis of vendor status pages, system outages have climbed steadily since 2022.

We got faster at producing code. We did not get faster at producing working software.

There's a popular narrative that AI-assisted development will inevitably drown us in low-quality slop. More code, worse code, forever. But that ignores how markets actually work. Generating clean, simple code costs fewer tokens than generating tangled messes. Maintaining readable code is cheaper than debugging spaghetti. The economics point toward quality, not away from it.

John Ousterhout nailed it years ago: complexity is the primary enemy of software. Good code is simple and modifiable. Bad code demands context that no one, human or AI, wants to carry.

Right now we're in the messy middle. The incentive structure rewards shipping fast: users get features, model providers bill tokens, developers skip review. But competition among AI models will eventually punish the ones that produce expensive-to-maintain output.

The real question is not whether AI code will be good. It's whether engineering teams will have the discipline to demand it before the market forces them to.

#AI #SoftwareEngineering #CodeQuality #DeveloperProductivity #AITools #StartupLife #TechLeadership

Join Agentic Engineering Club → t.me/villson_hub
AI has broken a lot of people's brains about software development.

Everyone thinks that because you can vibe code a prototype in a weekend, you can build a real product the same way. Sometimes you can. Most of the time, you can't.

Can you build something awesome? Yes! But it takes constant care and feeding to retain that "awesome" label.

Before we build anything, we ask these 3 questions:
* How many people is this actually for?
* When it breaks, who knows and how do we fix it?
* When requirements change next week, does the product bend or break?

That's it. Not 14 architecture diagrams. Not 6 weeks of sprint planning. Not a giant requirements doc nobody reads. Just real answers to the 3 things that actually matter.

AI has massively compressed the time from idea to working software. That part is real. What hasn't changed is that bad assumptions still get expensive fast.

The winners won't be the teams that build the fastest. They'll be the teams that put the right guardrails around speed.

Garbage in, garbage out holds true.
𝐓𝐡𝐞 𝐞𝐚𝐬𝐢𝐞𝐬𝐭 𝐭𝐢𝐦𝐞 𝐭𝐨 𝐛𝐮𝐢𝐥𝐝 𝐬𝐨𝐟𝐭𝐰𝐚𝐫𝐞 𝐢𝐬 𝐚𝐥𝐬𝐨 𝐭𝐡𝐞 𝐞𝐚𝐬𝐢𝐞𝐬𝐭 𝐭𝐢𝐦𝐞 𝐭𝐨 𝐛𝐮𝐢𝐥𝐝 𝐠𝐚𝐫𝐛𝐚𝐠𝐞.

Today, anyone can build. AI writes code, frameworks handle complexity, and tools automate most of the work. What once took weeks can now be done in hours.

Sounds like progress. But is it?

We are shipping faster than ever, but not necessarily building better systems. More features, more bugs, more complexity, and less thinking.

Because building is no longer the hard part. Thinking is.

Understanding the problem, designing the system, handling trade-offs, and planning for scale remain incredibly hard. And now, they matter even more.

AI made development faster. But it also made it easier to skip fundamentals. The barrier to entry has dropped. The bar for quality has not. In fact, it is higher than ever.

Because now, everyone can build. Only a few can build well.

𝐒𝐩𝐞𝐞𝐝 𝐛𝐮𝐢𝐥𝐝𝐬 𝐩𝐫𝐨𝐝𝐮𝐜𝐭𝐬. 𝐓𝐡𝐢𝐧𝐤𝐢𝐧𝐠 𝐛𝐮𝐢𝐥𝐝𝐬 𝐬𝐲𝐬𝐭𝐞𝐦𝐬.

#SoftwareEngineering #BuildInPublic #TechCareers #EngineeringMindset #CleanCode #FutureOfTech
AI is a world-class sprinter, but it has no sense of direction.

I recently watched an incredible tech talk by Matt Pocock, and it confirmed something I've felt after years in the industry: "AI won't save messy software. Only engineering fundamentals will."

We've all seen the demos. You type a prompt, and poof, an app appears. It feels like magic. But as Matt says: "AI doesn't make code cheap; it makes code faster to produce, but code is always an investment you have to live with."

If you ask an AI to build a house without a blueprint, it starts laying bricks immediately. You get a beautiful front door, but no hallway to the kitchen.

Here's how we keep the human in the driver's seat:

1. The "Grill Me" Phase
Don't just give orders. Tell the AI: "Interview me relentlessly until you understand every edge case of this project." Align the "why" before you touch the "how."

2. Speak a Shared Language (DDD)
If you and the AI don't agree on what a "User" vs. a "Subscriber" is, you're headed for a world of bugs. Use a shared dictionary of terms.

3. Don't Let AI "Outrun Its Headlights"
AI is overconfident. It will try to write 500 lines at once. Use Test-Driven Development (TDD) to force it to take small, verifiable steps.

4. Build "Deep" Modules
Think of an iPhone: a simple interface hiding massive power. Make your code simple on the outside so the AI (and your future self) doesn't get lost in the complexity.

The bottom line: AI is your tactical programmer (the grunt work). You are the strategic engineer (the architect). Your job isn't to be a code monkey anymore; it's to be a systems architect who treats code like a long-term investment, not a disposable commodity.

Are we losing the "art" of engineering to the speed of AI, or is this just the ultimate power tool? Let's discuss below.

#SoftwareEngineering #AI #WebDevelopment #Coding #MattPocock #CleanCode #TechLeadership
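The "deep module" idea (Ousterhout's term: a small interface hiding substantial functionality) can be sketched in a few lines. This is an illustrative example, not from the talk; the `Cache` class and its `loader` parameter are hypothetical names:

```python
# A "deep" module: tiny public surface, nontrivial hidden internals.
# Callers (human or AI) only ever see get(); loading, storage, and
# eviction stay inside where they cannot leak complexity outward.

class Cache:
    """Hypothetical read-through cache; the whole public API is get()."""

    def __init__(self, loader, max_size=128):
        self._loader = loader      # callable that fetches a missing value
        self._store = {}           # key -> cached value
        self._order = []           # insertion order, oldest first
        self._max_size = max_size

    def get(self, key):
        # Hidden complexity: hit check, load-on-miss, size-bounded eviction.
        if key in self._store:
            return self._store[key]
        value = self._loader(key)
        self._store[key] = value
        self._order.append(key)
        if len(self._order) > self._max_size:
            oldest = self._order.pop(0)
            del self._store[oldest]
        return value

cache = Cache(loader=lambda k: k.upper(), max_size=2)
print(cache.get("a"))  # loads via loader, caches -> "A"
print(cache.get("a"))  # served from cache -> "A"
```

The point for AI-assisted work: an agent asked to use this module only needs to understand one method, so it cannot get lost in (or quietly break) the internals.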
Applying advanced AI coding tools to client projects and personal side ventures now means working at the scale of substantial codebases, often 80,000 to 100,000 lines. The efficiency gains are remarkable: what once required six to eight months of a team's effort can now be achieved much faster, especially when integrated with robust test frameworks. Tools like Claude Code are not just improving workflows; they're redefining what's possible in software development. #AICoding #SoftwareDevelopment #TechInnovation #DeveloperTools #ClaudeAI
The real skill in 2026 is not coding; it's reviewing AI code.

We are entering a phase where writing code is becoming the easiest part of software development. AI tools can now:
• Generate full features in seconds
• Write boilerplate faster than any developer
• Suggest optimizations and fixes in real time

But here's the reality most people are missing: AI doesn't remove the need for developers; it increases the importance of judgment.

Because AI-generated code still needs someone to answer:
• Is this solution scalable or just "working for now"?
• Does this align with system architecture?
• What edge cases is AI missing?
• Is this secure, maintainable, and production-ready?

That's where the real skill shift is happening. We are moving from:
Writing code → Reviewing systems
Building logic → Validating intelligence
Implementing features → Making engineering decisions

In 2026 and beyond, the most valuable developers won't be the ones who type the fastest. They will be the ones who:
• Understand systems deeply
• Question AI output intelligently
• Spot risks before production does
• Turn AI suggestions into reliable software

AI is becoming the co-pilot. But the developer is still the one responsible for landing the plane.

#Dunify #Reviewthecode #ModernSkill #AIcopilot #Skill2026
Title: Why f(x) = y Fails in Software (And How to Fix It)

In pure mathematics, f(x) = y is a beautiful certainty. You provide an input, and you get a guaranteed output.

But in software product development, that formula looks more like:

f(a, b, c, ..., z) = ?

Where:
* a = Infrastructure stability
* b = Design consistency
* c = Developer experience
* ...z = The "Chaos Factor" (turnaround times, data latency, environment shifts)

When our "formula" has too many variables, we don't get a product; we get entropic debt. Testing becomes a nightmare, and glitches become inevitable.

## 🧠 The Missing Variable: "C" (Context)

The biggest challenge for both modern engineering and generative AI today is context. We often try to bake context directly into our logic, turning our code into "spaghetti variables." If we want software that works as expected, we need to stop building "Mega-Functions" and start moving toward math-like certainty.

## 🛠️ The Strategy: Turn Variables into Constants

To achieve quality at every layer, we must decompose the process:

1️⃣ Atomic Logic 🧩
Break the "Mega-Function" into pure functions. A function should do one thing. If you give it Input A, it must return Output B, every single time. No side effects. No surprises.

2️⃣ Freeze the Context ❄️
Treat context as a constant configuration, not a shifting variable. Whether it's through Docker for environments or strict design tokens for UI, "freezing" the surroundings allows the core logic to run in a controlled vacuum.

3️⃣ Layered Quality 🧱
By simplifying the formula at each layer, testing becomes easy. You aren't testing the whole world; you're testing a series of small, undeniable mathematical truths.

## 🚀 The New Standard: f(x) | C = y

The goal isn't to eliminate complexity; it's to isolate it.
* Fewer variables.
* More constants.
* Clean formulation.

When we reduce the "noise" of external factors, we don't just build software that works; we build software that is predictable.
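The "atomic logic + frozen context" strategy can be sketched as a pure function that takes an immutable configuration object instead of reading shifting globals. A minimal Python sketch, with hypothetical names (`PricingContext`, `total_price`):

```python
from dataclasses import dataclass

# "Freeze the context": an immutable config object replaces shifting
# globals, so the core logic becomes f(x) given a constant C -> y.

@dataclass(frozen=True)  # frozen=True makes instances immutable
class PricingContext:
    tax_rate: float
    currency: str

def total_price(net: float, ctx: PricingContext) -> float:
    # Pure function: the same (net, ctx) pair always yields the same
    # result. No side effects, no hidden state, no surprises.
    return round(net * (1 + ctx.tax_rate), 2)

ctx = PricingContext(tax_rate=0.2, currency="EUR")
print(total_price(100.0, ctx))  # 120.0, every single time
```

Because the context is frozen and the logic is pure, a test only needs to enumerate inputs; there is no "whole world" to set up, which is exactly the layered-quality claim above.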
#SoftwareEngineering #ProductDevelopment #AIContext #CleanCode #TechLeadership #Mathematics
300,000 lines of code. 10 days. Then I deleted all of it.

I do not know how to code. I have never done software development. But with Claude Code, I generated nearly 300,000 lines in a 10-day marathon, only to realize the most basic pipeline was broken, and every tiny change cost a fortune in tokens. It was a mountain of code that looked impressive but created zero value.

So I started over. This time, I did not rush to implementation. I spent hours talking through the core problem, writing the MRD and PRD, and designing the architecture before a single line was written. I tested core functions at around 5,000 lines. I questioned every change against the design principles I had written down.

Forty-eight hours later, the system worked end-to-end. With less than 10,000 lines of code.

Here is what I learned:

Every line of code is a liability, not an asset. In the vibe coding era, an AI can build a plausible-looking mountain of code in days. But the larger the codebase, the harder and more expensive it becomes for the AI to maintain. Without discipline, you are not building assets; you are piling up technical debt.

Vibe coding is not easy. It is not relaxing while the AI works. It is like managing five developers simultaneously: constant context switching, judgment calls, and course corrections. After six hours, your brain is fried.

And most importantly: the AI cannot tell you what truly matters. That question is yours to answer before you start.

Read the full article: https://lnkd.in/gBSkxg7K

#VibeCoding #AI #SoftwareEngineering #ProductManagement #ClaudeCode
Developers feel 20% more productive with AI-generated code. Data shows they are actually 19% slower.

That 39-point gap between perception and reality is one of the most important figures in software development today. By 2026, 51% of all code on GitHub will be AI-assisted. We are releasing features faster, but human review times for pull requests have tripled. SD Times calls this the "2026 Quality Collapse," and I think that fits well.

Here's what's happening: AI writes code quickly, but humans take their time to review it. Keeping pace only works if teams trust the code without fully understanding it. Most teams do, because slowing down would erase the efficiency gains. So they commit the code. It works in testing and staging, then fails in production 60 days later, because nobody on the team fully understood the logic behind it.

One developer shared that he had to rewrite 60% of the code produced by an AI agent on a recent project. Not because the code was wrong, but because it passed tests while violating long-term design principles that only showed up under heavy use.

The role of the senior developer has changed: no longer the main author, but a "guardrail manager." Research shows that 48% of AI-generated code contains security vulnerabilities. By 2027, up to 30% of new security problems may come from AI-generated logic that hasn't been thoroughly reviewed.

We promised increased speed, and we delivered. Nobody told the codebase.

Three key questions for CTOs and engineering leads:
1. How much of your current codebase was generated by AI and never reviewed by someone who understood it?
2. Do your developers feel productive, or are they truly productive?
3. When the technical debt comes due, who in your organization will have enough context to fix it?

Link to article in comments. If you want to hear more news, click follow.

#SoftwareDevelopment #AI #CTO #EngineeringLeadership #CodeQuality #AIinDev