We are starting to lose the ability to understand the systems we build. The industry is optimizing for delivery speed by accelerating code generation with AI-assisted tools. Developers are no longer the primary producers of code; increasingly, they are validating output generated by something else. A few patterns are starting to show up:

→ Logic sprawl. Code expands faster than structure, and the system becomes harder to reason about.
→ Ownership loss. Code exists without clear authorship or intent, so the mental model fragments.
→ Debug opacity. Failures require reconstruction, not diagnosis. When something breaks or needs to change, more time is spent figuring out what the code is doing than actually fixing it.

The question is no longer how fast code can be generated. It's whether we are creating systems faster than they can be understood and maintained. #ArtificialIntelligence #SoftwareEngineering #EnterpriseArchitecture #AI #GenAI #SystemDesign #TechLeadership #DigitalTransformation #EngineeringManagement
Losing Control of Systems Built with AI-Generated Code
-
Developers using AI took 19% longer to complete tasks. They believed they were 20% faster. That's from a controlled study by METR. Not a survey. A randomized trial with experienced engineers on their own codebases. They were slower. They felt faster. They couldn't tell the difference. I see this with my own team. Pull requests got bigger. Review cycles got longer. The bug rate didn't drop. The time didn't disappear; it moved from writing to debugging. Here's what I think is actually happening: AI didn't break engineering. It exposed where engineering was already weak. My engineers follow specs without asking questions first. They add code when removing code is the answer. They focus on execution and the how, which is exactly where AI excels. The what, the why, boundaries, systems thinking? Those layers were always getting skipped. AI just made it possible to skip them at twice the speed. 4 things I'm checking with my team this week:

→ The last 10 AI-assisted PRs. How many bugs were caught in review vs. production?
→ Can engineers explain the architecture decision behind their last feature, not just the code?
→ Code churn over the last 90 days. Did it go up after AI adoption? (One rough way to measure this is sketched below.)
→ Does the current sprint have design docs, or is AI making decisions humans should be making?

I wrote the full analysis in this week's Builds That Last: https://lnkd.in/gvAakHU9 What did your team's code quality actually look like after AI adoption? Not what you expected. What you measured. #AIEngineering #SoftwareEngineering #TechLeadership #BuildsThatLast --- Enjoy this? Repost it to your network and follow me for more. Join Builds That Last on Substack for practical insights on foundation-first engineering.
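A rough sketch of the churn check above, since "code churn" can be measured several ways. This is my own illustration, not from the post: it counts lines added plus deleted over the last 90 days from git history.

```python
# Rough churn measure: total lines added + deleted in the last 90 days.
# One of several reasonable definitions of "churn"; adjust to taste.
import subprocess

def churn_last_90_days(repo: str = ".") -> int:
    # --numstat emits "added<TAB>deleted<TAB>path" per file; --format=
    # suppresses the commit headers so only numstat lines remain.
    out = subprocess.run(
        ["git", "-C", repo, "log", "--since=90.days", "--numstat", "--format="],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        parts = line.split("\t")
        # Binary files show "-" instead of numbers; skip them.
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            total += int(parts[0]) + int(parts[1])
    return total

print(churn_last_90_days())
```

Run it before and after your AI adoption date (e.g. by adding `--until=`) to compare the two windows.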
-
Building a lot of software doesn't just create volume. It exposes patterns. Over time, the same architectural decisions show up in different forms: trade-offs around coupling, data flow, scaling, and system boundaries. That's where learning actually happens. Not by following patterns, but by understanding why they exist and when they break. AI accelerates this, but it doesn't replace it. It can generate solutions quickly, but without clarity on the underlying trade-offs, you just move faster toward fragile systems. The constraint isn't speed. It's judgement. And that's still what defines good engineering. #SoftwareArchitecture #SystemsThinking #AI
-
I was working on bug reports recently and ran into something annoying. Not a hard problem… just a repetitive one. Same bug. Different people. Different descriptions. And suddenly you're not debugging anymore; you're just figuring out "have I already seen this before?" That got me thinking: this is exactly the kind of thing AI should be good at. So I tried building a small system around it. The idea was simple: instead of jumping straight to "AI fixing bugs", start with something more fundamental: 👉 can we automatically detect duplicate bug reports? What made it interesting was this: if you only look at the text, you miss context. If you only look at metadata, you miss meaning. So I combined both:

• Text (summary + description) → to understand what the bug is 📝
• Metadata (priority, status, etc.) → to understand how it behaves 🏷️

Nothing fancy model-wise, just a Logistic Regression on top of these features. The results were actually pretty satisfying. Text alone worked decently. Metadata alone… not so much. But together, noticeably better 📈 That was the interesting part for me. It made me realize something small but important: a lot of progress in these systems doesn't come from using bigger models; it comes from combining the right signals. This is obviously just one piece of a much bigger idea: autonomous debugging. But even this small step:

• reduces noise
• saves time
• and makes everything downstream easier ⚙️

Still exploring where this can go next 🚀 #AI #MachineLearning #SoftwareEngineering #DataScience #Debugging #ArtificialIntelligence #Tech #LearningInPublic
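A hedged sketch of what this combination might look like in scikit-learn. The column names, the pair construction (two reports joined with a `[SEP]` marker), and the toy data are my assumptions for illustration, not the author's actual setup; the point is only that text and metadata features feed one Logistic Regression.

```python
# Duplicate-detection sketch: TF-IDF text features + one-hot metadata,
# classified with Logistic Regression. All names/data are hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Each row is a candidate pair of bug reports flattened into one record.
df = pd.DataFrame({
    "text": ["app crashes on login [SEP] login crash on startup",
             "app crashes on login [SEP] dark mode colors look wrong"],
    "priority": ["high", "high"],
    "status": ["open", "open"],
    "is_duplicate": [1, 0],
})

features = ColumnTransformer([
    # What the bug is: bag-of-words over the paired summaries/descriptions.
    ("text", TfidfVectorizer(), "text"),
    # How it behaves: categorical metadata, one-hot encoded.
    ("meta", OneHotEncoder(handle_unknown="ignore"), ["priority", "status"]),
])

model = Pipeline([("features", features),
                  ("clf", LogisticRegression(max_iter=1000))])
model.fit(df[["text", "priority", "status"]], df["is_duplicate"])
```

With a real labeled set, `model.predict_proba` gives a duplicate score you can threshold to triage incoming reports.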
-
After a couple of discussions with experts in the industry, I'm starting to feel that engineers are abusing AI, which puts a high cost on the company's shoulders, and I'm not really sure about the quality unless I see a very good human review process making sure everything is in place. Using AI tools is no longer optional; you have to use them to increase your productivity! BUT if you feel that using AI is making you too lazy to even review or modify a single line of code, I believe you need to think again, because you are wasting your company's resources and losing your focus and control over the product! #Thoughts #AI #Software #Development
-
Just completed #AndrewNg's #AgenticAI course on #DeepLearning.AI. It sharpened something I've observed after years of working with LLMs: the challenge isn't getting models to work. It's knowing when they fail, and why. What separates good AI engineers from great ones isn't just understanding patterns. It's evaluation discipline. Anyone can wire up a reflection loop or chain multiple agents together. What's far less common is running a rigorous evaluation cycle, systematically reducing failure modes, and shipping systems that hold up under real-world conditions. Core patterns behind modern agentic systems:

→ #Reflection: self-critique loops for iterative refinement when first-pass outputs fall short (a minimal sketch follows this post).
→ #ToolUse: integrating APIs, code execution, and data sources to move from demos to real utility.
→ #Planning: dynamic task decomposition instead of fixed pipelines, improving robustness in open-ended scenarios.
→ #MultiAgent systems: coordinating specialized agents for parallel workflows; success depends heavily on clean handoffs and error boundaries.
→ #ReAct (Reason + Act): explicit reasoning before action, followed by observation and iteration. This improves interpretability and creates a natural audit trail.
→ #Adaptive workflows: runtime decision-making on which agents to invoke, in what order, and whether to iterate or terminate, enabling systems to handle diverse request types efficiently.

The last two go beyond the course, explored through deeper research and practical implementation. Key takeaways I'm applying:

1. Build → Evaluate → Analyze is the real development loop. Great systems are iterated into reliability.
2. At scale, reducing error rates by 1% often matters more than adding new features.
3. Studying other engineers' prompts is one of the fastest ways to improve; prompting is a craft.

If you're building AI systems without a disciplined evaluation process, there's a significant gap between current performance and what's achievable. For working implementations of all six patterns, including ReAct and Adaptive Workflows: https://lnkd.in/ghnygPDN #AgenticAI #AIEngineering #GenerativeAI #Agents #DeepLearningAI #AIAgents
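To make the Reflection pattern concrete, here is a minimal sketch, assuming `llm` is a hypothetical callable that takes a prompt string and returns text; it is not code from the course, just the generate / critique / revise shape it describes.

```python
# Reflection pattern sketch: draft, self-critique, revise, repeat.
from typing import Callable

def reflect_loop(llm: Callable[[str], str], task: str, max_rounds: int = 3) -> str:
    draft = llm(f"Complete this task:\n{task}")
    for _ in range(max_rounds):
        # Self-critique: ask the model to evaluate its own output.
        critique = llm(f"Critique this answer to the task '{task}'. "
                       f"Reply APPROVED if it needs no changes.\n\n{draft}")
        if "APPROVED" in critique:
            break  # the evaluation step says the draft holds up
        # Refinement: rewrite the draft to address the critique.
        draft = llm(f"Task: {task}\nDraft: {draft}\nCritique: {critique}\n"
                    "Rewrite the draft to address the critique.")
    return draft
```

The point of the post stands even in this toy form: the loop is trivial to wire up; knowing whether the critique step actually reduces failure modes is the hard, evaluation-driven part.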
-
He built it in 5 minutes. It showed. A client came to me proud of a site he threw together with AI. I told him it was garbage. He asked why. I explained. Two web experts in the room agreed with me. He runs every question through AI. Every decision. Every doubt. The tool became the authority. 19 years of engineering taught me one thing AI can't replicate: knowing when the output is wrong. AI doesn't know your users. Doesn't know your codebase. Doesn't know what "good" looks like in context. It generates. You judge. And when prod goes down, when the client loses money, when the layout breaks in 14 different ways, AI doesn't take the call. You do. The people who will always have value aren't the ones who use AI. They're the ones who can tell when AI is confidently wrong. Have you been in that room? I'd like to hear about it. #AI #SoftwareEngineering #TechLeadership #AITools #WebDevelopment #CriticalThinking #Engineering
-
The real power move in using AI? Building a system around the AI. This Claude Code Workflow Cheatsheet nails it:

→ Persistent memory with `CLAUDE.md` (an invented example follows this post)
→ Reusable skills for code review, testing, deployment
→ Hooks for automating actions
→ Layered architecture for context control
→ Daily workflows that reduce chaos

Translation: instead of prompting from scratch every day, you train your environment to think with you. That's where 10x leverage happens. The best developers in 2026 won't just write code faster. They'll build AI operating systems for their projects. If you're still using AI one prompt at a time, you're already behind. What's one workflow you'd automate first? 👇 #AIEngineering #ClaudeCode #BuildInPublic #DeveloperTools
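For readers who haven't seen one: `CLAUDE.md` is a plain Markdown file that Claude Code loads into context at the start of a session, which is what makes the "persistent memory" item above work. The contents below are invented for illustration, not taken from the cheatsheet.

```markdown
# CLAUDE.md (hypothetical example; every detail here is made up)

## Project conventions
- TypeScript strict mode; avoid `any`
- Tests live next to source files as `*.test.ts`

## Commands
- `npm test` runs the full suite; run it before proposing a commit

## Things to remember
- The payments module is legacy; prefer small, reviewed changes there
```

Because the file travels with the repo, conventions stated once apply to every future session instead of being re-prompted each day.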
-
Your test suite might be more valuable than your actual code. Here's a perspective shift that's been on my mind: if you lost your entire codebase tomorrow but retained comprehensive test coverage, an AI agent could theoretically reverse engineer and recreate the code. Why? Because tests provide a perfect feedback loop:

→ Tests define expected behavior
→ AI writes code to satisfy those expectations
→ Tests validate the output
→ Iterate until all tests pass

(A toy sketch of this loop follows below.) High code coverage = high repeatability of rebuilding the system. This fundamentally changes how we should think about:

• Technical debt prioritization
• Disaster recovery planning
• The ROI of comprehensive testing
• Documentation through executable specifications

In the age of AI-assisted development, your test suite isn't just quality assurance; it's an executable blueprint of your entire system. Food for thought: are we investing enough in test coverage as a form of institutional knowledge preservation? #SoftwareEngineering #AI #TestDrivenDevelopment
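A toy sketch of that rebuild loop, under loud assumptions: `llm` is a hypothetical model client, the implementation lives in a single `impl.py`, and the retained tests run under pytest. None of this is from the post; it just shows the tests-as-feedback-loop mechanically.

```python
# Rebuild-from-tests sketch: regenerate code until the retained suite passes.
import subprocess

def rebuild_from_tests(llm, max_attempts: int = 5) -> bool:
    test_report = ""
    for _ in range(max_attempts):
        # Tests define expected behavior; failures steer the next attempt.
        code = llm("Write impl.py so that the tests in tests/ pass.\n"
                   f"Previous failures:\n{test_report}")
        with open("impl.py", "w") as f:
            f.write(code)
        # Tests validate the output.
        result = subprocess.run(["pytest", "tests/", "-q"],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return True              # behavior recreated from the suite alone
        test_report = result.stdout  # iterate until all tests pass
    return False
```

Note what the sketch quietly depends on: the loop is only as trustworthy as the coverage, which is exactly the post's argument for treating tests as institutional knowledge.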
-
AI Agents: Built the Way Humans Work! When designing AI agents, the best reference isn't software architecture. It's how humans get things done. A person runs a constant loop: observe → think → act → remember → improve. That's exactly how agents are engineered around a Large Language Model.

- Observe → inputs from text, images, APIs, data
- Think → the LLM reasons over context
- Act → tools, code, API calls
- Remember → memory store, history, embeddings
- Improve → feedback, evaluation, retries
- Intent → goals defined in the system prompt

An LLM alone is just a brain. Agency appears when we recreate the human working loop around it. Don't ask, "What tools should this agent have?" Ask, "What would a human need to do this task?" Design that, and you've designed your agent. Intelligence lives in the model. Agency lives in the system around it. (A toy sketch of the loop follows below.) #AI #Agents #LLM #AgentDesign #SystemsThinking #SystemDesign #AIInfrastructure #AIEngineering #GenerativeAI #MLSystems
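A toy sketch of that loop in code, with `llm` and `tools` as hypothetical placeholders rather than any specific framework; the comments map each line back to the human loop above.

```python
# Agent loop sketch: observe → think → act → remember → improve.
def agent_loop(llm, tools: dict, goal: str, max_steps: int = 10):
    memory = []                                   # Remember: running history
    observation = f"Goal: {goal}"                 # Intent: goal seeds the loop
    for _ in range(max_steps):
        # Think: the LLM reasons over the latest observation plus memory.
        thought = llm(f"{observation}\nHistory: {memory}\n"
                      "Reply 'DONE: <answer>' or '<tool_name> <input>'.")
        memory.append(thought)
        if thought.startswith("DONE:"):
            return thought[len("DONE:"):].strip()
        # Act: dispatch to a tool; its output becomes the next observation.
        name, _, arg = thought.partition(" ")
        tool = tools.get(name)
        observation = tool(arg) if tool else f"Unknown tool: {name}"
        # Improve: feeding results (including errors) back is the feedback step.
        memory.append(f"Observation: {observation}")
    return None
```

Even at toy scale the post's closing line shows up in the code: the intelligence is inside `llm`, but the agency is entirely in the loop wrapped around it.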
-
Because we are at an early level of maturity with AI, it would be wise to formulate the principles we use to evaluate code for maintainability before we release it. While the evaluation will take additional time, it could save the downtime that Edward has highlighted.