Shipping Code is a System Design Problem

Your code didn't fail in production. Your process did.

Everyone talks about writing clean code. Almost no one talks about how that code actually ships. And that's where most systems break.

In theory, the pipeline looks perfect:
→ Plan the rollout
→ Write the code and add tests
→ Build the artifact
→ Run every test, from unit to e2e
→ Deploy to production and relax

Looks solid, right? Then reality hits.
→ AI writes a big chunk of your code.
→ One tiny change breaks production for half your users.
→ And absolutely no one knows which PR started the fire 💀

Then the real panic starts. Hours of debugging. Slower releases. Everyone staring at their screens under maximum production stress.

That's when it hit me: shipping code is not a developer problem. It is a system design problem. Because if your pipeline is chaos... even perfect code doesn't stand a chance.

#SystemDesign #DevOps #SoftwareEngineering #BackendEngineering #Microservices #TechRealities
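When "no one knows which PR started the fire," git bisect turns the hunt into a binary search over history instead of an argument. A minimal sketch, assuming you have a last-known-good tag and a script that exits non-zero when the bug reproduces (the tag name and script path here are hypothetical placeholders):

```bash
git bisect start
git bisect bad HEAD              # the current production commit is broken
git bisect good v1.4.2           # hypothetical: last release known to work
# Automatically test each candidate commit:
# exit 0 = good, any non-zero (except 125, which means skip) = bad
git bisect run ./scripts/repro.sh
git bisect reset                 # return to where you started
```

With a thousand commits between good and bad, that is roughly ten automated checkouts rather than an afternoon of guesswork.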
More Relevant Posts
Our Docker builds took 70 minutes and we thought that was normal.

We had a recurring issue in our deployment process. Everyone felt it. No one wanted to deal with it.

The flow was simple:
A developer opens a PR
It gets merged
A Docker image builds
Then it gets deployed

Sounds normal. Until you hit the wait. The image was ~4 GB. Build time was 70+ minutes. Every. Single. Time.

That meant:
Testers sitting idle, waiting
Feedback arriving late
Context already lost by the time results came back
Developers switching tasks just to stay productive

Momentum kept breaking.
You fix something → wait an hour
You improve something → wait again
You just want feedback → wait

It adds up faster than you think.

Then something changed. One of our teammates decided to dig into the image.

After optimization:
~700 MB
~8 minutes

Same workflow. Completely different pace. Feedback came back while the work was still fresh. Iteration stopped feeling like friction. Waiting stopped being part of the job.

This wasn't just about reducing image size. It was about removing a constant, invisible drag on the team. Because 70+ minutes per merge, across features, fixes, and improvements, isn't just time lost. It's momentum lost. And most teams don't notice it until it's gone.

In our case, the fix wasn't easy to figure out, but AI helped make it approachable enough to execute.

Pro tip: use multi-stage builds and ship only what's actually needed.

If you've seen something similar, I'm curious how others are handling it.

#Docker #DockerImageOptimization #CICD #SoftwareEngineering
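The post doesn't share the actual fix, so here is a hedged sketch of the multi-stage pattern it recommends, for a hypothetical Python service (base images, app.py, and requirements.txt are illustrative): the builder stage keeps the pip cache and build tools, and the final image ships only the installed packages and the source.

```dockerfile
# --- build stage: has pip, caches, and any build tooling ---
FROM python:3.12-slim AS builder
WORKDIR /app
COPY requirements.txt .
# Install into an isolated prefix so we can copy just the result
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# --- final stage: only the runtime plus what the builder produced ---
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY app.py .
CMD ["python", "app.py"]
```

A .dockerignore that excludes .git, tests, and local artifacts often cuts as much from the build context as the multi-stage split cuts from the image.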
One of my AI agents turned 80 minutes of release work into 30 seconds.

Every release I did required me to:
1. Check out main and git pull in the service repo
2. Check the latest tag and create the new version
3. Push the tag
4. Wait for the CI to finish (~6 min)
5. Verify the container version in the cloud provider
6. Go to the infra repo, update the version in the config files, and open a PR
7. Wait for the CI to pass (~1 min)
8. Merge the changes
9. Wait for the main branch CI (~4 min)
10. Trigger the final production deploy (~4 min)

That's 10 steps. ~20 minutes per release. Multiply that by 4 repositories, and that's 80 minutes of my day, gone.

So I built an agentic workflow with Claude Code + Obsidian to handle it all. Obsidian is the brain: it holds the agent instructions, a release template the agent fills in as it goes, and a dated history of every release for full traceability.

I started with Claude Code's "Accept Edits" mode, reviewing every step, catching gaps, and refining the instructions in real time. Only after validating it multiple times did I switch to "Bypass Permissions" mode, letting the agent run 95% autonomously. The other 5%? Sensitive actions like merging PRs, where the agent pauses and waits for my approval.

I also use the /loop command to poll CI pipelines every 2 minutes; the agent monitors builds and lets me know the moment they finish. No more tab-switching to check if the CI passed.

The best part? The agent runs multiple releases in parallel, spinning up sub-agents, one per repo. Instead of 80 min of sequential work: 30 seconds of input, ~15 min of autonomous execution.

The best agentic workflows aren't built in one shot. They're refined through iteration: start supervised, validate, then gradually increase autonomy.

What repetitive process are you still doing manually that an agent could handle?

#AI #ClaudeCode #DevOps #Automation #AgenticWorkflows #SoftwareEngineering #DeveloperProductivity #Obsidian
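For a sense of scale, the first three manual steps are exactly the kind of thing an agent ends up scripting. A minimal sketch in Python, not the author's actual setup: the repo path, the v-prefixed semver tag scheme, and the patch-bump policy are all assumptions.

```python
import subprocess

def run(*cmd: str, cwd: str) -> str:
    """Run a command in the repo and return its stdout, raising on failure."""
    return subprocess.run(cmd, cwd=cwd, check=True,
                          capture_output=True, text=True).stdout.strip()

def tag_next_release(repo: str) -> str:
    # Step 1: sync main
    run("git", "checkout", "main", cwd=repo)
    run("git", "pull", "--ff-only", cwd=repo)
    # Step 2: find the latest tag and bump the patch (assumes "vX.Y.Z" tags)
    latest = run("git", "describe", "--tags", "--abbrev=0", cwd=repo)
    major, minor, patch = latest.lstrip("v").split(".")
    new_tag = f"v{major}.{minor}.{int(patch) + 1}"
    # Step 3: create and push the tag, which kicks off CI
    run("git", "tag", new_tag, cwd=repo)
    run("git", "push", "origin", new_tag, cwd=repo)
    return new_tag

if __name__ == "__main__":
    print(tag_next_release("/path/to/service-repo"))  # hypothetical path
```

Everything after that (waiting on CI, opening the infra PR, pausing before the merge) is where the agent's polling and ask-for-approval behavior earns its keep.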
Automation Without Understanding: Scaling Chaos Efficiently

Every tech company eventually develops the same obsession: automate everything. Standardize everything.

Sounds great. Until you ask one simple question: "What exactly are we automating?"

Because more often than not... nobody really knows.

In the age of KPIs and velocity, there's no time to stop and ask uncomfortable questions. You're measured on output, not understanding. So the system does what it's designed to do: it ships.

I once watched a company fail three times at "standardizing CI/CD." Three attempts. Same result. Always abandoned halfway through.

Not because the tools were wrong. Not because the engineers were incompetent. Because there was no single process to automate.

Each team lived in its own universe:
– some built artifacts after merge, others before
– some used main + tags, others staging/prod branches
– Java + Maven, Java + Gradle, Python... take your pick
– testing? validation? let's not even go there

And yet, the plan was to "unify everything."

💡 Here's the uncomfortable truth: you can't automate chaos. Well... you can. But then you don't get efficiency. You get faster chaos.

Real progress didn't start with better pipelines. It started with stepping back and asking: "What is the process we actually want?"

Because automation is not a strategy. It's an amplifier. If the system makes sense, it scales. If it doesn't, it collapses faster.

❓ Curious: have you seen automation fix a broken process, or just make it fail more efficiently?

#EngineeringManagement #DevOps #Automation #SystemsThinking #TechLeadership
CI is often treated as something optional. Something you add when the project grows, when the team expands, or when things start breaking often enough.

But even in small projects, changes accumulate faster than expected. A quick fix here, a dependency update there, a refactor that "should not affect anything." Without CI, every change depends on memory and manual checks. That does not scale, even for a single developer.

The real problem is not broken builds. It is silent breakage: code that compiles, deploys, and only later reveals that something no longer works as intended. These issues are expensive because they are discovered late and disconnected from the original change.

CI moves that feedback closer to where it belongs. You push code, and within minutes you know if something fundamental is wrong. Tests fail, linters complain, builds break. The signal is immediate, and the fix is still fresh in your head.

There is also a discipline effect. When CI exists, people naturally adapt: smaller commits, clearer boundaries, fewer shortcuts that rely on "I'll fix it later." The system enforces consistency better than any written guideline.

If you feel like you "don't need CI," it is usually not about the project. It is more often about avoiding the upfront effort to learn and set it up, or simply not valuing how much time is lost on repetitive manual checks.

CI is not as flashy as AI tooling, but it removes a large amount of boring, repeatable work that developers keep doing by hand. That is the core point: CI is about time optimization. Every manual test run, every forgotten step, every avoidable regression is time you are choosing to spend again and again.

You do not need a complex setup. A basic pipeline that installs dependencies, runs linters, and executes tests is already enough to catch a large class of problems. The goal is not perfection, but early signal and less repetition.

Skipping CI is effectively choosing to debug in production, just later and at a higher cost.

#softwaredevelopment #ci #devops #engineering #programming #productivity #automation
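To make "basic pipeline" concrete, a minimal GitHub Actions workflow for a Python project might look like the sketch below, assuming the repo's requirements.txt includes ruff and pytest; swap in whatever linter and test runner your stack uses.

```yaml
# .github/workflows/ci.yml
name: ci
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      # Install dependencies (assumed to include ruff and pytest)
      - run: pip install -r requirements.txt
      # Lint: cheap, fast, catches a surprising amount
      - run: ruff check .
      # Test: the actual early signal
      - run: pytest
```

Three steps, one file, and every push gets the same checks you would otherwise have to remember to run by hand.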
Developers spend 60% of their time not writing code.

Setting up environments. Configuring CI/CD. Writing boilerplate. Wiring APIs. Debugging deployment misconfigs. Then finally: actual product. Actual value.

Vibe coding attacks that 60%. Describe the outcome. Scaffolding is generated. Schema is structured. APIs are wired. Deployment happens in the same workflow. No separate DevOps layer. No env variables breaking prod at 2am. No three-day setup before a single feature ships.

And here's where most developers get it wrong. This isn't low-code with a chat interface. The output is real application logic: structured, readable, extendable. Which means your ability to review it, catch what the system missed, and iterate fast is now the most valuable skill in the room.

The bottleneck shifted. Used to be: can you write it fast enough? Now it's: can you describe it precisely enough?

Vibe coding doesn't make developers obsolete. It makes slow processes obsolete.

#VibeCoding #AIForDevelopers #FutureOfCoding #SoftwareEngineering #AIDevelopment #NoCode #LowCode #DevTools #CodingLife #Automation #AIWorkflow #CodeGeneration #EngineeringCulture #ModernDevelopment
"It works on my machine." That phrase has cost companies countless hours of debugging, broken deploys, and frustrated clients worldwide. Every developer has said it. But in real systems — with teams, staging environments, and production — that mindset is expensive. The problem was never the code. It was the environment. When something breaks in production but runs fine locally, it's usually one of these: → Different OS, dependencies, or configs across machines → Missing or poorly defined environment variables → Hardcoded paths and credentials → Library versions no one is tracking → Manual setup that "only Mark knows how to do" The code is right. The system is broken. What separates mature engineering teams: Containerization — Docker ensures dev, staging, and production run the same environment. No surprises. Infrastructure as Code — No "mental setup" that disappears when someone leaves the team. Version control for everything — Not just code. Configs, variables, dependencies. Lock files and dependency management — Controlled updates, not accidental ones. CI/CD with automated validation — The pipeline catches the problem before your users do. Strong engineers don't just write code. They build systems that behave predictably — on any machine, in any environment, at any time. Because in the real world: if it only works on your machine, it doesn't work. Which of these practices is your team still missing? #SoftwareEngineering #DevOps #Backend #Docker #CICD #Engineering #Programming
Claude Code: what your repo structure says about your workflow

We talk a lot about the limits of AI agents. But rarely about what we actually give them to work with.

An agent like Claude Code has no memory between sessions. It starts from scratch every time. That's not a flaw; it's its architecture. And it fundamentally changes how we should integrate it into a project. Give it structured context, and you get consistent outputs. Give it nothing, and it does its best with what it has, and "its best" has real limits.

That's where the .claude/ directory makes all the difference:

CLAUDE.md lays the foundation: stack, conventions, architecture. The onboarding document you should have written anyway, for any new collaborator, human or not.

rules/ lets you modularize guidelines by domain (style, testing, API design). Easier to maintain, easier to evolve alongside your codebase.

skills/ goes one step further: instead of loading all context upfront, skills are auto-triggered based on task context. Only what's needed, when it's needed, keeping the context window lean and the outputs relevant.

hooks/ automates checks: linters, tests, guardrails. Not to control the AI, but to secure the workflow, the same way you would with any other tool.

agents/ is where it gets interesting. Specialized sub-agents (security auditor, PR reviewer, deployment checker), each with their own isolated context.

Two levels of parallelization:
→ Within a single prompt: Claude can spawn and orchestrate multiple sub-agents simultaneously, each handling a different task in the same session.
→ Across Git worktrees: multiple agents running on separate branches at the same time, without stepping on each other.

This changes the economics entirely. It's no longer one AI doing one thing at a time; it's a coordinated, parallel workflow where specialization and isolation are built into the repo structure itself.

The real question isn't "is AI reliable?" It's "have I built the environment where it can be?"

These files, versioned in Git, evolve with your architecture. Neglect them, and they become a liability. Maintain them, and they become leverage.

How are you handling context and parallelization with your agents today?

#ClaudeCode #SoftwareEngineering #DevOps #AI #CleanCode #GitWorktree
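Laid out as a tree, the structure the post describes looks roughly like this. Treat it as a sketch of the post's layout, not official Claude Code documentation: the individual file names are illustrative, and exact locations and triggering behavior vary by Claude Code version, so verify against your setup.

```text
repo/
├── CLAUDE.md                   # stack, conventions, architecture
└── .claude/
    ├── rules/
    │   ├── style.md            # formatting and naming guidelines
    │   ├── testing.md          # how tests are structured and run
    │   └── api-design.md       # endpoint and schema conventions
    ├── skills/
    │   └── db-migrations/
    │       └── SKILL.md        # loaded only when the task matches
    ├── hooks/                  # lint/test guardrails wired to agent actions
    └── agents/
        ├── security-auditor.md
        ├── pr-reviewer.md
        └── deployment-checker.md
```

The point of the tree is the separation: always-on context at the top, opt-in context underneath, and isolated specialist contexts at the bottom.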
Every team using Claude Code has a workflow. The question is whether it's intentional or accidental.

Coding agents on enterprise codebases need a contract. Here it is 👇

───────────────────
🔵 PLAN
Define the feature, scope & constraints before the agent writes a single line. The agent has zero business context. You have all of it. This step is yours. Not the agent's.

🟡 DOCUMENT
Write a CLAUDE.md: codebase conventions, architecture patterns, what to avoid. This is your agent's brain. Skip it and you're flying blind every single time.

⚡ CODE
Let it generate. Boilerplate, scaffolding, repetitive logic: it shines here. Complex business logic & edge cases? It lies. Convincingly.

🔴 REVIEW
Treat every output like a PR from a brilliant-but-junior dev. Read it. Question it. Simplify it. Your name is on the commit. Act like it.
───────────────────
🚨 4 signs your agent workflow isn't real:
❌ No CLAUDE.md → the agent guesses your architecture every time
❌ Accepting the first output → plausible ≠ correct
❌ Skipping review → mistakes compound silently across 100k+ lines
❌ Waterfall thinking → generate everything, then try to fix it all at once
───────────────────
🏆 The rule:
🔵 Plan → owns the GOAL
🟡 Document → owns the CONTEXT
⚡ Code → owns the SPEED
🔴 Review → owns the QUALITY

Let the contracts blur, and your codebase becomes an unmaintainable mess that moves fast and breaks everything.

The agent didn't build it. You did. 🎯

#ClaudeCode #EnterpriseEngineering #CodingAgents #AI
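What actually goes in that CLAUDE.md? There is no required schema; a minimal skeleton along the lines the post suggests might look like this (every heading and detail below is illustrative, not a Claude Code requirement):

```markdown
# CLAUDE.md

## Stack
Python 3.12, FastAPI, PostgreSQL, pytest. (Illustrative; list yours.)

## Conventions
- Services live in src/<domain>/, one module per aggregate.
- All public functions have type hints; run `ruff check .` before committing.

## Architecture
- HTTP handlers stay thin; business logic lives in the service layer.
- No direct DB access outside the repository modules.

## What to avoid
- Don't add new dependencies without asking.
- Don't touch the payments module; its edge cases are documented separately.
```

Short, specific, and versioned with the code: the test is whether a new human hire could onboard from the same file.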
The old way vs. the new way of building software

Old way
• PM scopes it, designer mocks it, backend builds the API, frontend wires the UI
• Engineering structured like a relay race with constant handoffs
• Teams waiting endlessly: for scope sign-off, design handoff, API readiness, reviews, deploys
• Five people passing work down a dependency chain, blocked tickets piling up in Jira

New way
• One product engineer holds the full loop with AI-powered tooling
• Define scope, design the interface, build both sides of the stack, stand up infra, ship, and iterate
• Full ownership, no queues, no handoff bottlenecks
• Deep specialists still matter for scale, security, and systems design, but they're not the default anymore

The shift? One engineer with ownership and AI tools > five people in a dependency chain.

The bottleneck was never talent. It was fragmentation. AI didn't just make coding faster; it made the old relay race model expensive.

How is your team structured today? Still passing the baton, or moving toward full-stack ownership?
A month ago I read "Modern Software Engineering" by Dave Farley. And it really hit the spot. 📖

I'm not a big TDD fan, I'll be honest about that. But this book is not really about TDD. It's about something bigger: treating your software as a lab. 🧪

You experiment. You measure. You develop based on data, not opinions, not feelings, not what some senior dev once said was "the right way." Just facts and evidence.

The book explains precisely how to design software that is simple to understand, easy to change, and ready to extend. And it does it without any dogma.

What I find most exciting is how well this fits the AI-agentic world we are stepping into. 🤖 When AI agents help write, review, and refactor code, the structure of your software matters even more. Clean, well-designed code is not just a human need anymore. It's what makes human-AI collaboration actually work.

No demagogy. No "trust me bro." Just solid engineering thinking.

If you build software, I highly recommend it. 🚀

#SoftwareEngineering #DaveFarley #ModernSoftwareEngineering #AIEngineering
Most devs optimize the code. Almost no one audits the pipeline until it's 3am and prod is down.