Claude Code: what your repo structure says about your workflow

We talk a lot about the limits of AI agents, but rarely about what we actually give them to work with.

An agent like Claude Code has no memory between sessions. It starts from scratch every time. That's not a flaw — it's its architecture, and it fundamentally changes how we should integrate it into a project. Give it structured context and you get consistent outputs. Give it nothing and it does its best with what it has — and "its best" has real limits.

That's where the .claude/ directory makes all the difference:

→ CLAUDE.md lays the foundation: stack, conventions, architecture. The onboarding document you should have written anyway, for any new collaborator, human or not.
→ rules/ lets you modularize guidelines by domain (style, testing, API design). Easier to maintain, easier to evolve alongside your codebase.
→ skills/ goes one step further: instead of loading all context upfront, skills are auto-triggered based on task context. Only what's needed, when it's needed — keeping the context window lean and the outputs relevant.
→ hooks/ automates checks: linters, tests, guardrails. Not to control the AI, but to secure the workflow, the same way you would with any other tool.
→ agents/ is where it gets interesting: specialized sub-agents — security auditor, PR reviewer, deployment checker — each with its own isolated context.

Two levels of parallelization:
→ Within a single prompt: Claude can spawn and orchestrate multiple sub-agents simultaneously, each handling a different task in the same session.
→ Across Git worktrees: multiple agents running on separate branches at the same time, without stepping on each other.

This changes the economics entirely. It's no longer one AI doing one thing at a time — it's a coordinated, parallel workflow where specialization and isolation are built into the repo structure itself.

The real question isn't "is AI reliable?" It's "have I built the environment where it can be?"
These files, versioned in Git, evolve with your architecture. Neglect them, and they become a liability. Maintain them, and they become leverage.

How are you handling context and parallelization with your agents today?

#ClaudeCode #SoftwareEngineering #DevOps #AI #CleanCode #GitWorktree
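As a reference point, a repo following this structure might look like the sketch below. CLAUDE.md conventionally sits at the repo root; the file names inside the directories are illustrative, not prescribed:

```text
repo/
├── CLAUDE.md                  # stack, conventions, architecture
└── .claude/
    ├── rules/                 # modular guidelines by domain
    │   ├── style.md
    │   ├── testing.md
    │   └── api-design.md
    ├── skills/                # auto-triggered, task-scoped context
    │   └── db-migrations/
    │       └── SKILL.md
    ├── hooks/                 # lint/test guardrails
    └── agents/                # specialized sub-agents, isolated context
        ├── security-auditor.md
        └── pr-reviewer.md
```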
Colin Bouvry’s Post
More Relevant Posts
🚀 What if your dev team never slept?

We just published the AgentFlow Roadmap: a full visual guide to our open-source autonomous AI dev team that takes GitHub issues and turns them into merged PRs without human intervention.

Here's what the team looks like:
🧠 NEXUS — Orchestrator. Discovers issues, assigns work, recovers from crashes, approves dangerous commands.
🔨 FORGE — Builder. Spawns Claude Code, implements the solution in an isolated worktree, opens PRs.
🔍 SENTINEL — Reviewer. Reviews plans, evaluates code segments, enforces test coverage and security.
🚢 VESSEL — DevOps. Polls CI, squash-merges PRs, handles conflict rework directly with FORGE.
📝 LORE — Documenter. Writes ADRs, changelogs, and project documentation.

All built in Rust + Tokio, connected through a shared state store (Redis in prod), and routed by a cyclic flow engine where each agent returns an action and the graph determines what happens next.

The roadmap covers:
→ Foundation layer (PocketFlow Core, multi-provider LLM client, GitHub REST API)
→ All 5 agents with their responsibilities, decision priority, and failure recovery
→ Flow graph + routing table + ticket/worker lifecycles
→ Plugin architecture with 37 skills, 11 commands, and per-agent hooks
→ Per-agent model routing via LiteLLM (Claude for coding, Gemini for review, Groq for DevOps — cost-optimized)
→ What's coming: milestone-aware sprint reviews, AgentFlow Hub marketplace, one-command install

A key design decision I'm proud of: VESSEL routes merge conflicts directly back to the same FORGE worker that created the PR — same worktree, same context, no NEXUS round-trip. Fast recovery.

Check it out → https://lnkd.in/eQnmfjXF

We're building this in the open. Contributions welcome.

#AgentFlow #AutonomousAgents #AI #Rust #OpenSource #DevOps #LLM #SoftwareEngineering #AgenticAI
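AgentFlow itself is Rust + Tokio, but the cyclic routing idea generalizes. Here is a minimal Python sketch: the agent names match the post, while the actions, routing table, and handler logic are my assumptions for illustration, not AgentFlow internals:

```python
# Minimal sketch of a cyclic flow engine: each agent handles work and
# returns an action; a routing table maps (agent, action) -> next agent.
# Agent names are from the post; actions and routes are illustrative.

ROUTES = {
    ("NEXUS", "assign"): "FORGE",        # orchestrator hands work to builder
    ("FORGE", "pr_opened"): "SENTINEL",  # builder's PR goes to review
    ("SENTINEL", "approved"): "VESSEL",  # reviewer approves -> DevOps merges
    ("SENTINEL", "rejected"): "FORGE",   # rework loops back to the builder
    ("VESSEL", "conflict"): "FORGE",     # merge conflict -> same FORGE worker
    ("VESSEL", "merged"): "LORE",        # merged -> documentation
    ("LORE", "documented"): "NEXUS",     # cycle closes at the orchestrator
}

def run_flow(start, handlers, max_steps=20):
    """Walk the graph until an agent returns an action with no route."""
    agent, trace = start, []
    for _ in range(max_steps):
        action = handlers[agent]()       # each agent returns an action
        trace.append((agent, action))
        nxt = ROUTES.get((agent, action))
        if nxt is None:                  # no outgoing edge -> flow terminates
            return trace
        agent = nxt
    return trace

# Toy scripted handlers simulating one happy path through the team.
script = {"NEXUS": iter(["assign", "done"]), "FORGE": iter(["pr_opened"]),
          "SENTINEL": iter(["approved"]), "VESSEL": iter(["merged"]),
          "LORE": iter(["documented"])}
handlers = {name: (lambda it=it: next(it)) for name, it in script.items()}
trace = run_flow("NEXUS", handlers)
print([a for a, _ in trace])
# ['NEXUS', 'FORGE', 'SENTINEL', 'VESSEL', 'LORE', 'NEXUS']
```

The nice property of this shape is that recovery paths (rejected reviews, merge conflicts) are just extra edges in the table rather than special-cased control flow.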
Developers feel 20% more productive with AI-generated code. Data shows they are actually 19% slower.

That 39-point perception gap is one of the most important figures in software development today. By 2026, 51% of all code on GitHub will be AI-assisted. We are releasing features faster, but human review times for pull requests have tripled. SD Times calls this the "2026 Quality Collapse," and I think that fits.

Here's what's happening: AI writes code quickly, but humans take time to review it. Skipping that review only makes sense if teams trust code without fully understanding it. Most teams do, because slowing down would make them look less efficient. So they commit the code. It works in testing and staging, but 60 days later it fails in production, because nobody on the team fully understood the logic behind it.

One developer shared that he had to rewrite 60% of the code produced by an AI agent on a recent project. Not because the code was wrong, but because it passed tests while violating long-term design principles that only showed up under heavy use.

The role of the senior developer has changed: no longer the main author, but a "guardrail manager." Research shows that 48% of AI-generated code contains security vulnerabilities. By 2027, up to 30% of new security problems may come from AI-generated logic that hasn't been thoroughly reviewed.

We promised increased speed, and we delivered. The codebase just wasn't consulted.

Three key questions for CTOs and engineering leads:
1. How much of your current codebase was generated by AI and never reviewed by someone who understood it?
2. Do your developers feel productive, or are they truly productive?
3. When technical debt surfaces, who in your organization will have enough context to fix it?

Link to article in comments. If you want more of this, click follow.

#SoftwareDevelopment #AI #CTO #EngineeringLeadership #CodeQuality #AIinDev
Closing the gap between AI demos and production-quality codebases.

There is a significant gap between building a quick AI demo and maintaining a production-quality codebase. AI-generated code is rarely tested or documented by default. The architecture is often ad hoc rather than intentional. As you add features, the system becomes increasingly difficult to maintain.

We built Codev OS (https://codevos.ai/) to help close that gap. Codev is an operating system for humans and AI agents to build production-quality code together. It's the layer that augments agent harnesses like Claude Code, replacing the "honor system" of prompting with a deterministic state machine that enforces the rigor required to mitigate unintended consequences in complex systems.

The Architecture of Discipline

✅ The Architect-Builder Pattern: You work with an Architect agent to define specifications and implementation plans. Builder agents then execute these plans in isolated git worktrees, ensuring the human remains the director of the system.

✅ Multi-Model Consultation: Every phase is reviewed by three independent models (Claude, Google's Gemini, and OpenAI's Codex). During our 2.0 sprint, no single model caught more than 55% of the bugs; the combined consensus caught security-critical flaws — including an SSRF bypass — before they could ship.

✅ Context as Code: Specs and plans are version-controlled alongside the source code. This hierarchy enables progressive disclosure, meaning a new builder agent understands the architecture and intent before it ever touches a file.

✅ Enforced Protocols: Using the Porch orchestrator, Codev ensures that agents cannot skip the specification, planning, or testing phases. In our head-to-head testing, this methodology produced 2.9x more test lines and a significantly higher "deployment readiness" score.

The role of the software engineer is evolving from hands-on coder to system architect. Codev OS is built to support that shift, providing the framework to manage autonomous agents with the same rigor you'd apply to an elite human engineering team.

Explore the open-source repo and the technical tour: https://lnkd.in/g9aJyJrW

#CodevOS #OpenSource #AgenticAI
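The multi-model consultation step can be thought of as set aggregation over independent reviews. A hedged Python sketch follows, with mocked reviewer output standing in for real LLM calls; the quorum rule and issue labels are my assumptions, not Codev OS internals:

```python
# Sketch of multi-model consultation: each reviewer model returns a set of
# suspected issues; issues flagged by two or more models are treated as
# blocking, the rest as advisory. Reviewer outputs are mocked here.
from collections import Counter

def consolidate(reviews: dict, quorum: int = 2):
    """Split issues into blocking (>= quorum reviewers) and advisory."""
    counts = Counter(issue for findings in reviews.values()
                     for issue in set(findings))
    blocking = {issue for issue, n in counts.items() if n >= quorum}
    advisory = set(counts) - blocking
    return blocking, advisory

# Mocked findings; in practice each entry would come from an LLM review pass.
reviews = {
    "claude": {"ssrf-bypass", "missing-test"},
    "gemini": {"ssrf-bypass", "n+1-query"},
    "codex":  {"missing-test"},
}
blocking, advisory = consolidate(reviews)
print(sorted(blocking))   # issues two or more models agree on
print(sorted(advisory))   # single-model findings, surfaced but non-blocking
```

This mirrors the post's observation: no single reviewer set is complete, but the overlap across independent reviewers is a much stronger signal.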
For all my dev friends, make sure to check out Codev OS, which is doing some super cool stuff, closing the gap between building a quick AI demo and maintaining a production-quality codebase.
Codev OS, the human-AI co-development operating system for creating production quality code now has its own Linkedin page if you'd like to follow along.
Most developers use Claude Code like a smarter autocomplete. That's leaving a lot on the table.

I wrote a deep dive on the skills and subagents that actually make it powerful — things like:
→ schedule: background agents that run even when you're offline
→ Explore: read-only codebase navigation with zero risk
→ Plan: architecture and trade-offs before you write a single line
→ simplify: because AI over-engineers, and this fixes that

The real shift isn't faster code generation. It's moving from reactive debugging to orchestrated, agentic workflows.

#ClaudeCode #DeveloperTools #DevOps #AI #SoftwareEngineering
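For readers new to skills: a custom skill typically lives under .claude/skills/<name>/SKILL.md, with frontmatter that tells Claude when to auto-trigger it. The sketch below is hypothetical, written in the spirit of the simplify skill mentioned above, and is not the author's actual skill file:

```markdown
---
name: simplify
description: Review newly written code and remove over-engineering.
  Use after implementing a feature, before opening a PR.
---

When invoked, look for unnecessary abstractions, dead configuration
flags, and speculative generality. Propose the smallest diff that
preserves behavior and keeps the existing tests passing.
```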
𝐖𝐞 𝐛𝐮𝐢𝐥𝐭 𝐚 𝐧𝐢𝐠𝐡𝐭-𝐬𝐡𝐢𝐟𝐭 𝐀𝐈 𝐞𝐧𝐠𝐢𝐧𝐞𝐞𝐫 𝐭𝐡𝐚𝐭 𝐟𝐢𝐱𝐞𝐬 𝐛𝐮𝐠𝐬 𝐰𝐡𝐢𝐥𝐞 𝐰𝐞 𝐬𝐥𝐞𝐞𝐩

At <VidSeeds.ai>, we stopped waking up to alerts. We developed an internal tool — an Autonomous Error Review & Remediation Engine — a pipeline that finds and fixes production issues every night, without human involvement.

The setup:
Three times per night, the system collects errors from 5 sources — Sentry, Kubernetes logs (prod + staging), GitHub Actions, cluster events, and git history.
Then it launches 4 AI models in parallel — each independently investigates the issues from its own angle. Models don't see each other's findings. No groupthink by design.
A separate session acts as arbiter — it cross-examines all 4 analyses, resolves contradictions, and implements only the fixes where multiple models agree.
After that — unit tests, E2E tests, CI build, staging deployment, production health checks. If anything fails, the system reruns the full "investigate → fix → verify" cycle up to two more times.

In practice:
∙ Morning starts with yesterday's Sentry alerts already resolved
∙ Bugs that used to sit for a day or two get fixed overnight
∙ Developers spend time on features instead of firefighting
∙ A live dashboard shows exactly what was done and why

The honest takeaway: a single model gets things wrong more often than you'd expect. We tried it. Two models just disagree with each other. Four models with an arbiter cross-checking their work — that actually holds up. The error rate dropped enough that we trust the pipeline to commit and deploy without waking anyone up.

We're not automating developers out. We automated the 2 AM pager duty.

Team <VidSeeds.ai>

#AIEngineering #DevOps #MultiModelAI #StartupTools #BuildInPublic
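The "investigate → fix → verify" retry loop described above (one run plus up to two reruns) can be sketched in a few lines of Python. Everything here is illustrative: `verify()` stands in for the real gate (unit tests, E2E tests, CI build, staging deploy, health checks), and the function names are my assumptions:

```python
# Sketch of the investigate -> fix -> verify cycle with bounded retries.
# The callables are injected so the control flow stays visible.

def remediate(issue, investigate, apply_fix, verify, max_cycles=3):
    """Run the full cycle up to max_cycles times (1 run + 2 reruns)."""
    for cycle in range(1, max_cycles + 1):
        diagnosis = investigate(issue)   # e.g. arbiter-approved analysis
        apply_fix(diagnosis)             # commit the agreed fix
        if verify():                     # tests / CI / health checks
            return {"issue": issue, "fixed": True, "cycles": cycle}
    return {"issue": issue, "fixed": False, "cycles": max_cycles}

# Toy run: verification fails once, then passes on the second cycle.
attempts = iter([False, True])
result = remediate(
    "sentry-1234",
    investigate=lambda i: f"root cause of {i}",
    apply_fix=lambda diagnosis: None,
    verify=lambda: next(attempts),
)
print(result)  # {'issue': 'sentry-1234', 'fixed': True, 'cycles': 2}
```

Bounding the cycle count is the important design choice: an unattended 2 AM pipeline needs a hard stop after which the issue is left for a human, not an open-ended loop.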
📅 Day 10 — 30 Days of Agentic AI

AI agents are hitting software development hard. And I don't mean "GitHub Copilot autocomplete" hard. I mean the actual SDLC is being restructured.

Here's what's changing:

𝗣𝗹𝗮𝗻𝗻𝗶𝗻𝗴: An agent sits in your stakeholder meeting, listens, and auto-generates Jira tickets with acceptance criteria. Your PM reviews, not writes.

𝗖𝗼𝗱𝗶𝗻𝗴: Developers describe the problem. The agent analyzes the repo, suggests an architecture, and scaffolds the changes across multiple files.

𝗧𝗲𝘀𝘁𝗶𝗻𝗴: The agent reads your new code, writes unit + integration tests automatically, and flags edge cases you missed.

𝗜𝗻𝗰𝗶𝗱𝗲𝗻𝘁 𝗥𝗲𝘀𝗽𝗼𝗻𝘀𝗲: The agent monitors logs in real time, detects anomalies, traces the root cause, and pings on-call with a summary + proposed fix — before you even get the alert.

None of this is theory. Teams are running this in production right now.

The junior dev who only does boilerplate? That role is gone. The senior engineer who designs systems and reviews AI output? Never been more valuable.

The shift isn't "AI vs. developers." It's "developers who use agents vs. developers who don't."

Which side are you building skills on?

#AgenticAI #SoftwareEngineering #FutureOfWork
Git worktrees are becoming more and more important in my workflow. They help me isolate AI-generated code: each agent gets its own worktree. No merge conflicts, clean integration.

---

#AICodingAssistants #AIEngineering #FutureOfWork #SoftwareEngineering
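A minimal sketch of the per-agent worktree setup, using Python to shell out to git; the `agent/<name>` branch naming and `../wt-<name>` paths are my own convention, not a standard:

```python
# Give each agent an isolated checkout: its own worktree on its own branch,
# so parallel edits never touch the same working directory.
import subprocess

def worktree_cmd(agent: str, base: str = "main") -> list[str]:
    """Build the `git worktree add` invocation for one agent."""
    branch = f"agent/{agent}"        # hypothetical branch naming scheme
    path = f"../wt-{agent}"          # sibling directory per agent
    return ["git", "worktree", "add", "-b", branch, path, base]

def spawn_worktree(agent: str, base: str = "main") -> None:
    """Create the worktree (must be run inside an existing git repo)."""
    subprocess.run(worktree_cmd(agent, base), check=True)

print(worktree_cmd("pr-reviewer"))
# ['git', 'worktree', 'add', '-b', 'agent/pr-reviewer', '../wt-pr-reviewer', 'main']
```

Once an agent's branch is merged, `git worktree remove ../wt-<name>` cleans up its checkout.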
The real bottleneck in AI-first development isn't the code. It's everything between the code and the customer.

We can write features faster than ever. But code review, QA, and deployment were never built for this pace. The infrastructure around shipping was designed for a slower era, and it shows.

It gets harder when your product is open-ended. We build architecture software, for everyone everywhere, taking the complexity out of legacy tools. A floor plan isn't right or wrong the way a login page is. QA is genuinely complex when the output is creative. There's no simple test suite for "does this home make sense."

Every AI-first team is hitting this wall right now. Code generation got 10x faster, but the pipeline stayed the same. There's no unified stack for AI-native shipping yet; we're all cobbling it together in real time.

That tension between how fast you can build and how fast you can ship is the real challenge of this moment.
Do you have any favorite flows?