GitHub Copilot Agents Need Governance

Most organisations are letting GitHub Copilot agents loose with prompts and hoping for the best. That's not an operating model. That's a risk!

To go deeper on the last post, I shared a repo: a practical way to train, govern, and standardise how GitHub Copilot agents behave inside your codebases.
👉 https://lnkd.in/eRQh8ms5

The premise is simple: instead of prompting harder, codify engineering discipline so agents execute consistently and at scale.

🧠 What the Dojo enforces (the "Six Disciplines"):
🥋 Plan before striking → Agents must plan the work before touching code
🧩 Delegate with sub‑agents → Research & parallel analysis without flooding context
🔁 Learn from every fall → Lessons captured after corrections so mistakes don't repeat
✅ Prove the technique → Tests, logs, diffs required before anything is "done"
🎯 Pursue elegant form → Agents challenge hacks and shortcut fixes
🐞 Fix what's broken, solo → Reproduce → diagnose → fix → verify, no hand‑holding

📜 What's actually in the repo:
• skills.md → the core behavioral "kata" agents auto‑discover
• .github/copilot-instructions.md → the house rules and workflow
• lessons.md → self‑improvement notes

This is the same shift we made years ago with:
• CI/CD pipelines
• Cloud landing zones
AI agents are next.

👀 Question for tech leaders: Are you treating AI agents as first‑class team members with standards and training? Or are they still freelancing in your organization's repos? Because unmanaged agents don't just move fast; they move fast in the wrong direction.

#AgenticAI #GitHubCopilot #microsoft #EngineeringLeadership #AIArchitect #DevEx
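As a rough illustration (not the repo's actual contents; file sections and wording here are hypothetical), a house-rules file like .github/copilot-instructions.md might encode the disciplines as explicit workflow rules:

```markdown
<!-- Hypothetical sketch of a .github/copilot-instructions.md; names are illustrative -->
# House Rules for Copilot Agents

## Workflow
1. Post a plan (scope, files touched, risks) before editing any code.
2. Delegate research to sub-agents; summarize findings instead of pasting raw output.
3. A task counts as "done" only with passing tests, logs, and a reviewable diff.

## Self-improvement
- After any human correction, append the lesson to `lessons.md` before closing the task.
```

The point is that the rules live in the repo, versioned and reviewable like any other engineering artifact, rather than in ad-hoc prompts.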


In December 2025, Anthropic made Agent Skills an open standard. Cross-platform. Portable. Vendor-neutral.

Here's what that means for anyone who's already forked the Dojo repo:
- Your governance files already work with Claude Code.
- No changes. No migration. No duplication.
- The same skills.md your Copilot agent reads? Claude Code reads it too.

One behavioral contract. Every AI coding agent that adopts the standard.

Anthropic's framing nails it: "Building a skill for an agent is like putting together an onboarding guide for a new hire."

That's exactly what the Dojo is. Except the new hire is your AI agent. And the onboarding doesn't reset every session.

The governance work your team does today isn't locked to one tool. It travels.

👉 Anthropic's original article: https://claude.com/blog/equipping-agents-for-the-real-world-with-agent-skills
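For context on the format: in Anthropic's Agent Skills standard, a skill is a folder containing a SKILL.md whose YAML frontmatter tells the agent when to load it. A minimal sketch, with skill name and wording invented for illustration:

```markdown
---
name: plan-before-striking
description: Use at the start of any coding task. Requires a written plan before code changes.
---

# Plan before striking

Before modifying code, produce a short plan listing the scope, the files to be
touched, and the verification steps. Only begin editing once the plan is stated.
```

Because the frontmatter and body are plain markdown, the same file can be discovered by any agent that implements the standard.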

Great initiative — codifying agent behavior instead of hoping for the best is exactly the right direction.

We've been exploring a complementary approach with what we call an AI Implementation Charter. Instead of focusing primarily on agent disciplines, we describe a normative hierarchy: principles → architecture contracts → coding standards → task context. The idea is that agents operate within clearly defined constraints. If a task conflicts with a higher-level rule, the agent should stop and escalate rather than improvise.

In that sense, your Dojo helps guide how agents work, while our Charter focuses on what agents are allowed to do, with many of the constraints enforced through CI (ArchUnit, coverage gates, static analysis). Both perspectives address the same challenge from different angles, and they likely reinforce each other. Process discipline for agent workflows combined with computable constraints in the build pipeline seems like a strong foundation.

Interesting to see this space maturing — treating agent governance as seriously as CI/CD pipelines feels like the natural next step.
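A hedged sketch of how such computable constraints might sit in a pipeline (GitHub Actions syntax; the job name, test pattern, and Maven goals are assumptions for illustration, not taken from either repo):

```yaml
# Hypothetical CI gate: agent-authored changes must satisfy the same
# computable constraints as human-authored ones.
name: charter-gates
on: [pull_request]
jobs:
  constraints:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Architecture contracts (ArchUnit rules run as unit tests)
        run: ./mvnw -q test -Dtest='*ArchitectureTest'
      - name: Coverage gate and static analysis
        run: ./mvnw -q verify   # assumes jacoco:check and a linter are bound to verify
```

The design point is that the hierarchy's lower levels become machine-checkable: an agent that violates an architecture contract fails the build, with no human judgment required.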

Software Engineering guardrails and Guardian Agents following the SDLC for consistency. https://github.com/vbomfim/sdlc-guardian-agents

Excellent framework. I have created a pull request to add custom agents. Please review and merge when you have time.

Love Claude Code, nice to see it opening up.
