As AI agents become core to our development workflows, a new governance problem is emerging: fragmented, unversioned prompts scattered across IDEs and local environments — what I'd call Instruction Drift.

The new gh skill command in GitHub CLI is a meaningful step toward solving this. It treats Agent Skills as first-class citizens in the software delivery lifecycle:

- Centralized Discovery & Management — Standardize agent capabilities across the engineering org, installed via a single CLI command from any GitHub repository.
- Supply Chain Integrity for AI — Skills are pinned using git tree SHAs and immutable releases, ensuring the skill an agent uses today is byte-for-byte identical to what it uses tomorrow. No silent updates, no non-deterministic failures.
- Open Interoperability — Built on the open agentskills.io spec, skills work across GitHub Copilot, Claude Code, Cursor, Codex, Gemini CLI, and more. No vendor lock-in.

One important caveat worth noting: skills are not verified by GitHub — always inspect before installing (gh skill preview). This is exactly the kind of governance control your platform teams should be building policy around.

Currently in public preview, but the architecture signals where this is heading: from experimental AI scripts to auditable, versioned, production-grade agent infrastructure.

Read the full changelog.

#GitHubCopilot #GitHubCLI #AIAgents #AgentSkills #PlatformEngineering #AIGovernance #SupplyChainSecurity #EnterpriseAI #SolutionArchitect #DeveloperTools #DevOps #GenerativeAI #SoftwareEngineering #OpenSource #AI #developers #DeveloperCommunity #GitHub
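The supply-chain claim rests on git's content addressing: an object's ID is a hash of its bytes, so a pinned SHA can only ever resolve to one exact payload. A minimal Python sketch of that idea, using git's blob hash for a single file (gh skill pins tree SHAs over whole skill directories, but the principle is the same):

```python
import hashlib

def git_blob_sha(content: bytes) -> str:
    """SHA-1 object ID git assigns to a blob: sha1(b"blob <len>\\0" + content)."""
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

def verify_pin(content: bytes, pinned_sha: str) -> bool:
    # Re-derive the hash and compare to the pin: any byte-level change
    # to the skill contents yields a different object ID.
    return git_blob_sha(content) == pinned_sha

skill = b"Always run the test suite before opening a PR.\n"
pin = git_blob_sha(skill)
```

Because the ID is derived from the content itself, a host that resolves a pinned skill cannot be silently served different bytes: the hash check fails.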
GitHub CLI Introduces Agent Skills Governance
GitHub’s new gh skill command is more important than it looks. Not because developers needed one more CLI command. Because it turns agent behaviour into something you can install, pin, update, and audit like software. That is a bigger shift than it sounds.

A lot of useful know-how in AI coding workflows still lives in prompts, wiki pages, or tribal memory inside one team. Skills package that know-how into portable units with instructions, scripts, and resources that can move across hosts like Copilot, Claude Code, Cursor, Codex, and Gemini CLI.

The part I like most is the boring part: versioning. Git tags. Tree SHAs. Immutable releases. Pinning. If skills shape how an agent works, treating them like unversioned snippets was never going to scale.

I think this creates a new layer in the stack:
• models generate
• tools execute
• skills encode repeatable working methods

That middle layer is where a lot of durable advantage will sit. The teams that get the most from coding agents will not just pick the best model. They will build the best skills library for how they test, review, document, migrate, and run software. 🛠️ That is a much better asset than prompt folklore.

🔗 https://lnkd.in/g3nWMRXk

#AIAgents #DeveloperTools #GitHubCLI #SoftwareEngineering #AIEngineering
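If skills are pinned dependencies, teams will want to track them the way package managers track versions. A hypothetical lockfile sketch — the file name, field names, and truncated SHAs are purely illustrative, not an actual gh skill format:

```yaml
# skills.lock — hypothetical pin file for a team's skills library
skills:
  - name: review-checklist
    source: github.com/acme/agent-skills   # illustrative repo
    ref: v1.4.0                            # immutable release tag
    tree_sha: 8f3c2a1d...                  # byte-for-byte identity of contents
  - name: migration-runbook
    source: github.com/acme/agent-skills
    ref: v0.9.2
    tree_sha: 51be77f0...
```

The design point is the same one package lockfiles settled years ago: a human-readable ref for intent, a content hash for integrity.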
GitHub Agentic Workflows (`gh aw`) lets you define a workflow as a natural language prompt. You can run it on a schedule, or on a new issue, comment, or pull request, and choose to run the prompt with Copilot, Claude, Codex, or Gemini CLI.

Some examples from the Agentic Workflows Workshop of what you can do:
1. Schedule a daily summary of open issues and PRs in this repository, grouped by label.
2. If an issue comment starts with "/hn-sentiment <hackernews URL>", fetch the article's top 50 comments, summarize the sentiment with examples, and comment back on the issue.

I tried it today and found:
- It uses GitHub Actions minutes (2-3 mins for each example).
- It invokes two sessions on the CLI: first to execute your workflow prompt and capture the output, second for threat detection before publishing the output. This costs premium requests or tokens, depending on the API cost model.
- It requires an API key no matter which agent you choose (even on the GitHub free tier).
- You can choose the agent "engine" (the wrapped CLI), but you can't choose the model yet.
- It "compiles" the markdown definition to a complex GitHub Actions workflow (1000 lines).
- It failed to authenticate to the Gemini API (missing creds in the env; I reported the issue).
- It feels a little like it's held together with gaffer tape at the moment.
- However, `gh aw` is powerful and workflows are changing fast (the workshop was already outdated!).

There's a Workshop link in the comments... https://lnkd.in/gqm_6jYS

#github #agentic #workflows
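For flavor, here is a sketch of what a `gh aw` markdown definition looks like in spirit: frontmatter for triggers and engine, prose for the actual instruction. The frontmatter keys and values here are illustrative guesses, not guaranteed to match the current `gh aw` schema:

```markdown
---
on:
  schedule: daily        # or: issue_comment, pull_request, issues
engine: copilot          # or: claude, codex, gemini
---

# Daily triage summary

Summarize all open issues and pull requests in this repository,
grouped by label. Post the summary as a comment on the tracking issue.
```

This markdown is what gets "compiled" into the long GitHub Actions workflow mentioned above; the prose body becomes the agent's prompt.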
GitHub Copilot Controversy Highlights Challenges in AI-Assisted Development

The recent controversy surrounding GitHub Copilot and AI-generated pull request messages has sparked discussions about transparency and developer trust. As AI tools become more integrated into software development, maintaining clarity, accountability, and ethical use is becoming increasingly important. This case reflects the evolving dynamics between automation and human oversight in coding environments.

🔗 Read more: https://lnkd.in/gCkBBEPP

#GitHub #Copilot #ArtificialIntelligence #SoftwareEngineering #DeveloperTools #TechIndustry #Innovation #TechGenyz
Most people building AI agent systems reach for a new database or a custom orchestration layer. I used GitHub.

Because GitHub is already a software development state machine. Issues = requirements. Branches = isolated work. PRs = review. Actions = automation. Every step of a real dev process already has a native GitHub primitive. No new infrastructure needed.

Two more decisions that shaped everything:

**Agent behaviour in markdown, not code.** Each agent has a `.md` role file anyone can read and edit. Adding a new agent is: write a file, create a subclass. Minutes, not days.

**One token, no new accounts.** The token already in every Actions workflow is all the system needs. Works locally. Works in CI.

These three choices — GitHub as backbone, markdown as behaviour, one token — turned out to be the most important architecture decisions I made.

Post 1 of my series on building ai-dev-team 👇
https://lnkd.in/eNgdVqFi
Code: https://lnkd.in/eWNqAFpB

#AI #SoftwareArchitecture #MultiAgentSystems #OpenSource #GitHubCopilot
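The "behaviour in markdown, one subclass per agent" pattern can be sketched in a few lines of Python. The names below (`Agent`, `role_file`, the reviewer role text) are my own illustration, not the ai-dev-team codebase:

```python
from pathlib import Path
import tempfile

# A role file is plain markdown anyone can read and edit.
role_dir = Path(tempfile.mkdtemp())
(role_dir / "reviewer.md").write_text(
    "You are a strict code reviewer. Flag missing tests."
)

class Agent:
    """Base agent: behaviour lives in a markdown role file, not code."""
    role_file: str  # each subclass names its own role file

    def __init__(self, roles: Path):
        self.system_prompt = (roles / self.role_file).read_text()

    def handle(self, task: str) -> str:
        # A real implementation would send system_prompt + task to an LLM;
        # here we only show how the role file and subclass compose.
        return f"[{self.role_file}] {task}"

class Reviewer(Agent):
    # Adding a new agent = write a .md file, create a tiny subclass.
    role_file = "reviewer.md"

reviewer = Reviewer(role_dir)
```

The code path stays identical for every agent; only the markdown changes, which is what makes "minutes, not days" plausible.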
Developers: you may want to check your GitHub settings before April 24.

GitHub is updating its policy so interactions with personal repositories may be used for AI model training. If you’re using personal repos and don’t want that data included, you’ll need to opt out manually. Copilot Business and Enterprise users are not affected.

Official announcement: https://lnkd.in/eMXCDsuF
To opt out: Profile → Settings → Copilot → Features → Privacy

Are you opting out, or are you fine with your repos being used for training? On one hand, they are public; any unscrupulous actor could already be using them. And if you're already using GitHub Coding Agent, it may improve your experience. An option to differentiate training on public vs private repos might make the decision easier.

The blog announcement linked above includes this statement under what will not be used for training, irrespective of your choice:

*Content from your issues, discussions, or private repositories at rest. We use the phrase “at rest” deliberately because Copilot does process code from private repositories when you are actively using Copilot. This interaction data is required to run the service and could be used for model training unless you opt out.*

Interaction data is defined as "specifically inputs, outputs, code snippets, and associated context." That sounds like it includes commits.

#github #ai #developer #dataprivacy #softwaredevelopment
10 Best Practices for GitHub Copilot's /fleet Command

/fleet lets you run multiple AI agents in parallel. One prompt. Multiple sub-agents. All working simultaneously. Here's how to use it like a pro:

1. Write artifact-specific prompts. "Create docs/auth.md, docs/api.md, docs/deploy.md" — not "improve docs." Each sub-agent needs a clear deliverable.
2. Declare dependencies explicitly. If index.md depends on other files, say so. The orchestrator respects the order.
3. Pick tasks that are truly independent. Separate files, separate modules, batch refactoring. Parallel = no shared state.
4. Avoid monolithic prompts. Vague prompts force sequential execution. Specificity unlocks parallelism.
5. Use custom agents for specialized work. Point test generation to @test-writer, docs to @docs-agent. Right tool for each sub-task.
6. Review merge conflicts carefully. More parallel changes = more potential collisions. Extra oversight is non-negotiable.
7. Use --no-ask-user for CI/CD. Running /fleet in pipelines? Add the flag. No human in the loop = no blocking prompts.
8. Monitor with Agent HQ. GitHub's Mission Control lets you view, compare, and triage all agent sessions from one dashboard.
9. Set up copilot-instructions.md. Global coding conventions + repo-specific context = consistent output across all parallel agents.
10. Track usage and costs. More agents = more credits burned. Set budgets per project. Parallel power isn't free.

/fleet turns your weekend maintenance backlog into a 20-minute task. The developers who learn to orchestrate agents will outpace those who code alone.

#GitHubCopilot #FleetCommand #CopilotCLI #AgenticAI #MultiAgent #DeveloperProductivity
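Point 9 is the cheapest win. A minimal sketch of a repo-level `.github/copilot-instructions.md` — the specific conventions listed are placeholders to replace with your own:

```markdown
# .github/copilot-instructions.md

- Use TypeScript strict mode; never use `any`.
- Every new module needs unit tests under `tests/`.
- Follow the existing layout: one feature per directory.
- Never hardcode secrets; read configuration from environment variables.
```

Because every parallel sub-agent reads the same instructions file, this is what keeps N simultaneous agents producing stylistically consistent output.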
GitHub Copilot is no longer just a chatbot: it's a teammate with its own computer.

1. It has its own "brain" (the runner). Most users think Copilot just predicts text. In fact, when an Agent handles a complex task, GitHub spins up a hidden Linux container. The Agent uses this "sandbox" to run your code, execute tests, and fix bugs in real time before you ever see the final result. It's not guessing; it's experimenting.

2. Context beyond the code (MCP). Copilot Agents are breaking out of the IDE. Using the Model Context Protocol (MCP), they can now "see" outside your code. They can read your Jira tickets, search Slack conversations for requirements, and check Sentry logs to find why a production build failed. It's the first time an AI tool has "peripheral vision" across your whole company.

3. The hidden AGENTS.md. Pro developers are moving past simple prompts and using AGENTS.md files. By placing this in your repo, you provide a "manual" for the AI. It forces the Agent to follow your specific architectural rules, like "never use this library" or "always wrap API calls in this specific handler", making the AI act like a senior dev who already knows your codebase.

4. Automatic self-correction. Before an Agent submits a pull request, it runs a self-security scan. If it accidentally writes a security vulnerability (like an SQL injection), it detects it using built-in CodeQL tools, wipes the code, and rewrites it. It essentially peer-reviews itself so you don't have to.

5. Model-agnostic logic. The Agent isn't "stuck" with one AI. It uses dynamic routing: a fast model for simple edits, but if you ask for a complex refactor, it automatically switches to a high-reasoning model (like Claude 3.5 or Gemini) to handle the heavy lifting. It chooses the best "brain" for the job.

#GitHub #AIAgents #SoftwareEngineering #FutureOfWork #GitHubCopilot
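To make point 3 concrete, here is a hypothetical AGENTS.md fragment. The specific rules and file paths are invented examples of the kind of constraints you might encode:

```markdown
# AGENTS.md

## Architecture rules
- Never add a direct dependency on `moment`; use `date-fns` instead.
- Wrap all external API calls in the shared client in `src/lib/apiClient.ts`.

## Workflow
- Run the full test suite before proposing any change.
- Keep pull requests under 400 changed lines.
```

The file works because it is read as context on every agent run, so the rules apply even when the human prompt never mentions them.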
Big news for AI-assisted development 👀

Two major coding agents just landed inside GitHub Copilot. Claude by Anthropic and OpenAI Codex are now available directly within GitHub Copilot for Business and Pro customers. Enterprise and Pro+ had early access, and now this is rolling out more broadly.

Here's what matters. You can run Claude, Codex, and Copilot:
- On github.com
- In GitHub Mobile
- Inside VS Code

Same workflows. Shared history. Shared context. No context switching. And no extra subscriptions: it's included in your existing Copilot plan. During public preview, each coding agent session consumes one premium request.

One platform. Multiple agents. 🧠 All agents run on a unified platform inside GitHub with:
- Repository code and history access
- Issues and pull requests context
- Copilot Memory
- Repository instructions and policies
- Enterprise governance via the Agent Control Plane (now GA)

This is important. We are moving from “AI features” to an agent layer embedded directly into the SDLC, governed and observable at enterprise scale.

What you can actually do:
- Start sessions on web or mobile
- Assign agents to issues and PRs
- Mention @copilot, @claude, or @codex in PR comments
- Let agents open draft PRs
- Compare approaches across agents

For me, the bigger shift is this: we're no longer debating which model is better in isolation. We're orchestrating multiple agents inside one governed developer platform. That changes how teams experiment, compare, and standardize.

Have you started running side-by-side agent comparisons in real repos yet? 🤔

#GitHubCopilot #AINativeDevelopment #AgenticAI #msftadvocate
I stopped caring about GitHub stars. Here's why you should too.

GitHub stars used to mean something. A stamp of quality. Engineers vouching for good code. Not anymore. The AI wave broke that signal. Stars now measure virality, not usability.

Here's what I found after testing 15+ viral AI repos over the last few weeks:

1. Most viral repos are barely functional. Clone. Install. Run. Crash. Half of them break on step one. The README looks polished. The code doesn't.
2. Hardcoded configs everywhere. API keys in source. Paths assuming Mac. Model names baked into strings. Zero thought given to anyone actually running this in production.
3. AI-slop results. The demo looks incredible. The actual output on your data? Borderline useless. Cherry-picked examples in the README. Real-world performance nowhere close.
4. One developer, no maintenance. Check the commit history. Big burst of 50 commits in week one. Then silence. Issues pile up. PRs rot. The author moved on to the next viral idea.
5. Stars are mass-produced now. Tweet goes viral. HN front page. 10K stars in 48 hours. 90% of those people never cloned the repo. They starred and scrolled.
6. README-driven development. Beautiful docs. Architecture diagrams. Fancy badges. Open the source? 200 lines of spaghetti wrapping an API call.
7. The real signal is in the Issues tab. Stars tell you nothing. Issues tell you everything. Are bugs getting fixed? Are maintainers responding? Is anyone actually using this in prod?

What I do instead: I stopped sorting by stars. Now I look at three things. Commit frequency in the last 90 days. Ratio of closed issues to open ones. Whether anyone in the Issues tab is running it in production, not just saying "cool project!"
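The three checks at the end are easy to automate over the GitHub REST API. A sketch that scores repo health from maintenance signals instead of stars; the weights and thresholds are my own illustrative choices, not a standard:

```python
from dataclasses import dataclass

@dataclass
class RepoMetrics:
    commits_last_90d: int
    open_issues: int
    closed_issues: int
    stars: int  # collected, but deliberately ignored by the score

def health_score(m: RepoMetrics) -> float:
    """0..1 score from maintenance signals; thresholds are illustrative."""
    activity = min(m.commits_last_90d / 30, 1.0)        # ~weekly commits caps out
    total = m.open_issues + m.closed_issues
    triage = m.closed_issues / total if total else 0.5  # no issues -> neutral
    return round(0.5 * activity + 0.5 * triage, 2)

viral = RepoMetrics(commits_last_90d=2, open_issues=180, closed_issues=20, stars=10_000)
steady = RepoMetrics(commits_last_90d=45, open_issues=30, closed_issues=270, stars=400)
```

A real version would fill `RepoMetrics` from the commits and issues endpoints; the point is simply that star count never enters the score.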
Not all GitHub repos need thousands of lines of code to be valuable.

Stumbled across something on GitHub trending that made me stop and read twice: one repository with over 90k stars, and it's a single markdown file! 🤯

A set of guidelines specifically for Claude Code that helps reduce common LLM mistakes, like ignoring your intent, modifying code it shouldn't touch, or drifting from what you actually asked for. I've been using Claude Code on my personal projects and this is now a permanent part of my workflow. Worth using as a foundation for your own AI development workflow.

https://lnkd.in/dHsnrzNG

#claudecode #AI #webdevelopment #developer
I build AI systems that won’t get you fined | EU AI Act | MLOps & AI Security | CEO @ DeviDevs
Instruction Drift is a great way to frame this. We run into the same problem with ML pipelines: prompts that work in dev silently break in production because someone edited a template without version control. The SHA pinning approach is exactly what regulated industries need. Under the EU AI Act, you need to prove your AI system behaves deterministically across versions. Unversioned prompts make that impossible.