GitHub shipped `gh skill`, a new CLI command for discovering, installing, and publishing agent skills. Skills are portable bundles of instructions, scripts, and context that teach AI agents how to do specific tasks. The mechanics are simple: `gh skill install` pulls a skill from a GitHub repository into your environment, and `gh skill publish` shares one you've built. There's a discovery layer, so you're not hunting for things by URL. In short, `gh skill` turns agent context into something you can install, version, and share systematically. If you want something closer to a full dependency manifest for agent config, with skills, instructions, MCP servers, and plugins all declared in a lockfile, APM is a comprehensive option worth a look alongside this.
- gh skill: https://lnkd.in/gDkZ9HmV
- microsoft/apm: https://lnkd.in/gsNFs-4P
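A quick sketch of the flow. Only the `install` and `publish` subcommands come from the announcement above; the repository name and argument forms here are assumptions, so check the `gh skill` docs for the exact syntax:

```bash
# Hypothetical repo name; only the install/publish subcommands are from the post above.
gh skill install owner/example-skills   # pull a skill from a GitHub repo into your environment
gh skill publish                        # share a skill you've built
```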
GitHub Ships gh skill CLI for AI Agent Instructions
More Relevant Posts
-
Manage agent skills with GitHub CLI. Agent skills are the new way to give your agents super abilities! GitHub is launching gh skill, a new command in the GitHub CLI that makes it easy to discover, install, manage, and publish agent skills from GitHub repositories. Skills are portable, reusable packages of instructions, scripts, and resources that extend AI agents with specialized capabilities. Unlike generic instructions, skills let an agent become a "specialist" in a specific domain, such as legal workflows, data analysis, or debugging, by bundling complex procedural knowledge into a modular format. https://lnkd.in/g_wvBcrs
-
Over the past few months I've used Cursor, Claude Code, and GitHub Copilot side by side on the same projects. Each is pretty good at different things, but they share one annoying trait: every one of them wants to read your skills, slash commands, and rules from a different folder. After the third time I caught myself copy-pasting a skill between .cursor/, .claude/, and .github/ (and inevitably missing something), I built a small fix to keep the agents in sync. The idea is simple:
- One canonical .agents/ folder at the root of your repo holds every skill, command, and rule.
- A tiny sync script mirrors it into the exact paths each tool expects.
- No symlinks (which break on Windows without Developer Mode and trip Cursor's symlink bug anyway), no runtime dependency, no git hooks.
- Windows PowerShell and macOS/Linux bash are both supported.
The repo ships with a small hello-world demo skill you can use to verify all three agents are picking up the same source, then delete once you're set up. If you're juggling multiple AI coding agents in a single workspace, this might save you an afternoon of frustration. MIT licensed and open to feedback. https://lnkd.in/gizgM_Nd
#AI #DeveloperTools #OpenSource #Cursor #ClaudeCode
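For intuition, here is a minimal sketch of that sync idea, not the published tool itself; the per-tool destination layout and the plain-copy approach are assumptions for illustration:

```bash
#!/usr/bin/env bash
# Mirror one canonical .agents/ folder into the paths each tool reads.
# Plain copies instead of symlinks, matching the no-symlink choice described above.
set -euo pipefail

SRC=".agents"
for DEST in ".cursor" ".claude" ".github"; do
  mkdir -p "$DEST"
  cp -R "$SRC/." "$DEST/"   # overwrite the mirrored copies with the canonical source
done
```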
-
GitHub has introduced a new `gh skill` command in its CLI that makes it much easier to manage AI agent skills. With a simple command, developers can now discover, install, update, and publish skills directly from GitHub repositories, replacing manual setup with a streamlined, package-manager–like experience. On top of that, GitHub adds robustness features such as version pinning, immutable releases, and change detection based on Git metadata, helping ensure consistency, reproducibility, and security when sharing and evolving skills across teams. https://lnkd.in/dsha8y3K
-
I open-sourced aidock: a single script that runs AI coding agents inside a container. Copilot CLI, Claude Code, and Codex all get full filesystem access when you run them locally. That means one bad tool call can touch files it shouldn't, read your SSH keys, or wipe your home directory. Sandboxing is the obvious fix, but wiring up user namespaces, auth forwarding, and toolchains is tedious. aidock is a self-extracting Bash script that builds a Fedora container with 9 language servers, 4 MCP servers, and three AI agents pre-configured. One command to launch. Your project is mounted read-write; everything else stays out of reach. It works with Podman or Docker, seeds editable config on first run, and the entire environment is defined in a single Containerfile you control. github.com/ruifm/aidock
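For a feel of the isolation model, here is a hedged sketch of the general idea, not aidock's actual Containerfile or launch command:

```bash
# Only the current project enters the container, mounted read-write; SSH keys and
# the rest of $HOME never become visible inside it.
# (--userns=keep-id is Podman-specific; drop it if you run this with Docker.)
podman run --rm -it \
  --userns=keep-id \
  -v "$PWD:/workspace:rw" \
  -w /workspace \
  fedora:latest bash
```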
-
Three GitHub repos blew up this week. All three solve problems you probably have right now.
1. microsoft/markitdown: Converts PDFs, Word docs, HTML, and images into clean Markdown. If you're building anything with LLMs and need to feed documents into a pipeline, this replaces your messy parsing scripts. One install and it works (quick example after this post).
2. coleam00/Archon: Defines your AI coding workflow in YAML. Think GitHub Actions, but for coding agents: plan, implement, validate, review, PR. Same steps every time, so no more "I got different results than yesterday." Each run happens in an isolated git worktree, so nothing bleeds across tasks.
3. multica-ai/multica: If you're running multiple Claude Code or Codex sessions and manually switching terminals to track progress, Multica treats them like actual teammates. They claim tasks, report blockers, and share skills across the team. Your code stays local; their servers only coordinate state.
None of these require you to change how you work. They slot into what you're already doing and remove the friction you've been tolerating. All three are open source.
#AIAssistedDevelopment #GenAI #DeveloperTools #OpenSource #GitHubTrending
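A hedged example of the markitdown flow mentioned in item 1; the CLI entry point and the `[all]` extra reflect the repo's README at the time of writing, so verify against the current release:

```bash
# Convert a PDF to clean Markdown for an LLM pipeline (file name is illustrative).
pip install 'markitdown[all]'
markitdown report.pdf > report.md
```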
-
What Week 1 Taught Me
I'm currently doing a 30-day Linux challenge with DEC, and Week 1 has already changed how I think about systems. As a data engineer, I'd worked on remote machines before, but I wanted to move from "comfortable enough" to truly confident. So I committed to 30 days of structured terminal practice on a shared Ubuntu VPS provided by DEC, thanks to Najeeb Sulaiman and Data Engineering Community. No GUI, no sudo access. Just me, a terminal, and a daily learning routine. Here's what Week 1 really taught me:
Day 1: The terminal makes no assumptions. My first mistake? Running sudo apt install tree in Windows PowerShell instead of my Linux terminal. First lesson: Linux does exactly what you type, not what you meant. I also created my first automated log file using command substitution to capture my username, hostname, shell, and timestamp. One Bash command, no manual input (a sketch of that kind of one-liner follows this post). That felt like the data engineering mindset in action.
Day 2: One character can break everything. I ran wc -1 instead of wc -l three times before spotting it. Then I tried WC and got command not found. Lesson: Linux is completely case-sensitive. The bigger takeaway was understanding > vs >>: > overwrites, >> appends safely. Better to learn that now than lose production logs later.
Day 3: Permissions are not optional. On a shared server, file permissions stop being theory. I created a fake credentials file and locked it down with chmod 600 credentials.env, so only I could read or edit it. That made one thing clear: securing sensitive files isn't optional. I also learned that scripts don't run by default. Making my first script executable and seeing "Pipeline complete!" felt like a real milestone.
Day 4: Know your place on the server. No sudo access could have felt restrictive. Instead, it was clarifying. Running id showed me exactly who I was on the system. Every learner's home directory was locked down with 700 permissions. That's not a limitation; that's how production systems work. As a data engineer, understanding your access is part of the job.
Day 5: grep is more than search. I used grep to audit my own project: grep -rc "complete" ~/linux_challenge --include="*.md". Most results were zero, which showed me my documentation was behind my learning. That's a data quality issue, and I found it with a command-line tool.
My favourite pipeline from Week 1: find ~/linux_challenge -name "*.csv" | xargs wc -l. Three simple tools. Clean, efficient, and useful.
Biggest takeaway from Week 1: Linux isn't just about commands. It teaches precision, accountability, and systems thinking. Every error message is feedback. Every permission is intentional. Every log tells the truth.
Full challenge structure and daily logs: https://lnkd.in/ehhuZyNs
#linuxChallenge #DataEngineering #DEC #30DaysofLearning
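An illustrative one-liner in the spirit of the Day 1 log, not the author's exact command; the log path is assumed:

```bash
# Command substitution captures username, hostname, shell, and timestamp;
# >> appends to the log instead of overwriting it.
echo "$(whoami) | $(hostname) | $SHELL | $(date '+%Y-%m-%d %H:%M:%S')" >> ~/linux_challenge/setup.log
```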
-
If you are using GitHub Copilot, you need to be aware that your interactions and private repository code will be used to train their models unless you opt out. This is very subtle, and most users will not even be aware that it may be happening. Time to protect your code and interactions and change the default setting to opt out of this behaviour. https://lnkd.in/gEG9zf9X
-
Ever spent 10 minutes switching an AI provider… just to write one prompt? 😅 Yeah, editing .env, JSON, or TOML files every time you switch tools is not it. I've been exploring different AI coding CLIs lately, and this was easily one of the most frustrating parts. Then I came across CC Switch 👇
🚀 A single desktop app to manage Claude Code, Codex, Gemini CLI, OpenCode, and OpenClaw, all in one place. What makes it actually useful:
⚡ No more config headaches: 50+ provider presets → import & switch in one click
🧠 Everything in sync: MCP servers, prompts, and skills managed from one panel
🔄 Instant switching: change providers right from the system tray (no need to open the app)
🛠️ Built-in reliability layer: proxy + failover + health checks → handles the messy stuff for you
📊 Usage & cost tracking: see tokens, requests, and spend across tools
💻 Works everywhere: macOS, Windows, Linux; local-first with solid data safety
💭 My take: AI tools are becoming part of everyday dev workflows… but managing them still feels like duct-taping configs together. This feels like the kind of tool that should've existed already: simple, practical, and it actually solves a real pain. If you're juggling multiple providers or running local setups (Ollama, etc.), definitely worth checking out 👇 https://lnkd.in/d3uMjfpd
#AI #DeveloperTools #OpenSource #LLM #Productivity #DevTools
-
How do you manage agent skills, commands, and instructions in a monorepo + support multiple coding agent frameworks in parallel (Claude Code, Cursor, Codex)? I built a small open-source tool to help companies manage that! GitHub: https://lnkd.in/dExMDUWN NPM: npm i -g agsync-cli Happy to hear thoughts/requests 🙃
-
🚀 A few weeks ago, I was trying to configure Claude to handle everything in one place. One MCP config file. Database. GitHub. Playwright. REST APIs. Excel. All wired up together, all pointing at a single agent. It felt like the right call: one config, one agent, one place to manage everything. Clean. Simple. Efficient. 🎯
Then the config file started growing. 📈 Then the agent started behaving inconsistently. 🤔 Then I spent more time debugging the setup than actually building. 😩 I was frustrated, confused, and honestly a little overwhelmed. I kept adding more, thinking one more tweak would fix it. It didn't. 💡
That's when it hit me: I wasn't dealing with a configuration problem. I was dealing with a complexity problem. And I was solving it in the wrong direction. Cramming 5 MCPs into one agent config doesn't simplify things. It just hides the mess behind a single file. 🗂️
🔧 What actually made sense (a config sketch follows this post):
→ 🗄️ A dedicated agent with its own MCP config for database work
→ 🐙 A separate agent scoped only to GitHub operations
→ 🎭 Another for browser automation via Playwright
→ 🎯 Each agent doing one job, with only the tools it actually needs
That's when multi-agent architecture stopped being a concept and became a solution. Not because it's trendy, but because separation of concerns is just good engineering, whether you're writing code or configuring agents. ⚙️
✨ What changed after the split:
✅ Each config file was clean, focused, and easy to reason about
✅ Agents ran in parallel: no more sequential bottlenecks ⚡
✅ When something broke, I knew exactly which agent to look at 🔍
✅ Adding a new capability meant a new agent; nothing else touched 🧩
Sometimes the complexity you're fighting isn't in the tools. It's in how you've organized them. 🏗️ Multi-agent isn't about doing more. It's about each part doing less, and doing it well. 🎯
#MultiAgentSystems #AIEngineering #MCP #ClaudeAI #LLM #AgentDesign #BuildingWithAI
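A hedged sketch of what per-agent scoping can look like with MCP configs; the directory layout, file names, and server packages here are assumptions for illustration, not the author's actual setup:

```bash
# Two narrowly-scoped MCP configs instead of one monolithic file: the database
# agent only sees a Postgres server, the GitHub agent only sees GitHub.
mkdir -p agents/db-agent agents/github-agent

cat > agents/db-agent/.mcp.json <<'EOF'
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/analytics"]
    }
  }
}
EOF

cat > agents/github-agent/.mcp.json <<'EOF'
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    }
  }
}
EOF
```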