I spent 6 months drowning in AI skill repos, all promising to make AI assistants smarter. The problem? **I had no idea which ones actually worked.**

So I built **Human Skills** — an automated library where every skill is personally tested by a human before it's included.

**Why most skill libraries fail**
They look great on paper, but they drift. Most skill files are written once and never run again. When you invoke them, the AI often hallucinates or fails. My personal library was becoming a mess faster than I could manage it.

**How Human Skills fixes it**
It’s an automated system with three layers:
1. **Upstream Tracking:** Auto-pulls updates from open-source repos daily via simple YAML configs.
2. **Selective Forwarding:** You cherry-pick only the skills you've actually verified. Only "promoted" skills enter your library.
3. **Automated Git Sync:** Once synced, it auto-commits and pushes to your GitHub. Zero manual steps.

**The best part: Hot Reload**
The sync daemon watches its own YAML configs. Add a new repo or change the schedule, and it adapts instantly without a restart (a minimal sketch follows at the end of this post). It stays out of your way and just works.

**Why this matters for Devs**
Personalization only works if you trust your toolkit. With Human Skills, I have:
- A single source of truth for verified skills.
- An automated pipeline keeping them fresh.
- Portable setups via `{REPO_ROOT}` placeholders — clone and go.

**The standard: If it hasn't been tested by a human, it doesn't belong here.**
I’m keeping the library tight — only real-world verified skills. You can fork it, point it at your repos, and build your own trusted toolkit in minutes.

**GitHub:** https://lnkd.in/gqcjuDQz

Dealing with AI tool overload? Let's compare notes in the comments! 👇

#ArtificialIntelligence #OpenSource #AITools #Automation #SoftwareDevelopment #DeveloperProductivity #Python #GitAutomation #AIAssistants #BuildInPublic
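To make the hot-reload layer concrete, here is a minimal sketch in Python, assuming a PyYAML config polled by file mtime. The file name `sources.yaml` and its keys are hypothetical examples, not Human Skills' actual schema:

```python
# Minimal hot-reload sketch: poll the sync config's mtime and reload it when
# it changes, so new repos/schedules apply without restarting the daemon.
import time
from pathlib import Path

import yaml  # PyYAML: pip install pyyaml

CONFIG = Path("sources.yaml")  # hypothetical file name

def load_config() -> dict:
    # Hypothetical schema, e.g. {"repos": [...], "schedule": "daily"}
    return yaml.safe_load(CONFIG.read_text()) or {}

def sync(config: dict) -> None:
    # Placeholder for the real pull -> promote -> commit pipeline
    print(f"syncing {len(config.get('repos', []))} upstream repos")

def main() -> None:
    last_mtime = 0.0
    config: dict = {}
    while True:
        mtime = CONFIG.stat().st_mtime
        if mtime != last_mtime:  # config was edited: hot-reload it
            last_mtime = mtime
            config = load_config()
            print("config reloaded, no restart needed")
        sync(config)
        time.sleep(60)  # stand-in for the real schedule

if __name__ == "__main__":
    main()
```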
More Relevant Posts
Fast-moving AI workflows are notoriously difficult to tame, especially when it comes to deployment. Most solutions promise scalability but deliver resource-intensive overhead, making it hard to balance speed and reliability. That's where mattpocock/skills comes in — a collection of agent skills that extend capabilities across planning, development, and tooling.

This project is more than a set of tools; it's a practical answer to the complexity of LLM and agent workflows. By providing a directory of skills that help developers think through problems before writing code, mattpocock/skills addresses a critical pain point in the AI development process. What sets it apart is its focus on making agent behavior more reliable, not just more powerful. It does this through a range of skills, including:
- to-prd — Turn the current conversation context into a PRD and submit it as a GitHub issue. No interview — it just synthesizes what you've already discussed.
- to-issues — Break any plan, spec, or PRD into independently grabbable GitHub issues using vertical slices.
- grill-me — Get relentlessly interviewed about a plan or design until every branch of the decision tree is resolved.
- design-an-interface — Generate multiple radically different interface designs for a module using parallel sub-agents.

Built with Shell, mattpocock/skills is gaining traction fast: roughly 857 new stars in the current trending window and a #2 trending spot — the kind of star momentum that usually indicates genuine developer word of mouth. Recent commits also make it feel active instead of abandoned. The traction makes sense: a repo climbing that fast is usually solving a problem people can feel immediately.

Repo: https://lnkd.in/gH4Zzms2

#GitHub #OpenSource #GitHubTrending #LinkedInForDevelopers #Shell #Skills
Running multiple AI agents on the same repo without them stepping on each other took me longer to figure out than I'd like to admit. The answer was already built into Git.

Git worktrees let you have multiple branches of the same repository checked out simultaneously in separate directories — no stashing, no cloning, no context-switching chaos.

Some of my favorite use cases:
→ Review a branch locally while keeping your WIP untouched in a separate folder
→ Run separate directories for features, bug fixes, or experiments in parallel
→ Have multiple AI agents working on the same repo without overlapping (my personal favorite)

Here's a quick cheat sheet to get started 👇

Tomorrow I'll show you my favorite way to use it — the one that changed how I work with agentic tools entirely.

#AgenticCoding #Git #AI #Dev #Bash
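The core commands are standard `git worktree` usage (the paths and branch names below are just examples):

```bash
# Check out an existing branch into a sibling directory
git worktree add ../review-branch some-branch

# Create a new branch and a directory for it in one step
git worktree add -b feature-x ../feature-x

# See every worktree attached to this repository
git worktree list

# Remove a worktree when you're done with it
git worktree remove ../feature-x

# Clean up stale worktree metadata (e.g. after deleting a folder manually)
git worktree prune
```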
Unit testing is one of those things the entire industry agrees on in principle but struggles with in practice. Not because teams don't care. The challenge is that writing tests lives outside the natural flow of writing code — it happens in a separate file — and when timelines get tight, that step slips.

So I asked a simple question: what if the test file just showed up the moment you saved your code? No separate step. No context switch. Just there.

That's the idea behind SilentSpec. It's a VS Code extension that generates unit tests when you save a TypeScript file. It reads your exports, function signatures, types, and edge cases, then creates a matching test file in the background.

It connects directly to your AI provider — GitHub Models (default), OpenAI, Anthropic, or Ollama for fully local generation — and auto-detects Jest or Vitest from your project so the output runs without modification.

v1.0 is live. Free and open source.

What's worked for your team to keep test coverage consistent as a project grows?

#vscode #typescript #testing #softwareengineering #developers #opensource #buildinpublic #AI
Title: Stop Reading Docs. Start Using the “Lazy Teardown” Method. 🛠️🚀

Most people learn new tech by following a tutorial step-by-step. It’s slow, it’s passive, and it’s boring. 🥱 I’ve switched to a high-speed alternative: The Lazy Teardown. 📉

Instead of starting with "Hello World," I force an AI agent (like Amazon Q) to build the entire "Real World" first. I learn by deconstructing a finished product rather than building from zero. 🏗️

The 4-Step Workflow: 📂
1️⃣ The Void: Open a completely empty folder in your IDE. No boilerplate, no templates. 🧊
2️⃣ The "Big Bang" Prompt: Don’t ask for a snippet. Ask for the system. "Initialize a full BDD prototype using Behave and Python for a simple banking app." 🤖 (A sketch of the kind of scaffold this produces follows below.)
3️⃣ The Live Show: Sit back and watch the AI architect the project—folders, feature files, and logic—in seconds. It’s like watching a masterclass in real time. 📺
4️⃣ Learning via Failure: Run the code. Watch it crash. Watch the AI scramble to fix it. 🔧

Why this is a "Smart" way to be "Lazy": 💡
They say smart people learn from others' mistakes. In this workflow, the AI is your fall guy. 🧪 When the code hits an error, you aren’t just reading a generic solution on Stack Overflow. You’re watching a live troubleshooting session tailored specifically to your project. You see the "Why" behind every architectural choice because you saw what happened when it was missing.

The Result? I just mastered BDD/Behave with Python 🐍 in a single afternoon. No books, no courses. Just a series of prompts, errors, and teardowns. 🔄

If you have an AI agent in your IDE, stop using it as a calculator. Start using it as a construction crew. Build it, break it, and learn from the rubble. 🏗️💥

#AI #SoftwareEngineering #CareerHacks #Python #BDD #VibeCoding #GrowthMindset #ReverseEngineering
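For context, here is a minimal sketch of the kind of Behave scaffold that prompt produces, assuming Behave's standard project layout. The file names and the banking behavior are hypothetical examples, not Amazon Q's actual output:

```python
# features/steps/account_steps.py — a minimal Behave step file, the kind the
# "Big Bang" prompt scaffolds. The matching features/account.feature would read:
#
#   Feature: Bank account
#     Scenario: Deposit increases balance
#       Given an account with balance 100
#       When I deposit 50
#       Then the balance is 150
#
from behave import given, when, then

@given("an account with balance {amount:d}")
def step_account(context, amount):
    context.balance = amount

@when("I deposit {amount:d}")
def step_deposit(context, amount):
    context.balance += amount

@then("the balance is {expected:d}")
def step_check(context, expected):
    assert context.balance == expected, f"got {context.balance}"
```

Run `behave` in the project root and watch it pass, then break a step on purpose and watch the agent repair it — that's the teardown loop.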
Most AI coding problems are actually issue-writing problems. That’s why I’m excited that our project now has two agents:

Issue Hemingway writes. Kernel Thompson codes.

Hemingway reads rough requests, asks the missing questions, and turns fuzzy ideas into implementation-ready issues. Thompson can then do what coding agents should do: build — instead of guess.

We’re already eating our own dog food:
- #72 shows the writer agent asking follow-up questions
- #70 shows the refined issue that came out of it

And this is not just for GitHub — it also works with self-hosted Gitea and GitLab instances. Sorry, Bitbucket. You walked away from the issue-tracker character arc a little early. 🙂

Project: https://lnkd.in/dnzWSxrc

I’m more and more convinced: the future is not just AI that writes code — it’s AI that helps define the work before the code gets written.

#AI #OpenSource #DeveloperTools #GitHub #GitLab #Gitea #Automation #SoftwareEngineering
👉 Most people keep learning… but never actually build anything.

🚀 Build Your First AI Agent in 30 Days (2026 Roadmap)
Let’s fix that 👇

🧠 The Goal
👉 In the next 30 days: build and deploy 1 real AI agent. Not theory. Not tutorials. A working system.

🗓️ WEEK 1 — Foundation (Think Like an Agent)
👉 Learn how AI agents actually work:
- LLM basics (GPT, Claude, Gemini)
- Prompting
- Tool calling
- API basics
🎯 Outcome: 👉 you understand Agent = Think → Act → Observe

⚙️ WEEK 2 — Build Your First Agent (No-Code)
👉 Tools: n8n / Dify
- Create workflows
- Connect AI APIs
- Add Google Sheets / simple tools
🎯 Build: 👉 AI Lead Generator Agent (User → AI → Save data)

🧑‍💻 WEEK 3 — Agentic Coding (Real Builder Mode)
👉 Tools: Claude Code / Codex
- Python basics (only what’s needed)
- API calls + JSON
- Build a simple AI agent in code (see the sketch after this post)
🎯 Build: 👉 Python AI Agent (with tool calling)

⚡ WEEK 4 — Make It REAL (Deploy + Improve)
- FastAPI (create an API)
- Add memory
- Test + improve responses
- Deploy (Render / Railway / Vercel)
🎯 Final Output: 👉 Live AI Agent (usable by real users) 🚀

🔥 What You’ll Achieve in 30 Days
✅ 1 deployed AI agent
✅ Real understanding (not theory)
✅ Python + API basics
✅ Confidence to build more

⚠️ Most People Fail Because:
- They learn tools, not systems
- They don’t build projects
- They overthink instead of executing

💡 The New Rule (2026)
👉 Don’t chase tools
👉 Build systems

🎯 If You’re Starting Today, follow this:
1. Learn basics (Week 1)
2. Build a simple agent (Week 2)
3. Move to coding (Week 3)
4. Deploy (Week 4)
👉 Repeat → Scale → Monetize

🔥 Final Line: You don’t need 10 courses. You need 1 working AI agent.

#AIAgents #AgenticAI #GenerativeAI #ArtificialIntelligence #AIArchitecture #LangGraph #MCP #RAG #Python #FastAPI #SaaS #BuildInPublic #TechStartups #Innovation
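A minimal sketch of the Week 3 idea, the Think → Act → Observe loop, in plain Python. Here `llm` is a stub and `get_weather` a made-up tool; swap in a real model API when you build yours:

```python
# Week 3 in miniature: the Think → Act → Observe loop with one tool.
import json

def get_weather(city: str) -> str:
    """A toy tool the agent can call."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def llm(prompt: str) -> str:
    """Stubbed model: always decides to call the weather tool.
    In a real agent this is an API call that returns a tool choice."""
    return json.dumps({"tool": "get_weather", "args": {"city": "Paris"}})

def run_agent(user_message: str) -> str:
    decision = json.loads(llm(user_message))               # Think
    result = TOOLS[decision["tool"]](**decision["args"])   # Act
    return f"Observation: {result}"                        # Observe

print(run_agent("What's the weather in Paris?"))
```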
If you use AI coding assistants like GitHub Copilot, Cursor, or Claude Code, you’ve likely hit the "𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗪𝗮𝗹𝗹." The AI tries to help, but it often lacks a deep understanding of how a change in one file ripples through the rest of your system. It either reads too much (wasting tokens and money) or reads too little (missing critical dependencies).

This week for Finding AI Useful, I’ve been looking at code-review-graph, a tool that changes the way LLMs "see" your code.

𝗧𝗵𝗲 𝗣𝗿𝗼𝗯𝗹𝗲𝗺: Standard AI tools use basic search to find relevant snippets. But software isn't just text; it’s a web of connections. If you change a data schema in your backend, the AI needs to know exactly which frontend components and API routes are impacted.

𝗧𝗵𝗲 𝗦𝗼𝗹𝘂𝘁𝗶𝗼𝗻: code-review-graph builds a local knowledge graph using Tree-sitter. It maps out functions, classes, and calls to create a "Structural Map" of your codebase.

𝗪𝗵𝘆 𝘁𝗵𝗶𝘀 𝗶𝘀 𝗮 𝗴𝗮𝗺𝗲-𝗰𝗵𝗮𝗻𝗴𝗲𝗿 𝗳𝗼𝗿 𝘆𝗼𝘂𝗿 𝘄𝗼𝗿𝗸𝗳𝗹𝗼𝘄:
🔹 𝗣𝗿𝗲𝗰𝗶𝘀𝗲 𝗖𝗼𝗻𝘁𝗲𝘅𝘁: It identifies the "blast radius" of any change (toy example below). The AI only reads the files that are actually affected, leading to an 8x+ reduction in token usage.
🔹 𝗟𝗼𝗰𝗮𝗹 & 𝗣𝗿𝗶𝘃𝗮𝘁𝗲: Everything runs on your machine via SQLite. No code ever leaves your environment to build the index.
🔹 𝗠𝗼𝗻𝗼𝗿𝗲𝗽𝗼 𝗥𝗲𝗮𝗱𝘆: It’s built to handle thousands of files, filtering out the noise and focusing only on the logic that matters.
🔹 𝗠𝗖𝗣 𝗜𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗼𝗻: It uses the Model Context Protocol, meaning it can plug into various AI editors to provide "graph-aware" suggestions.

Check it out here: 👉 https://github.com/tirth8205/code-review-graph

#FindingAIUseful #SoftwareDevelopment #GitHubCopilot #AI #Productivity #Coding #OpenSource
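To picture the blast-radius idea, here is a toy walk over a call graph stored in SQLite. This is an illustrative sketch only, not code-review-graph's actual schema or API:

```python
# Toy "blast radius": store caller→callee edges in SQLite, then walk them in
# reverse to find everything transitively affected by a change.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE calls (caller TEXT, callee TEXT)")
db.executemany(
    "INSERT INTO calls VALUES (?, ?)",
    [
        ("api.get_user", "db.load_user"),
        ("ui.profile_page", "api.get_user"),
        ("db.load_user", "db.schema.user"),
    ],
)

def blast_radius(changed: str) -> set[str]:
    """Everything that (transitively) calls the changed symbol."""
    affected: set[str] = set()
    frontier = {changed}
    while frontier:
        placeholders = ",".join("?" * len(frontier))
        rows = db.execute(
            f"SELECT caller FROM calls WHERE callee IN ({placeholders})",
            list(frontier),
        ).fetchall()
        frontier = {r[0] for r in rows} - affected
        affected |= frontier
    return affected

# Change the user schema → the AI only needs these three files, not the repo:
print(blast_radius("db.schema.user"))
# {'db.load_user', 'api.get_user', 'ui.profile_page'}
```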
You probably think GitHub Copilot is just fancy autocomplete... But here's what most people miss: AI Skills aren't simple automation. They're fundamentally different.

While batch files and traditional automation follow rigid, pre-programmed rules, AI Skills analyze your *entire codebase*. They detect custom base classes, identify architectural patterns, understand your minimal APIs, and recognize your unique conventions. Then they trigger intelligent actions based on natural language — not scripts.

The practical implication? You're not just saving keystrokes. You're getting a coding partner that understands *your* code, not generic code. It adapts to your team's patterns, your project's architecture, your specific way of building things.

This changes everything for developers and technical leaders. It's the difference between a tool that helps you write code faster and a tool that actually understands what you're trying to build.

So here's my question: Are you leveraging AI Skills to work *with* your codebase's unique patterns, or are you still treating them like advanced autocomplete?

#AI #GitHub #Development #CodingTools
🚀 Built something I’m genuinely proud of…

Over the past few days, I’ve been working on an AI-powered full-stack project builder — a system that can generate, validate, and debug complete applications automatically. But this isn’t just another “generate code with AI” project 👇

💡 What makes it different:
• 🧠 Adaptive multi-LLM system — dynamically selects the best provider based on success rate + latency
• ⚡ Parallel file generation using ThreadPoolExecutor for faster builds (see the sketch after this post)
• 🧩 Wave-based dependency resolution — ensures correct build order across files
• 🧪 Integrated tester + debugger loop — validates and fixes code automatically
• 🧠 Memory system (short-term + long-term) — learns from past failures and improves future builds
• 🚫 Strict validation rules — prevents common real-world bugs (auth issues, API misuse, bad imports)
• 💾 Smart caching — avoids redundant LLM calls (important for low-resource environments)

🔧 Built with a focus on real-world constraints:
- Works on limited resources (no heavy vector DBs)
- Handles API rate limits intelligently
- Optimized token usage per file type

📊 The goal: move from “AI generates code” ➝ to “AI builds working systems reliably”

This project taught me a lot about:
- System design over prompt engineering
- Reliability > raw intelligence
- How to make AI systems practical, not just impressive

Still improving it — but this version feels like a solid step toward building autonomous dev systems. Would love to hear your thoughts or suggestions 👇

#AI #MachineLearning #FullStack #SystemDesign #Python #Automation #OpenAI #BuildInPublic
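For the curious, a minimal sketch of the parallel-generation piece: one dependency "wave" fanned out over a thread pool. Here `generate_file` is a stub standing in for the real LLM call, and the file names are made up; this is not the project's actual code:

```python
# Sketch: generate every file in one dependency wave concurrently, collecting
# results as workers finish. A wave contains only files whose dependencies
# were built in earlier waves, so order within the wave doesn't matter.
from concurrent.futures import ThreadPoolExecutor, as_completed

def generate_file(path: str) -> tuple[str, str]:
    """Stub: in the real system this is one LLM call per file."""
    return path, f"# generated code for {path}\n"

wave = ["models.py", "schemas.py", "routes.py", "tests/test_routes.py"]

results: dict[str, str] = {}
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(generate_file, p): p for p in wave}
    for future in as_completed(futures):
        path, code = future.result()
        results[path] = code
        print(f"built {path}")
```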
𝗜 𝗿𝗲𝘃𝗶𝗲𝘄𝗲𝗱 𝗲𝘃𝗲𝗿𝘆 𝗽𝗶𝗲𝗰𝗲 𝗼𝗳 𝗔𝗜-𝗴𝗲𝗻𝗲𝗿𝗮𝘁𝗲𝗱 𝗰𝗼𝗱𝗲 𝗜 𝘀𝗵𝗶𝗽𝗽𝗲𝗱 𝗼𝘃𝗲𝗿 𝘁𝗵𝗲 𝗹𝗮𝘀𝘁 𝗳𝗲𝘄 𝗺𝗼𝗻𝘁𝗵𝘀. 𝟲𝟬–𝟳𝟬% needed fixes before it was production-ready.

Not because 𝗔𝗜 is bad. Because I was treating it wrong.

The wake-up moment: 𝗔𝗜 wrote tests. All of them passed. ✅ 𝗖𝗜 was green. Code looked clean. Shipped it. Wrong data silently hit production. The tests weren't testing correctness. They were testing the same wrong assumptions baked into the code itself. 𝗚𝗿𝗲𝗲𝗻 𝗖𝗜. 𝗕𝗿𝗼𝗸𝗲𝗻 𝗹𝗼𝗴𝗶𝗰. 𝗭𝗲𝗿𝗼 𝘄𝗮𝗿𝗻𝗶𝗻𝗴𝘀.

That's when I stopped thinking of AI as a coding tool and started treating it like a very fast junior engineer — one that needs direction, not just prompts. Here's what I've learned since:

𝟭. "𝗜𝘁 𝘄𝗼𝗿𝗸𝘀" ≠ "𝗜𝘁'𝘀 𝗰𝗼𝗿𝗿𝗲𝗰𝘁"
AI-generated SQL can run perfectly and return wrong data. Silent join mistakes. No errors. Just corrupted dashboards.

𝟮. 𝗛𝗮𝗽𝗽𝘆 𝗽𝗮𝘁𝗵 ≠ 𝗣𝗿𝗼𝗱𝘂𝗰𝘁𝗶𝗼𝗻 𝗿𝗲𝗮𝗹𝗶𝘁𝘆
AI optimizes for the demo, not the 3 AM incident. APIs fail. Data is dirty. Retries cascade.

𝟯. 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗱𝗲𝗰𝗮𝘆𝘀 𝗳𝗮𝘀𝘁
Constraints you set 10 prompts ago quietly disappear. If you don't restate them, the system drifts — and you won't notice until it's too late.

𝟰. 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗶𝘀 𝗻𝗲𝘃𝗲𝗿 𝘁𝗵𝗲 𝗱𝗲𝗳𝗮𝘂𝗹𝘁
Input validation, rate limiting, error handling — almost never included unless you explicitly demand it. AI assumes a safe world. Production doesn't.

𝟱. 𝗖𝗹𝗲𝗮𝗻 𝗰𝗼𝗱𝗲 ≠ 𝗠𝗮𝗶𝗻𝘁𝗮𝗶𝗻𝗮𝗯𝗹𝗲 𝘀𝘆𝘀𝘁𝗲𝗺𝘀
Readable functions. Inconsistent design. Painful refactors later.

𝟲. 𝗧𝗲𝘀𝘁𝘀 𝗰𝗮𝗻 𝗹𝗶𝗲
AI writes tests that pass — validating the exact same wrong assumptions as the code. This one will cost you (toy example below).

𝟳. 𝗔𝗰𝗰𝗼𝘂𝗻𝘁𝗮𝗯𝗶𝗹𝗶𝘁𝘆 𝗻𝗲𝘃𝗲𝗿 𝗺𝗼𝘃𝗲𝘀
When prod breaks, nobody asks the model.

━━━━━━━━━━━━━━━━━━
The engineers winning with AI aren't better prompters. They're clearer thinkers.
𝗔𝗜 𝗶𝘀 𝗹𝗲𝘃𝗲𝗿𝗮𝗴𝗲 — 𝗯𝘂𝘁 𝗼𝗻𝗹𝘆 𝗶𝗳 𝘆𝗼𝘂 𝘀𝘁𝗶𝗹𝗹 𝗼𝘄𝗻 𝘁𝗵𝗲 𝗷𝘂𝗱𝗴𝗺𝗲𝗻𝘁.
━━━━━━━━━━━━━━━━━━

Has AI-generated code ever silently broken something in your system? Drop it below — the stories nobody posts are the most useful ones 👇

#SoftwareEngineering #AITools #DataEngineering #BackendDevelopment #LessonsLearned #Python
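A toy illustration of lesson 6 in Python (a hypothetical example, not the code from the incident): the test passes because it encodes the same wrong assumption as the implementation.

```python
# Both the implementation and the test assume every month has exactly 30
# days of entries, so the test can't catch the bug it shares.
def monthly_revenue(daily: list[float]) -> float:
    # Bug: silently drops day 31 (and breaks for 28/29-day months)
    return sum(daily[:30])

def test_monthly_revenue():
    # Same wrong assumption baked into the test fixture
    assert monthly_revenue([1.0] * 31) == 30.0

test_monthly_revenue()
print("green CI, broken logic, zero warnings")
```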