Most AI coding problems are actually issue-writing problems. That’s why I’m excited that our project now has two agents: Issue Hemingway writes. Kernel Thompson codes.

Hemingway reads rough requests, asks the missing questions, and turns fuzzy ideas into implementation-ready issues. Thompson can then do what coding agents should do: build — instead of guess.

We’re already eating our own dog food:
- #72 shows the writer agent asking follow-up questions
- #70 shows the refined issue that came out of it

And this is not just for GitHub — it also works with self-hosted Gitea and GitLab instances. Sorry, Bitbucket. You walked away from the issue-tracker character arc a little early. 🙂

Project: https://lnkd.in/dnzWSxrc

I’m more and more convinced: the future is not just AI that writes code — it’s AI that helps define the work before the code gets written.

#AI #OpenSource #DeveloperTools #GitHub #GitLab #Gitea #Automation #SoftwareEngineering
Tom Seidel’s Post
More Relevant Posts
Welcome to the new era of coding: “vibe coding.” AI tools are growing fast, and every developer is asking the same question: which one is actually best for me? In this post, we’ll compare the top AI coding tools and help you find the right one for your workflow.

The truth is that there is no single “best” AI coding tool for everyone. After comparing Cursor, GitHub Copilot, Codex, and Claude Code on speed, code quality, context retention, collaboration, and cost, here’s the takeaway:
1. Cursor wins on speed.
2. Claude Code stands out for context retention.
3. GitHub Copilot leads in collaboration and cost.
4. Codex stays strong as a balanced all-rounder option.

So the best tool really depends on your workflow, not the hype. Which AI coding tool are you using right now, and why? Comment below.

#AI #Coding #DeveloperTools #SoftwareDevelopment #Programming #GitHubCopilot #Cursor #Codex #ClaudeCode #ArtificialIntelligence Akhilesh Chaturvedi ABHISHEK KUMAR TRIPATHI Anuj Sharma Abhinandan singh MANEESH BARANWAL
# Day 11 - Claude Code: Your AI Coding Partner

Forget autocomplete. Claude Code is a full AI coding AGENT that lives in your terminal. Here's what makes it different: Claude Code doesn't just suggest code snippets. It reads your entire codebase, plans a strategy, executes changes across multiple files, and verifies the results. Then it loops back if something isn't right.

Key features that make it powerful:
- Read & write files directly in your repo
- Run shell commands, tests, and builds
- Search and navigate large codebases intelligently
- Full git integration - commits, diffs, PRs
- Agentic loop - plans, acts, observes, iterates
- Permission system - you stay in control

The agentic loop is the secret sauce:
1. You describe the task
2. Claude plans the approach and picks tools
3. It executes - editing files, running commands
4. It verifies the output and loops back if needed

What can it actually do?
- Debug complex bugs across multiple files
- Refactor entire codebases safely
- Build new features from scratch
- Generate tests and documentation

This is what "AI-assisted development" actually looks like in 2026. Have you tried Claude Code yet? What was your first experience like? Drop it below!

#ClaudeCode #AI #ArtificialIntelligence #CodingTools #DeveloperTools #AIAgent #Claude #Programming #SoftwareEngineering #AgenticAI #AIDaily #TechCommunity
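The plan → act → verify loop described above can be sketched as a small control loop. This is an illustrative reconstruction, not Claude Code's actual internals; the helper names and the toy "task" are invented for the demo.

```python
# Minimal sketch of an agentic plan -> act -> verify loop.
# Names and the toy task are illustrative, not Claude Code's real implementation.

def run_agent(plan, execute, verify, max_iters=5):
    """Plan next actions, apply them, verify, and retry with feedback until done."""
    state, feedback = {}, None
    for attempt in range(1, max_iters + 1):
        actions = plan(state, feedback)    # decide the next edits/commands
        state = execute(actions, state)    # apply them (file edits, shell, ...)
        ok, feedback = verify(state)       # run tests/checks on the result
        if ok:
            return state, attempt          # verified result + iterations used
    raise RuntimeError("iteration budget exhausted without passing verification")

# Toy demo: "repair" a counter until the check (n >= 3) passes.
plan = lambda state, feedback: ["increment"]
execute = lambda actions, state: {"n": state.get("n", 0) + len(actions)}
verify = lambda state: (state["n"] >= 3, f"n={state['n']}")
result, attempts = run_agent(plan, execute, verify)
print(result, attempts)  # {'n': 3} 3
```

The interesting property is the loop's exit condition: the agent stops on verified success or on an explicit iteration budget, never silently.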
If you use AI coding assistants like GitHub Copilot, Cursor, or Claude Code, you’ve likely hit the "Context Wall." The AI tries to help, but it often lacks a deep understanding of how a change in one file ripples through the rest of your system. It either reads too much (wasting tokens and money) or reads too little (missing critical dependencies).

This week for Finding AI Useful, I’ve been looking at code-review-graph, a tool that changes the way LLMs "see" your code.

**The Problem:** Standard AI tools use basic search to find relevant snippets. But software isn't just text; it’s a web of connections. If you change a data schema in your backend, the AI needs to know exactly which frontend components and API routes are impacted.

**The Solution:** code-review-graph builds a local knowledge graph using Tree-sitter. It maps out functions, classes, and calls to create a "Structural Map" of your codebase.

**Why this is a game-changer for your workflow:**
🔹 **Precise Context:** It identifies the "blast radius" of any change. The AI only reads the files that are actually affected, leading to an 8x+ reduction in token usage.
🔹 **Local & Private:** Everything runs on your machine via SQLite. No code ever leaves your environment to build the index.
🔹 **Monorepo Ready:** It’s built to handle thousands of files, filtering out the noise and focusing only on the logic that matters.
🔹 **MCP Integration:** It uses the Model Context Protocol, meaning it can plug into various AI editors to provide "graph-aware" suggestions.

Check it out here: 👉 https://github.com/tirth8205/code-review-graph

#FindingAIUseful #SoftwareDevelopment #GitHubCopilot #AI #Productivity #Coding #OpenSource
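The "blast radius" idea is straightforward once the call graph lives in SQLite: walk the caller edges backwards from the changed symbol. The schema and symbol names below are my own toy illustration, not code-review-graph's actual layout.

```python
import sqlite3
from collections import deque

# Hypothetical call-graph table; the real code-review-graph schema may differ.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE edges (caller TEXT, callee TEXT)")
db.executemany("INSERT INTO edges VALUES (?, ?)", [
    ("api.get_user",    "db.load_user"),
    ("ui.ProfilePage",  "api.get_user"),
    ("db.load_user",    "db.schema"),
    ("billing.invoice", "db.schema"),
])

def blast_radius(symbol):
    """Everything that transitively calls `symbol` — i.e. what a change may break."""
    seen, queue = set(), deque([symbol])
    while queue:
        current = queue.popleft()
        for (caller,) in db.execute(
                "SELECT caller FROM edges WHERE callee = ?", (current,)):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return seen

# Changing the schema touches the loader, the API, the UI, and billing:
print(sorted(blast_radius("db.schema")))
```

Feeding only this set of files to the LLM, rather than everything a text search turns up, is where the token savings come from.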
Hot take on AI coding tools 👇

After testing multiple tools:
• Codex → best for accurate, structured code & full tasks
• RooCode / Continue → best for command-based workflows & control
• Copilot → great for daily coding… ⚠️ but mixing models sometimes creates messy or inconsistent output

👉 So the real answer is: there is no “best tool” — it depends on how you work.

👨‍💻 If you want:
• precision → Codex
• control → RooCode / Continue
• speed → Copilot

💬 What’s your experience? Which one actually saves you time in real projects?

#AI #Coding #Developers #SoftwareEngineering #GitHub #OpenAI #RooCode
You probably think GitHub Copilot is just fancy autocomplete... But here's what most people miss: AI Skills aren't simple automation. They're fundamentally different.

While batch files and traditional automation follow rigid, pre-programmed rules, AI Skills analyze your *entire codebase*. They detect custom base classes, identify architectural patterns, understand your minimal APIs, and recognize your unique conventions. Then they trigger intelligent actions based on natural language—not scripts.

The practical implication? You're not just saving keystrokes. You're getting a coding partner that understands *your* code, not generic code. It adapts to your team's patterns, your project's architecture, your specific way of building things.

This changes everything for developers and technical leaders. It's the difference between a tool that helps you write code faster and a tool that actually understands what you're trying to build.

So here's my question: are you leveraging AI Skills to work *with* your codebase's unique patterns, or are you still treating them like advanced autocomplete?

#AI #GitHub #Development #CodingTools
One word for your AI coding assistant: “PCR.”

If you use Claude, Cursor, or other agentic tools, you have probably chained the same steps by hand: stage and commit, push, open a pull request, then remember to actually review the PR on GitHub. We packaged that as a single skill: PCR — Push → Create PR → Review. It runs the pipeline in order:

1. Push — pre-flight checks, stage, commit, push (with sensible stops if there is nothing to commit or you are on the wrong branch).
2. Create PR — open a PR into your target branch (for us, develop) and capture the PR number.
3. Review — use GitHub’s review flow to approve or request changes, with clear reporting back (branch, commit, PR link, review outcome).

Why treat this as a skill rather than a one-off prompt? Because skills are reusable contracts: the agent reads the linked sub-skills, follows safety rules (no force push unless asked, no merging from review, no secrets), and you get a consistent end-to-end outcome instead of half-finished pushes or PRs nobody reviewed.

The idea transfers across stacks: Claude Code-style skill folders, Cursor skills, or any setup where you can point the model at markdown “how we ship” instructions. Same workflow, same guardrails, less context switching for humans.

If you are standardizing how your team uses AI on real repos, naming the workflow (PCR) and encoding it as a skill beats retyping the checklist every time.

#AIAgents #DeveloperTools #GitHub #PullRequests #DevOps #SoftwareEngineering #ClaudeAI #CursorAI #AIWorkflow #EngineeringProductivity #OpenSource #TechLeadership #CodingAssistant
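The three PCR stages map onto ordinary git and GitHub CLI (`gh`) commands. The sketch below is my reconstruction of the stage sequence as a plain data structure, not the actual skill definition; the branch names and commit message are placeholders.

```python
# Hypothetical reconstruction of the PCR stages as explicit command lists,
# using plain git plus GitHub's gh CLI. Not the actual skill definition.

def pcr_commands(branch, target="develop", message="chore: update"):
    """Return the Push -> Create PR -> Review command pipeline for `branch`."""
    if branch == target:
        # One of the "sensible stops": never run PCR directly on the target branch.
        raise ValueError(f"refusing to run PCR directly on '{target}'")
    return {
        "push": [
            ["git", "add", "-A"],
            ["git", "commit", "-m", message],
            ["git", "push", "-u", "origin", branch],
        ],
        "create_pr": [
            ["gh", "pr", "create", "--base", target, "--head", branch, "--fill"],
        ],
        "review": [
            ["gh", "pr", "review", "--approve"],
        ],
    }

stages = pcr_commands("feature/login", message="feat: add login form")
for name, commands in stages.items():
    print(name, commands)
```

An agent (or a thin wrapper around `subprocess.run`) would execute the stages in dict order, stopping the pipeline the moment any command fails.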
🚀 A Single CLAUDE.md File Just Hit #1 on GitHub Trending — 44K Stars in 7 Days

Most people try to fix AI coding assistants with new tools. But this repo solved it with one markdown file. No plugins. No setup. No dependencies. Just clear rules that stop LLMs from making the mistakes developers hate most. 👇

Karpathy pointed out common AI coding problems:
→ Making wrong assumptions silently
→ Overengineering simple tasks
→ Editing code nobody asked to change
→ Acting without clarifying goals

Someone turned those lessons into CLAUDE.md — a behavior guide for Claude Code.

4 Rules Inside the File

1. Think Before Coding
→ If requirements are unclear, ask questions
→ Don’t guess and run with one interpretation
→ Surface tradeoffs before coding

2. Simplicity First
→ Write the minimum code needed
→ Avoid unnecessary abstractions
→ If 200 lines can be 50, simplify it

3. Surgical Changes
→ Only modify what the task requires
→ Don’t refactor unrelated code
→ Don’t remove comments you don’t understand

4. Goal-Driven Execution
→ Turn vague requests into measurable outcomes
→ Example: “Add validation” = write failing tests, then fix them

Why It Went Viral

Because developers want AI that:
→ Writes cleaner code
→ Makes smaller PRs
→ Asks better questions
→ Stops guessing intent

One file. Immediate results. Drop it in your project root and Claude follows it from the first task.

Link to the repo 👉 https://lnkd.in/dDt-G_4e

AI won’t replace good engineers. But engineers who know how to guide AI will move faster than everyone else.

Save this for later. Repost ♻️ if you believe prompting is becoming a real engineering skill.

#AI #GitHub #SoftwareEngineering #Developers #Coding #Productivity #Tech
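A file encoding the four rules can be very short. The wording below is my own compressed sketch of the idea, not the trending repo's actual content:

```markdown
# CLAUDE.md

## Think before coding
- If requirements are ambiguous, ask before implementing.
- State assumptions and tradeoffs explicitly instead of picking one silently.

## Simplicity first
- Write the minimum code that solves the task; no speculative abstractions.

## Surgical changes
- Touch only what the task requires; never refactor or delete unrelated code.

## Goal-driven execution
- Restate vague requests as verifiable outcomes
  (e.g. "add validation" → write a failing test, then make it pass).
```

Claude Code reads a `CLAUDE.md` in the project root automatically, which is why a single file is enough to change its behavior from the first task.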
I spent 6 months drowning in AI skill repos, all promising to make AI assistants smarter. The problem? **I had no idea which ones actually worked.**

So I built **Human Skills** — an automated library where every skill is personally tested by a human before it's included.

**Why most skill libraries fail**
They look great on paper, but they drift. Most files are written once and never run again. When you invoke them, the AI often hallucinates or fails. My personal library was becoming a mess faster than I could manage it.

**How Human Skills fixes it**
It’s an automated system with three layers:
1. **Upstream Tracking:** Auto-pulls updates from open-source repos daily via simple YAML configs.
2. **Selective Forwarding:** You cherry-pick only the skills you've actually verified. Only "promoted" skills enter your library.
3. **Automated Git Sync:** Once synced, it auto-commits and pushes to your GitHub. Zero manual steps.

**The best part: Hot Reload**
The sync daemon watches its own YAML configs. Add a new repo or change the schedule, and it adapts instantly without a restart. It stays out of your way and just works.

**Why this matters for devs**
Personalization only works if you trust your toolkit. With Human Skills, I have:
- A single source of truth for verified skills.
- An automated pipeline keeping them fresh.
- Portable setups via `{REPO_ROOT}` placeholders — clone and go.

**The standard: if it hasn't been tested by a human, it doesn't belong here.**

I’m keeping the library tight — only real-world verified skills. You can fork it, point it at your repos, and build your own trusted toolkit in minutes.

**GitHub:** https://lnkd.in/gqcjuDQz

Dealing with AI tool overload? Let's compare notes in the comments! 👇

#ArtificialIntelligence #OpenSource #AITools #Automation #SoftwareDevelopment #DeveloperProductivity #Python #GitAutomation #AIAssistants #BuildInPublic
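To make the three layers concrete, here is a hypothetical upstream-tracking config. Every field name and repo URL below is invented for illustration; check the actual repo for its real schema.

```yaml
# Hypothetical YAML config sketching the three layers described above.
upstreams:
  - repo: https://github.com/example/agent-skills   # placeholder upstream
    schedule: daily                                 # layer 1: upstream tracking
    promote:                                        # layer 2: only human-verified
      - pdf-extraction                              #          skills are forwarded
      - git-release-notes
sync:
  target: "{REPO_ROOT}/skills"                      # portable via placeholder
  auto_commit: true                                 # layer 3: automated git sync
```

Because the daemon watches this file itself, editing `schedule` or adding an upstream takes effect without restarting anything — that is the hot-reload behavior.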
🛠️ AI CHEAT CODE #028 🛠️

CI/CD pipeline failing at 2am? Never panic again 🚨 Here's your AI-powered pipeline rescue workflow:

Step 1: Copy the FULL pipeline error log.
Step 2: Feed it to AI with this prompt: "This is my CI/CD failure log. Identify the root cause, the exact failing step, and give me the fix."
Step 3: Ask: "Show me the corrected YAML config."
Step 4: Paste into your .github/workflows or .gitlab-ci.yml. That's it. Pipeline fixed in under 5 minutes. ⏱️
Step 5: Bonus — ask AI: "What can I add to this pipeline to prevent this class of failure in the future?"

⚡ Pro Tip: Keep a "pipeline debug" prompt template saved. Works for GitHub Actions, GitLab CI, Jenkins, CircleCI — all of them.

Comment your CI/CD tool below 👇 — I'll share a specific prompt for it!

#AI #CICD #DevOps #GitHub #GitLab #CloudComputing #Coding #Automation
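For a sense of what Step 3's "corrected YAML config" looks like in practice, here is a GitHub Actions example. The failure and the fix are invented for illustration: an unpinned Node version plus a non-reproducible install, with a timeout added as the Step 5-style prevention.

```yaml
# Illustrative "corrected" workflow — the root cause here is hypothetical.
name: ci
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    timeout-minutes: 15        # prevention: fail fast instead of hanging at 2am
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"   # fix: pin the runtime the build was tested on
          cache: npm
      - run: npm ci            # fix: clean, lockfile-exact install (not `npm install`)
      - run: npm test
```

The same pattern (pin versions, make installs reproducible, bound job runtime) transfers almost verbatim to `.gitlab-ci.yml`.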
Fast-moving AI workflows are notoriously difficult to tame, especially when it comes to deployment. Most solutions promise scalability but deliver resource-intensive overhead, making it hard to balance speed and reliability. That's where mattpocock/skills comes in: a collection of agent skills that extend capabilities across planning, development, and tooling.

This project is more than just a set of tools; it's a practical answer to the complexity of LLM and agent workflows. By providing a directory of skills that help developers think through problems before writing code, mattpocock/skills addresses a critical pain point in the AI development process. What sets it apart is its focus on making agent behavior more reliable, not just more powerful. Its skills include:

- to-prd — Turn the current conversation context into a PRD and submit it as a GitHub issue. No interview — just synthesizes what you've already discussed.
- to-issues — Break any plan, spec, or PRD into independently-grabbable GitHub issues using vertical slices.
- grill-me — Get relentlessly interviewed about a plan or design until every branch of the decision tree is resolved.
- design-an-interface — Generate multiple radically different interface designs for a module using parallel sub-agents.

Built with Shell, mattpocock/skills is gaining traction fast: it added roughly 857 new stars in the current trending window, star momentum that usually indicates genuine developer word of mouth, and recent commits make it feel active instead of abandoned. The traction makes sense: a repository sitting at #2 on trending is usually solving a problem people can feel immediately.

Repo: https://lnkd.in/gH4Zzms2

#GitHub #OpenSource #GitHubTrending #LinkedInForDevelopers #Shell #Skills