A repo called "everything-claude-code" just hit 140K stars: one engineer's full Claude Code setup, open-sourced after 10 months of daily use. He won an Anthropic hackathon with it. Here's what's inside:

CLAUDE.md + Skills
↳ Persistent instructions Claude reads every session
↳ 156 reusable slash commands for workflows that repeat
↳ Code review standards, commit styles, naming conventions, all encoded

Hooks + Memory
↳ Shell commands that fire before/after every Claude action
↳ Auto-format, auto-lint, auto-log, without thinking about it
↳ Memory files so Claude knows who you are across sessions

MCP Servers
↳ Claude connected to Jira, Slack, GitHub, custom APIs
↳ Not just coding: the full development workflow in one place

Agent Teams
↳ 38 specialized sub-agents working in parallel
↳ Orchestrator decides. Workers execute. Quality gates review.
↳ Scheduled tasks running overnight. /loop for polling.

10 months of daily use, distilled into one install. Worth a look.
https://lnkd.in/gEJMzVEB

#ClaudeCode #AIEngineering #DeveloperTools #LLM
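For a sense of what one of those slash commands looks like on disk, here is a minimal sketch, assuming Claude Code's convention of Markdown files under .claude/commands/; the /review command name and the checklist are hypothetical, not taken from the repo:

```bash
# Hedged sketch: defining one reusable slash command for Claude Code.
# The command name and checklist are illustrative, not taken from the repo.
mkdir -p .claude/commands
cat > .claude/commands/review.md <<'EOF'
Review the current staged changes against our standards:
- Naming follows the conventions documented in CLAUDE.md
- Commit messages are scoped and use imperative subject lines
- New code paths come with tests
Report findings as a checklist, most severe first.
EOF
# In a Claude Code session, invoke it as /review
```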
everything-claude-code: An Open-Sourced Claude Code Setup at 140K Stars
More Relevant Posts
-
Manual PR reviews are a bottleneck. With Claude Code + GitHub Actions, you can:
– Review every PR automatically
– Detect bugs and missing tests
– Enforce standards across repos
Set it up once, then zero manual effort per PR. Massive leverage. This is how modern teams scale engineering.
Full video below 👇 https://lnkd.in/dgEsrA93
Website: www.systemdrd.com
#ClaudeCode #GitHubActions #CodeReview #AIEngineering
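Roughly what the core step of such a workflow can look like, as a hedged sketch: it assumes ANTHROPIC_API_KEY is available as a CI secret, the GitHub CLI is on the runner, and a PR_NUMBER variable supplied by the workflow; the prompt is illustrative, and the video's actual setup may rely on Anthropic's official GitHub action instead.

```bash
# Hedged sketch of the review step a GitHub Actions job could run per PR.
# Assumes ANTHROPIC_API_KEY is a CI secret, the GitHub CLI (gh) is available,
# and PR_NUMBER comes from the workflow; the prompt wording is illustrative.
npm install -g @anthropic-ai/claude-code     # Claude Code CLI
gh pr diff "$PR_NUMBER" > pr.diff            # pull the PR's diff
claude -p "Review this diff for bugs, missing tests, and standards violations.
Reply as a severity-ordered checklist.
$(cat pr.diff)" > review.md
gh pr comment "$PR_NUMBER" --body-file review.md   # post the review back
```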
-
Well, it's live! https://biomelab.dev/ I've been building this project to make my workday more productive. If you are into #agentic development workflows using git worktrees for parallel tasks, you probably want to take a look at this project, which helps me monitor the ongoing work. Spoiler alert: it uses Docker coding-agent #Sandboxes under the hood to add safety to your workflows 😉
-
3 Claude Code agents. 1 repo. Chaos. Builds breaking. Files overwriting each other. Commits colliding mid-run.

Turned out to be a coordination problem, not a code problem. The fix was one command: git worktree add. One repo. 3 separate working directories. Each agent works in isolation. Build failures from collisions dropped to zero.

Not the permanent architecture fix this deserves, but it unblocked me today. Now working on a Claude Code hook to detect parallel sessions and trigger worktree setup automatically at session start.

Git worktrees have existed for years; I just didn't need them before.
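For readers who haven't used worktrees, this is roughly what that one-command fix expands to; directory and branch names are illustrative:

```bash
# Hedged sketch: one isolated checkout (and branch) per agent.
# Directory and branch names are illustrative.
git worktree add ../agent-a -b agent-a   # working dir + branch for agent A
git worktree add ../agent-b -b agent-b   # working dir + branch for agent B
git worktree add ../agent-c -b agent-c   # working dir + branch for agent C
git worktree list                        # confirm the three isolated checkouts
# Run one Claude Code session in each directory; builds and commits stop colliding.
```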
-
Planning is more important than coding...

I run multiple Ralph Wiggum loops in parallel on the same project, each one on its own git worktree, merging changes back in. Today I literally had 5 Claude Code instances running in parallel in the same repo.

The catch? It only works if you give each agent serious clarity upfront. A few things I've picked up:
- Hours perfecting the plan = hours saved debugging confused agents
- Context management is everything. Agents lose focus fast when the window fills with noise
- Parallel sessions only scale with tight scope and clear exit criteria

The harness matters. But so does what you feed into it. Burn those tokens 😂

#AIEngineering #AgentAI #Agents #SoftwareEngineering
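The "merging changes back in" step, sketched under the assumption of one branch per loop; paths and branch names are hypothetical:

```bash
# Hedged sketch: fold each loop's worktree branch back into main.
# Paths and branch names are illustrative.
cd ~/code/project                 # the primary checkout (hypothetical path)
git checkout main
for branch in loop-1 loop-2 loop-3; do
  git merge --no-ff "$branch"     # one merge commit per parallel loop
done
git worktree remove ../project-loop-1   # retire a finished worktree...
git branch -d loop-1                    # ...and delete its branch once merged
```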
-
Most teams track test coverage. Fewer actually check it during code review.

Part of the reason is friction. To see whether a merge request adds covered code, you historically had to leave the MR, open Teamscale or some other tool, and find the relevant data. By the time you get there, you've context-switched out of the review.

With Teamscale 2026.3, GitLab merge requests can now display a Coverage of Changes badge directly in the platform. It shows the percentage of added or modified lines covered by tests, colored green or red based on the threshold you've configured in Teamscale. Enable it via the "Badge for Coverage of Changes" option in the GitLab connector settings.

The badge doesn't replace a thoughtful review, but it puts the signal where the review actually happens. If coverage drops below your threshold, reviewers see it immediately, without an extra tab.

Does your team currently check test coverage as part of code review, or does it happen separately?
-
The Claude Code Meetup runs every Wednesday at 1 PM Eastern inside the free Skool community. The format is live debugging, agent pattern review, and skill building with other developers who are actively shipping systems with Claude Code.

A typical session looks like this: someone brings a real problem from their codebase, the group works through it on screen, and everyone walks out with a pattern they can use the same day. Recent topics have included MCP server configuration for Xano backends, skill file architecture for multi-agent systems, and the tmux setup for running agents from a phone. The problems are real, the repos are real, and the solutions ship that week.

The meetup exists because the gap between knowing Claude Code exists and actually building production systems with it is not a documentation problem. It is a pattern problem. Developers need to see how other builders structure their skills, handle agent failures, and decide when to trust an agent with a long-running task. Reading docs gives you syntax; watching someone debug a broken agent loop live gives you judgment.

The meetup is free, no pitch, no upsell. It sits alongside the Claude Code Mastery course, which covers the same material in structured, self-paced form. Both live inside skool.com/snappy.

What agent pattern are you currently trying to figure out?
-
4% of all public GitHub commits are now written by Claude Code. 90% of Anthropic's own code is AI-written. But most engineers are still using it like a fancy autocomplete.

I put together the power user guide I wish existed when I started.
→ The 5 core systems that separate 10x usage from expensive autocomplete
→ CLAUDE.md has a ~150 instruction budget before compliance drops. Most people blow past it.
→ Hooks are deterministic (100% execution). CLAUDE.md is advisory (~80%). Know which to use when.
→ Session forking: pre-warm a master session with 40K tokens of context, then fork per feature
→ Git worktrees for parallel agents without race conditions
→ MCP Tool Search saves 95% of your context window
→ Multi-agent orchestration patterns (tmux, containers, phone-based remote control)
→ Search tool decision tree: rg (20ms) → Serena (100ms) → ast-grep (200ms) → grepai (500ms)

Sources: Anthropic official docs, awesome-claude-code, Trigger.dev, Builder.io, ykdojo/claude-code-tips, Cuttlesoft.

Save this for your next deep session. ↓

What's the one Claude Code trick that changed your workflow the most?

#ClaudeCode #AIEngineering #DevTools #AgenticCoding
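To make the last bullet concrete, here is a hedged sketch of the first two rungs of that ladder (Serena and grepai are named in the post but not sketched here); the flags reflect my reading of the ripgrep and ast-grep CLIs, and the pattern is illustrative:

```bash
# Hedged sketch of the cheap-to-expensive search ladder from the post.
# The quoted latencies are the post's claims, not measurements; patterns are illustrative.

# Rung 1: literal text search with ripgrep when you know the exact string.
rg -n "load_config" src/

# Rung 2: structural search with ast-grep when you need syntax-aware matches,
# e.g. every call to load_config regardless of formatting or argument count.
ast-grep --pattern 'load_config($$$ARGS)' --lang python src/
```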
-
If Claude feels wrong, you're probably the problem.

I've been using Claude Code daily for months now. One thing became obvious fast: when Claude gives you inconsistent results, it's almost always your setup. Not the model.

Here's what I changed that made the difference:

CLAUDE.md
Stop leaving it empty. Define your stack, architecture, and conventions. Claude reads it first every session. Treat it like onboarding docs for a new engineer.

CLAUDE.local.md
Keep your personal preferences out of the shared project file. Your workflow shouldn't break your teammates'.

mcp.json
Configure it once. GitHub, JIRA, Slack, databases. All connected, version-controlled. No more re-explaining your tooling every session.

.claude/settings.json
Hooks for validation, linting, and blocking unsafe operations. PreToolUse and PostToolUse guardrails that run automatically so you catch problems before they ship.

.claude/commands/
Slash commands for recurring workflows. One keystroke runs your entire review or deploy process. No more copy-pasting the same prompt.

.claude/skills/
Reusable workflows that load only when needed. Testing patterns, deploy checklists, API conventions. Auto-triggered by context or invoked with /skill-name.

.claude/agents/
Specialized sub-agents with isolated context. Code review, security, and docs. Each has its own scope instead of one overloaded conversation.

Your Claude Code setup is either working for your team or against it. Structure it once. Benefit from it every day after.

What would you add to this list?

#claude #claudeAI
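A minimal sketch of the settings.json piece, assuming the PreToolUse/PostToolUse hook schema described in Anthropic's docs; the matcher and formatter command are illustrative, not a definitive setup:

```bash
# Hedged sketch: a PostToolUse hook that re-formats files after Claude edits them.
# The matcher and command are illustrative; check the schema against the official
# Claude Code hooks documentation before relying on it.
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npx prettier --write . >/dev/null 2>&1 || true" }
        ]
      }
    ]
  }
}
EOF
```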
-
Git worktrees let you run multiple Claude Code agents on the same repo at the same time. Each worktree is an isolated checkout of your repository on a separate branch. You open a Claude Code session in each one, assign a different task to each agent, and they work in parallel without stepping on each other. No stashing, no context switching, no waiting for one task to finish before starting the next. The setup takes two minutes. The productivity shift is immediate. Link to detailed blog in the comments. Have you tried this? #claudecode #gitworktrees #agentic
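The two-minute setup, roughly; paths, branch names, and task assignments are hypothetical:

```bash
# Hedged sketch: two worktrees, one Claude Code session per task.
# Paths, branch names, and task assignments are illustrative.
git worktree add ../proj-auth -b feature/auth
git worktree add ../proj-billing -b feature/billing

# Terminal 1: the auth agent
cd ../proj-auth && claude

# Terminal 2: the billing agent
cd ../proj-billing && claude
```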
-
I told myself I'd write clean code this time. I lied. But at least repo-evolution-pipeline works... and it automates converting GitHub repos into mobile-ready implementations! 😂

You know that feeling when your boss says "just make it mobile-ready" and you're staring at 50 repos like 👁️👄👁️? Yeah, that was me. So I built a multi-agent pipeline that does the whole thing for you!

🤖 Multi-stage pipeline: discover → analyze → design → generate → verify → publish
🔧 Python 3.10+ | FastAPI | GitHub API | GitLab publishing
📊 Prometheus metrics because we love watching our agents sweat
🧪 Real verification: install, lint, type-check, test (because "it works on my machine" is not a QA strategy)

The best part? Targeted repair loops when verification fails. Which means my agents gaslight each other until the code is good. Very normal. Very sane. 😅

If you've ever had to convert more than 3 repos to mobile and didn't build a pipeline, we need to talk. 👇

🔗 https://lnkd.in/dVPG59Sx

#Python #MobileFirst #SoftwareEngineering #MultiAgent #GitHub #Automation #DevLife #CleanCode #AIEngineering #BuildInPublic
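The verify-then-repair idea in the abstract, as a hedged shell sketch; the project itself is Python/FastAPI, and both these commands and the use of the Claude Code CLI as the repair agent are my illustration, not the pipeline's actual implementation:

```bash
# Hedged sketch of a "verify, then targeted repair" loop for one generated repo.
# The real pipeline is Python; this shell version and the choice of the Claude
# Code CLI as the repair agent are illustrative, not the project's code.
set -u
verify() { pip install -e . && ruff check . && mypy . && pytest -q; }

for attempt in 1 2 3; do
  if verify > verify.log 2>&1; then
    echo "verification passed on attempt $attempt"
    exit 0
  fi
  # Targeted repair: hand only the failing output to an agent and let it patch.
  claude -p "Verification failed (attempt $attempt). Fix only the issues in this log:
$(cat verify.log)"
done

echo "still failing after 3 repair attempts" >&2
exit 1
```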
-
The power isn't the 156 commands; it's encoding institutional knowledge into executable rules. Most teams scatter this across Slack threads and onboarding docs nobody reads. He made deviation impossible.