I've been using Claude Code for close to a year now. Watched it go from a promising CLI experiment to something I genuinely lean on daily. The jump in the last few months has been significant: agent teams shipped in February, the gstack ecosystem exploded in March, and MCP integrations are everywhere now. Someone asked me to break down what actually matters across all of it. So I did.

A few things I still see people get wrong:

CLAUDE.md gets skipped. Every session starts from scratch: Claude asks what stack you're using, and you re-explain the same context. One committed Markdown file fixes that for the whole team. It's the highest-ROI thing you can do in the first 10 minutes of any new project.

Model selection gets ignored. Opus on everything is slow and expensive. Haiku on anything complex is frustrating. The tiers exist for a reason, and using them deliberately makes a real difference.

Hooks get underestimated. There's a difference between telling Claude "always run Prettier" and a PostToolUse hook that actually runs it. One is a suggestion. The other executes.

Subagents changed how I think about context. Instead of one session getting bloated across a long task, I'll spawn a dedicated agent to grep through 80 files while my main context stays clean. Agent Teams (the February update) pushed this further: specialists can now talk to each other directly rather than routing everything through you.

On gstack: Garry Tan open-sourced his personal Claude Code setup in March. 50K GitHub stars in 16 days. Worth looking at not just for the skills themselves (/cso, /autoplan, /ship are genuinely useful) but for the pattern it encodes: explicit roles per phase rather than one generalist session doing everything at once.

Full breakdown with working examples at the link below: models, CLAUDE.md, hooks, subagents, agent teams, MCP, and gstack.

What's something about Claude Code that took you longer to figure out than it should have? Curious what the recurring blind spots are.
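The hooks point is easiest to see in config. Here's a sketch of a PostToolUse hook in `.claude/settings.json` that formats a file right after Claude edits it; the matcher and command below are illustrative (the hook receives tool details as JSON on stdin), so check the current hooks documentation for the exact schema before copying:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path // empty' | xargs -r npx prettier --write"
          }
        ]
      }
    ]
  }
}
```

Unlike an instruction in CLAUDE.md, this runs unconditionally on every matching tool call: the model can't forget it.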
#ClaudeCode #AIEngineering #DeveloperTools
Claude Code Tips for Efficient AI Engineering
More Relevant Posts
-
🐾 You can take the /buddy out of the code, but you can't take the code out of the community. Let me tell you a story about a little terminal pet that broke the internet for a day.

Anthropic's Claude Code quietly shipped a hidden feature in v2.1.89: a tiny ASCII pet called Buddy that lived right in your terminal. It was random, rare-tier, named itself, and apparently had stats like "94 Debugging." Developers LOVED it.

Then Anthropic removed it in v2.1.97. Probably thought: "April Fools' joke is over, time to clean up."

The community had other plans. In less than 24 hours after the removal:
→ Someone shipped buddy-pick, an NPX tool to bypass the RNG and pick your own species
→ Someone built tpet, a standalone open-source CLI that brings the ASCII pet to ANY terminal, not just Claude's
→ Devs were literally reverse-engineering v2.1.96 binaries just to keep their Shiny Axolotls alive
→ A GitHub issue called "Bring Back Buddy" hit 700+ reactions in under a day

One developer wrote: "I only had my buddy (Ogler, Rare-tier duck, 94 Debugging) for one day before it disappeared. In that single day it caught 36 bugs across two production codebases." Another: "It made staring into the soulless terminal window bearable."

This is what happens when a product decision clashes with genuine user love. Buddy wasn't just a gimmick. It was a tiny moment of delight in an otherwise cold, blinking-cursor world. And when it disappeared, the community didn't just complain. They rebuilt it themselves. Overnight.

That's the power of a good product detail. Even a small one.

RIP Buddy. Gone in 8 versions, mourned in 700 GitHub reactions. Somewhere out there, a rare-tier duck is still judging your code. 🦆

#ClaudeCode #Buddy #BringBackBuddy #OpenSource #DeveloperTools #DevExperience
-
GitHub Copilot is a pair programmer that suggests code snippets and full functions in real time inside your editor. It reads the surrounding code and comments to autocomplete patterns, draft unit tests, scaffold endpoints, and handle repetitive glue work. Best for developers who want to move faster and cut boilerplate without breaking flow. Use it to spike features, explore unfamiliar APIs, and standardize routine code. Guide it with clear function names and comments, review suggestions like any pull request, and keep security checks in place for critical paths. #GitHubCopilot #PairProgramming #DevTools
-
Claude + 1 focused session = a script that just made my day. 🤌

GitHub doesn't let you cherry-pick commits natively. So I built a small CLI tool that does it for me. No GitHub Actions. No manual branch juggling. Just:
→ Run `npm run dev:scripts`
→ Pick your script from a menu
→ Paste a commit SHA
→ Script detects the origin branch, confirms, and handles the rest

Small `.sh` scripts. Zero fluff. 10 minutes of daily pain, gone.

Sometimes the most satisfying engineering isn't the big system. It's the small thing that just works.

What's the most useful tiny tool you've built recently? 👇

#DeveloperTools #ShellScripting #DevEx #Frontend #BuildInPublic
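The core of such a script is small. This is a hypothetical sketch of the pick-by-SHA step (the function name and flow are my guesses at the post's approach, not its actual code); it finds a branch that contains the commit, then cherry-picks it onto the current branch:

```shell
#!/usr/bin/env bash
# Sketch of a cherry-pick helper: given a commit SHA, detect where it came
# from and apply it to the current branch. Names here are illustrative.
set -euo pipefail

pick_commit() {
  local sha="$1"
  # `git branch --contains` lists every branch that has the SHA in its
  # history; take the first as the "origin branch" for display.
  local branch
  branch=$(git branch --contains "$sha" --format='%(refname:short)' | head -n 1)
  echo "Cherry-picking $sha (origin branch: $branch)"
  git cherry-pick "$sha"
}
```

A real version would add the menu and a confirmation prompt, but the git plumbing stays this small.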
-
"i have about 15 actively developed repos under cyberwitchery, most of them rust libraries that need the usual routine maintenance: dep bumps, clippy warnings, typos, missing tests, the occasional half-implemented feature someone (me) left behind. i wanted something that would chip away at that backlog steadily in the background, without me having to remember or schedule it." This is from Veit Heller's writeup on his own Heartbeat project (check the link). I would've been dismissive like a lot of the software engineers I know (and respect) if it wasn't for Veit, Mitchell Hashimoto, Thorsten Ball and others who find legit ways to use LLMs for coding. This is not your marketing manager vibe coding a website for themselves (also legit way). These are hardcore super devs experimenting and figuring things out and finding out what works for them and what doesn't. Without hype and without trying to sell anyone any vaporware. #softwaredevelopment #itisjustanothertool #yallneedtochilloutonthehypemode #eatarepasandsmile https://lnkd.in/dEdf5AKf
-
🚀 GitHub just made code reviews a lot smarter with stacked PRs (gh stack)

If you've ever opened a massive PR and thought "who is going to review this?", you're not alone. That's exactly the problem GitHub is solving with stacked pull requests.

💡 Instead of one huge PR, you break your work into small, logical layers, each as its own PR, stacked on top of each other.

👉 Example:
PR #1 → Auth logic
PR #2 → API endpoints (depends on #1)
PR #3 → Frontend (depends on #2)

Each PR is:
✔ Easier to review
✔ Faster to merge
✔ Less prone to conflicts

And the best part? 🔧 GitHub now supports this natively with:
• A stack-aware UI (navigate layers easily)
• Cascading rebases with one click
• CLI support via `gh stack`
• The ability to merge the entire stack together

No more messy rebasing or waiting for one PR to merge before starting the next.

🔥 Why this matters:
• Improves developer velocity
• Makes code reviews actually meaningful
• Reduces "PR fatigue" in teams

This feels like a big step toward how modern teams should be shipping code.

🔗 gh stack: https://lnkd.in/dRvP8Cny

#GitHub #SoftwareEngineering #DevWorkflow #CodeReview #Developers #Tech
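Under the hood a stack is just branches based on one another, which you can already build with plain git. A minimal sketch (branch names are my own; the stacked-PR tooling automates this bookkeeping for you):

```shell
#!/usr/bin/env bash
# Build a three-layer stack, then cascade a rebase after the bottom
# layer changes. Run from a repo with at least one commit.
set -euo pipefail

make_stack() {
  git checkout -qb auth             # layer 1: auth logic (PR #1)
  echo "login" > auth.txt
  git add auth.txt && git commit -qm "auth: add login"

  git checkout -qb api              # layer 2: API endpoints, stacked on auth (PR #2)
  echo "endpoints" > api.txt
  git add api.txt && git commit -qm "api: add endpoints"

  git checkout -qb frontend         # layer 3: frontend, stacked on api (PR #3)
  echo "ui" > frontend.txt
  git add frontend.txt && git commit -qm "frontend: wire up UI"
}

# When a lower layer changes, rebase each layer above it in order.
# This is the manual version of a one-click "cascading rebase".
cascade() {
  git rebase -q auth api
  git rebase -q api frontend
}
```

Each layer's diff against the one below it stays small, which is exactly what makes the per-PR review tractable.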
-
I tagged @claude on a GitHub issue at 11 PM. By morning, the bug was fixed, tests were written, and a PR was ready for review. I didn't touch my keyboard. 🤯

That's not the future. That's what Claude Code can do right now. I just completed Claude Code in Action by Anthropic, and here are the 8 concepts that genuinely changed how I build 👇

⚡ Custom Commands: reusable slash commands that run complex instructions in one shot.
🪝 Hooks: auto-trigger actions before/after tasks, like linting and logging.
🔌 MCP Servers: connect Claude to external tools and APIs via the Model Context Protocol.
🤖 Agentic Loops: Claude plans, acts, observes, and iterates; multi-step work with minimal hand-holding.
🛠️ Tool Use: file reads, shell commands, web search; Claude picks the right tool.
📁 CLAUDE.md: teach Claude your stack and conventions before it writes a single line.
🐛 @claude on GitHub: tag @claude on any issue or PR; it reads the thread, fixes the bug, and opens a PR. Fully autonomous.
🧪 Auto Test Writing: Claude reads your code and writes full test suites automatically, including unit tests and edge cases.

🏆 Real result 1: used hooks + MCP to auto-run tests and push PR summaries after every Claude edit. Cut review time in half.
🏆 Real result 2: tagged @claude on a legacy bug; it traced the root cause across 4 files and wrote regression tests I'd never have thought to add.

These aren't just course notes; every single one is tested on actual projects. I've compiled everything into a beginner's guide PDF: clean notes, key concepts, real examples. Drop a 💬 below and I'll share it with you!

And tell me: which of these 8 features would you use first? 👇

#ClaudeCode #Anthropic #MCPServers #AIAgents #GitHubCopilot #LearningInPublic #DeveloperTools #AIEngineering
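The custom commands in that list are just Markdown files. A sketch of a hypothetical `/fix-types` command saved as `.claude/commands/fix-types.md` (the command name and prompt body are my own illustration; `$ARGUMENTS` is the placeholder for anything typed after the command):

```markdown
Run the TypeScript compiler and read every type error it reports.
Fix the errors one file at a time, and after each fix re-run the
compiler to confirm the error count went down. Do not change any
runtime behavior; types only.

Extra constraints from the user: $ARGUMENTS
```

Typing `/fix-types skip the test files` in a session would then expand into that prompt with the trailing text substituted in.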
-
Elevating Code Review: GitHub’s Breakthrough in Diff Rendering Performance At AllSafeUs Research Labs, we constantly monitor advancements in developer tooling, recognizing their profound impact on security, productivity, and overall software quality. A recent announcement from GitHub, titled "The uphill climb of making diff lines performant," caught our attention, highlighting a crucial area often overlooked: the fundamental performance of code review tools. This initiative underscores a significant step towards optimizing the developer experience by tackling the intrinsic complexities of rendering code differences (diffs)....
-
Like so many others, I'd already landed on a pattern: one model for development support, a different model family for code review. Just because the reviews were genuinely sharper. No grand theory, just vibes and better results. Turns out GitHub had the same instinct and decided to build it into the tool.

Rubber Duck is a new experimental feature in Copilot CLI that pairs your primary coding agent with a reviewer from a completely different model family. For example, a Claude model orchestrates and a GPT model critiques. And crucially, it does this before anything gets built, not after you're already committed to a direction.

Here's where it gets a bit uncomfortable: Sonnet paired with Rubber Duck closed 74.7% of the performance gap between Sonnet and Opus on SWE-Bench Pro. Opus on a budget, basically. Great news, obviously, but if a second opinion from a different model family moves the needle that much, it's worth asking what that means for how we should be architecting agentic pipelines in the first place.

Because the failure mode here isn't hallucination. It's compounding confidence: one assumption nobody questioned at step 2 quietly becoming a structural problem by step 47. Rubber Duck just asks the awkward questions early, before the damage is baked in. Essentially a code reviewer who hasn't met you yet and has no reason to be nice. It turns out the duck needed a nemesis. 🐥🔪

I'm curious whether anyone else stumbled onto patterns like this before the tools caught up? Drop them below, GitHub's clearly taking notes. 📋

Available now via /experimental in Copilot CLI.

#GitHubCopilot #AIEngineering #AgenticAI
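The propose-then-critique loop itself is simple enough to sketch. In this toy version the two model calls are stubbed shell functions (real use would call two different providers' APIs); the shape to notice is that the critique happens before anything is built, and feedback loops back into the next proposal:

```shell
#!/usr/bin/env bash
# Toy cross-family review loop. propose/critique are stand-ins for calls
# to two different model families; the loop structure is the point.
set -euo pipefail

propose() {  # $1 = task, $2 = prior feedback (may be empty)
  if [ -n "${2:-}" ]; then
    echo "plan for $1 (revised: add input validation)"
  else
    echo "plan for $1"
  fi
}

critique() {  # prints a concern, or prints nothing to approve
  case "$1" in
    *validation*) ;;  # approve: the concern was addressed
    *) echo "no input validation" ;;
  esac
}

plan_with_reviewer() {
  local task="$1" plan feedback=""
  for _ in 1 2 3; do  # cap the rounds so disagreement can't loop forever
    plan=$(propose "$task" "$feedback")
    feedback=$(critique "$plan")
    [ -z "$feedback" ] && { echo "$plan"; return; }
  done
  echo "$plan"  # best effort after the round cap
}
```

The round cap matters: without it, two models that never converge would burn tokens indefinitely.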
-
6 months ago our team had a problem. We'd look at a commit from 3 weeks prior and have zero context. The code made sense. The change didn't. We'd ping the person who wrote it; half the time even they couldn't remember why.

We looked for a tool that captured the reasoning behind code changes. Nothing existed. So we built it.

That became WhyLog. It sits inside your Git workflow and captures the why behind every change: decisions, tradeoffs, alternatives considered. Context that would otherwise vanish forever.

Your codebase already has a memory. WhyLog gives it a brain.

Launching this week. Follow along.
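WhyLog's internals aren't public, but Git already ships a primitive for attaching reasoning to a commit without rewriting history: git notes. A sketch of the idea (the `why:` format and function names are my own):

```shell
#!/usr/bin/env bash
# Attach and retrieve "why" context for commits using git notes, which
# store text in refs/notes/commits keyed by the commit SHA.
set -euo pipefail

why() {  # record the reasoning behind a commit: why "$sha" "reason"
  local sha="$1" reason="$2"
  git notes add -f -m "why: $reason" "$sha"
}

why_of() {  # read it back when a weeks-old diff stops making sense
  git notes show "$1"
}
```

Notes don't change commit SHAs, so they can be added long after the fact; a tool like WhyLog would presumably layer capture prompts and sync on top of something like this.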
-
Claude Code context hygiene.

If you've been using Claude Code like I have, you may have noticed your token consumption accelerating significantly over the past few weeks. I'm putting together a short series of posts on practical ways to reduce token usage without compromising output quality. Small optimizations, applied consistently, add up to meaningful savings.

To kick things off, I'll share one of the simplest yet most overlooked features: the /rewind command.

There have been plenty of times I've let Claude Code wander a bit too far down a rabbit hole, only to find that a carefully structured component has suddenly become… not what I wanted. Rather than explaining a fix and continuing, use /rewind to remove failed prompts and their resulting changes completely.

When a prompt goes wrong, the broken exchange stays in context. Every subsequent turn loads that failed attempt again, compounding token consumption silently, turn after turn. /rewind clears it at the root. Clean context, leaner sessions, sharper responses.

Claude Code's checkpoint system automatically saves your code state before each change, and you can instantly rewind to previous versions by pressing Esc twice or using the /rewind command. The rewind menu shows a scrollable list of each prompt from the session. Pick a point, then choose your action: restore code and conversation, restore conversation only, restore code only, or summarise from there to compress context window usage.

Think of it this way: Git is your permanent history. /rewind is your in-session undo.

#ClaudeCode #AITools #DeveloperExperience #FrontendDev #WebDevelopment #DevTools
-
https://claude.ai/public/artifacts/f08725db-c1ed-4dd3-909d-19373af3cc82 - Link with examples