Built a small agent to stop burning tokens in Claude Code (and Cursor, Copilot, Codex). It's called lean-dev: one command sets up smart context management, auto-generates a .claudeignore, tightens your CLAUDE.md, and switches models by task automatically.

npx lean-dev init

That's it. No config, no setup friction.

Still early days. If you try it and find bugs, open a GitHub issue. PRs and ideas are very welcome too.

https://lnkd.in/gQZwqVuz

#ClaudeCode #AI #DeveloperTools #OpenSource
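For context, a .claudeignore works like a .gitignore for the agent's context window. A purely hypothetical example of what a generated file might contain (lean-dev produces its own, tailored to your repo):

```
# Hypothetical .claudeignore (illustrative only; lean-dev generates
# its own per repo). Keeps bulky, low-signal paths out of the
# model's context so tokens go to code that matters.
node_modules/
dist/
build/
coverage/
*.min.js
*.lock
```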
-
Just shipped a major update to goodai-base, an open-source library of 48 reusable AI agent skills. Three new domains are live:

🔹 gproject: a 7-phase documentation pipeline (Discovery → Roadmap). It drives the full flow with human gates at critical decision points.
🔹 autodoc: fully autonomous reverse-engineering. Parallel agents scan your codebase and synthesize system-level docs with zero human oversight.
🔹 review: 12 specialized reviewers (Security, Architecture, High-load, etc.) that replace generic prompts and auto-detect scope from your diffs.

All skills use a unified severity system and work seamlessly with Claude Code, Cursor, Zed, and OpenCode.

👉 https://lnkd.in/d6RU5ev9

#AI #OpenSource #SoftwareEngineering #AICoding #Productivity
-
At Fountane, we build products fast. That pressure exposed a real problem with AI coding agents: they'd confidently write code for a codebase they barely understood. No warnings, no caveats, just wrong decisions that looked right until they broke something.

So I built a fix: a skill you drop into Cursor, Claude Code, or any AI tool that reads markdown. Before your agent writes a single line, it scores itself:

- How well does it understand your codebase?
- What can it build autonomously right now?
- What gaps exist, and what closes them?

The real unlock wasn't better prompts. It was knowing the agent's confidence level before giving it work. A 60% understanding score means you're going to spend more time reviewing than building. A 90% score means you can actually delegate.

We now run this before any major feature work. It's changed how we structure context, how we onboard agents to new repos, and how we catch blind spots early.

Open source. Tool-agnostic. One command to install.

If you enjoy thoughtful conversations with people building real products, this could be for you. Apply for an invite → https://lnkd.in/gZdbqS4J

Link: https://lnkd.in/dB5Cb9Wp

#ProductEngineering #AgenticAI #BuildingInPublic
-
I had a problem tracking my coding agents when they work on long-horizon tasks: it's basically a black box. You see the prompt go in and code come out, but what's actually happening underneath? How many agent skills got invoked? Which MCPs were called? How does token usage look across different agents on the same project?

So I built tokentelemetry.com, an open-source observability project to track what's going on with your coding agents (4 days of vibe coding + 416.5 million tokens of Claude Code + Gemini CLI) 🚀

It's a 100% local observability dashboard for your coding agents. No signup. No cloud. Your logs never leave your machine.

Here's what it gives you:
🗂 Real-time token usage, cost estimates & session traces
🔴 Full reasoning + tool-call waterfall: see exactly what your agent did and why
📊 Per-project insights: heatmaps, model leaderboards, agent distribution, MCPs, agent skills
⚡ One-command install; the browser opens automatically

Supports 9 agents out of the box: Claude Code, Gemini CLI, Codex, Cursor, GitHub Copilot, Qwen, OpenCode, Vibe & Antigravity. Built with FastAPI + Next.js. MIT open source.

🌐 https://lnkd.in/gZM_PR_D

The era of flying blind with AI agents is over. Now you can actually observe them.

Want to contribute? Fork it on GitHub, PRs are welcome! ⭐ https://lnkd.in/ggGecf9U

#AIAgents #DeveloperTools #ClaudeCode #GeminiCLI #Observability #OpenSource #GenerativeAI #LLM #BuildInPublic #codingagents #vibecoding #mcps #agentskills #commands
-
I'm not someone who usually announces what I'm working on, but I've noticed that staying quiet is sometimes perceived negatively. It also seems to be the norm now to share what you're exploring and learning, so here's a start.

1. GitHub Codex Integration

I provided a prompt to Codex via GitHub, and it generated an entire codebase autonomously.

Observations:
- The code appeared complete at first glance.
- It followed the Page Object Model (POM) design pattern.
- The structure was well-organized, including hooks, pages, runners, steps, and utilities.
- The DriverManager implementation was thoughtful:
  - Included conditions for headless vs. headed execution
  - Handled browser window sizing
  - Disabled notifications
- Proper driver teardown (quit) was implemented.

However, there were notable gaps:
- Base URL and login credentials were hardcoded
- No environment configuration (e.g., .env or config abstraction)
- Assertions were missing

Key learnings:
- A well-crafted, holistic prompt yields significantly better results than a minimal (zero-shot) prompt.
- Blind trust in AI-generated code can be risky:
  - It may work fine for small projects
  - As complexity grows, issues can compound
  - Debugging becomes harder because the underlying logic wasn't fully authored or internalized
  - This can lead to over-reliance on AI even for fixes, without truly understanding the changes being made.
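The hardcoded base URL and credentials gap is cheap to close with an environment-config layer. A minimal sketch (in Go for illustration; the post doesn't say what language the generated project used, and the variable names BASE_URL, TEST_USER, and TEST_PASSWORD are assumptions):

```go
package main

import (
	"fmt"
	"os"
)

// Config holds settings that should never be hardcoded in test code.
type Config struct {
	BaseURL  string
	User     string
	Password string
}

// loadConfig reads from environment variables (populated by a .env
// loader locally or by CI secrets), falling back to a safe default
// only for the non-secret base URL.
func loadConfig() Config {
	base := os.Getenv("BASE_URL")
	if base == "" {
		base = "http://localhost:8080" // local default; override in CI
	}
	return Config{
		BaseURL:  base,
		User:     os.Getenv("TEST_USER"),     // no fallback: secrets must
		Password: os.Getenv("TEST_PASSWORD"), // always come from the env
	}
}

func main() {
	cfg := loadConfig()
	fmt.Println(cfg.BaseURL)
}
```

The same pattern works in any language; the point is that swapping environments (local, staging, CI) then requires zero code changes.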
-
Since the latest release of Docker, Inc sandboxes, it's possible to fully customize a sandbox with kits. A kit is a spec.yml file describing commands to be executed in the sandbox, network policies to apply by default, and a secrets-injection mechanism, among many other cool things!

We opened https://lnkd.in/ecU7DEPx to create a vibrant ecosystem of kits that enrich the experience of using a sandbox. Please give it a try and share your feedback; we are more than happy to improve the experience of using them.

#ai #devex #codingAgents #sandboxes
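To make the idea concrete, here is a hypothetical sketch of what a kit's spec.yml could look like. The field names below are illustrative only, not the official schema; see the linked repo for real examples:

```yaml
# Hypothetical kit spec.yml (field names illustrative, not the
# official schema). Covers the three things the post mentions:
# setup commands, default network policy, and secrets injection.
name: my-dev-kit
setup:
  - apt-get update && apt-get install -y ripgrep
network:
  allow:
    - registry.npmjs.org
secrets:
  - name: GITHUB_TOKEN
    env: GITHUB_TOKEN
```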
-
Everyone is excited that AI can ship code faster. Nobody talks enough about what breaks next: communication.

The code gets done. But handoffs are still long, technical, and painful to read. So teams lose time on the same question: "cool… but what actually changed?"

That gap is exactly why I built **Layman**.

Layman takes raw AI coding output and rewrites it into plain-English updates that people can understand in seconds: not just engineers, but founders, PMs, support, and clients too. And, like Caveman, it cuts token usage by 75%.

Less confusion. Less back-and-forth. Faster decisions after every task.

If you're using Claude Code, Codex, Cursor, Windsurf, or Gemini, this will immediately improve how your team works with AI-generated changes.

👉 README: https://lnkd.in/eJ6HseMD
-
Okay, real talk: I thought Claude Code was just a fancier Copilot. Then I actually used it.

This thing doesn't sit around waiting for instructions like an intern on their first day. It moves. Need it to dig through your codebase, run terminal commands, and edit files across your whole project at once? Done. You describe the goal; it maps the route. You're the GPS destination, not the driver.

MCP servers are where your jaw drops a little. Plug in external tools, browsers, databases, and APIs, and Claude Code picks them up and uses them like it's always had them. It's not "AI plus tools bolted on." It's AI that actually has a toolbox.

GitHub connectors mean it's not hiding in a tab somewhere while your real work happens elsewhere. It's in the PR. It's in the review. It's part of how the team ships, not a side quest.

And then there are hooks, which honestly should be talked about way more. Imagine being able to whisper to Claude Code before it does anything: "Check this," "Always do that after," or "Never touch this file." Enforce standards. Trigger tests. Build guardrails. It's your workflow, your rules. Claude Code just follows them.

Four things: tools, MCP servers, connectors, hooks. And suddenly you're not just using AI to code faster; you're using it to work smarter. There's a difference. A big one. 🙌

#ClaudeCode #Anthropic #AI #SoftwareDev #DevTools #Automation
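For the curious, hooks are configured in Claude Code's settings file. A minimal sketch (the script path is hypothetical, and the exact schema may differ across versions, so check the official hooks docs): a PreToolUse hook can block edits to protected files, and a PostToolUse hook can run tests after every change.

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "./scripts/block-protected-files.sh" }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npm test" }
        ]
      }
    ]
  }
}
```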
-
Learn Claude Code properly, not randomly. 1 playlist. 13 videos. Full system.

Most people open Claude Code, try a few prompts, then quit. Not because it's hard, but because there is no structure. This fixes that.

Here's the actual roadmap from the playlist:
1. Introduction and setup
2. System setup on your machine
3. Slash commands
4. Making real code changes
5. Context window management
6. CLAUDE.md file
7. Spec-driven development
8. Plan mode and thinking
9. Custom slash commands
10. Skills
11. Subagents
12. GitHub workflows
13. Final usage patterns

This is not random learning. This is how you move from trying Claude to actually using it.

What you will actually learn:
→ How to run Claude Code inside your system
→ How to control context instead of losing it
→ How to structure work using plan mode
→ How to use MCP, skills, and subagents properly
→ How to build real AI agents, not prompt chains

✦ This is layered learning. Each video builds on the previous one.

If you want the full playlist:
1. Like this post
2. Comment "Learn Claude"
3. Connect with me, and I will send it to your DMs
-
If your team is still merging PRs without AI in the loop, you're paying for hours you don't need to spend.

Manual reviews. Bugs caught in production instead of pre-merge. Test cases written by hand at 11pm before a release. Every one of those is solvable in 2026.

Here's the AI-enhanced GitHub pipeline I now recommend to every team 👇
1️⃣ Code pushed → GitHub
2️⃣ CI/CD triggered → GitHub Actions
3️⃣ AI reviews the PR → GitHub Copilot / CodeRabbit
4️⃣ AI suggests improvements → Claude / OpenAI
5️⃣ Tests auto-generated → Playwright / Testgen
6️⃣ Deploy if approved → Docker + GitHub Actions + Render/AWS

AI at every step. Quality at every commit.

What teams are seeing after the switch:
✅ 50% faster deployments
✅ 70% fewer bugs in production
✅ 60% less manual effort
✅ Cleaner, more maintainable code

The takeaway: you don't need a bigger team. You need a smarter pipeline. Let AI handle reviews, fixes, and tests, and let your engineers focus on what only they can build.

Build smarter. Ship faster. That's the whole game now.

🔔 Follow Umesh Kalia for more on AI, GitHub, and modern dev workflows.

Which step would you automate first? Drop it in the comments 👇

#AI #GitHub #DevOps #Automation #SoftwareEngineering #CICD #DeveloperProductivity #GitHubActions #AITools
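The CI side of this pipeline can be sketched in one workflow file. A minimal, hypothetical example (job names, commands, and image tags are illustrative; AI reviewers like CodeRabbit and Copilot code review are usually enabled as GitHub Apps on the repo rather than as workflow steps):

```yaml
# .github/workflows/ai-pr-pipeline.yml (illustrative sketch only)
name: ai-pr-pipeline
on:
  pull_request:          # step 2: CI triggered on every PR

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # step 5: run the test suite, AI-generated tests included
      - run: npm ci && npx playwright test

  # steps 3-4 (Copilot / CodeRabbit review, AI suggestions) run as
  # GitHub Apps installed on the repo, commenting directly on the PR.

  build:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # step 6: build the deployable image once tests pass
      - run: docker build -t myapp:${{ github.sha }} .
```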
-
Your AI assistant is writing bad Go. And it's not the AI's fault. It's yours, because you never told it the rules.

I spent a long time re-typing the same corrections into Cursor:
"Use errors.Is, not == comparison."
"Pass context as the first arg."
"Don't ignore that error."
...and many more. Same prompts. Same fixes. I had become a human prompt cache, paying the tax one keystroke at a time.

Then I found this repo: github.com/mhmtszr/go-guidelines

A curated set of Go best practices (concurrency, error handling, performance, testing, common pitfalls) structured exactly the way an AI agent needs to consume them.

Drop it into your project as a SKILL.md. Commit it to the repo. Done. Every teammate who opens the project in Cursor now gets the same AI context automatically. No onboarding doc. No "btw remember to tell it…" Slack messages.

Before: re-teach the agent every chat. Watch it suggest the wrong pattern. Fix it in review. Repeat forever.

After: write it once. The AI starts where you would. Code reviews stop being about style violations and start being about actual logic.

If you're shipping Go with AI tools and haven't given your agent a guideline file yet, fix that today. 10 minutes to set up. Pays off forever.

🔗 https://lnkd.in/gYuNg3FX

#golang #ai #cursor #softwareengineering #developerproductivity #backend