Your AI code reviews are burning tokens they don't need to. Every time Claude reviews your code, it re-reads the entire codebase. 200 files. 150,000 tokens. For a change that touched 8 files. That's not smart. That's expensive.

code-review-graph fixes this. It's an open-source tool that builds a persistent, incremental knowledge graph of your codebase using Tree-sitter and SQLite. Instead of dumping your entire repo into Claude's context, it sends only the changed files plus every file impacted by those changes. The result? 5 to 10x fewer tokens per code review.

Before: 200 files scanned, ~150k tokens used.
After: 8 changed + 12 impacted files, ~25k tokens used.

Here's what makes it practical for real engineering teams:
- Works natively with Claude Code via MCP (Model Context Protocol). No extra setup, no new workflow to learn.
- Updates incrementally. After the first build (~10s for 500 files), subsequent updates take under 2 seconds. Only re-parses what changed.
- Understands blast radius. It traces dependency chains so Claude knows not just what changed, but what else that change could break.
- Supports 12+ languages out of the box: Python, TypeScript, JavaScript, Go, Rust, Java, C#, Kotlin, Swift, Ruby, PHP, and C/C++.
- Needs no external database. SQLite is all it takes.

The architecture is clean: Tree-sitter parses your code into an AST, a SQLite + NetworkX graph stores the relationships, git diff drives incremental updates, and 8 MCP tools expose everything to Claude Code.

Three review workflows ship with it:
- /code-review-graph:build-graph
- /code-review-graph:review-delta
- /code-review-graph:review-pr

Whether you're a junior engineer just getting into AI-assisted development or a senior architect thinking about LLM cost optimization at scale, this tool addresses a real problem: context window efficiency. AI code review should be precise, not brute-force.

Check it out: https://lnkd.in/giHvG8pR

#AIEngineering #ClaudeCode #LLM #TokenOptimization #CodeReview #OpenSource #DeveloperTools #SoftwareEngineering #MCP #GenAI
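For illustration, here is a minimal Python sketch of the "blast radius" idea, assuming a NetworkX graph whose edges point from importer to imported. The graph contents and function names are hypothetical; this is not code-review-graph's actual implementation.

```python
# Illustrative sketch only -- not code-review-graph's actual code.
# Assumes a NetworkX DiGraph where an edge A -> B means "A imports B".
import networkx as nx

def impacted_files(graph: nx.DiGraph, changed: set[str]) -> set[str]:
    """Return every file that (transitively) depends on a changed file."""
    impacted = set()
    for f in changed:
        if f in graph:
            # ancestors() walks reverse edges: everything that imports `f`,
            # directly or through a chain of imports.
            impacted |= nx.ancestors(graph, f)
    return impacted - changed

# Only changed + impacted files would go into the review context.
g = nx.DiGraph()
g.add_edges_from([("api.py", "auth.py"), ("auth.py", "db.py")])
print(impacted_files(g, {"db.py"}))  # {'api.py', 'auth.py'}
```

Walking reverse edges like this is what turns "8 changed files" into "8 changed + 12 impacted" instead of "send the whole repo".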
Optimize Claude Code Reviews with Code-Review-Graph
More Relevant Posts
-
Claude Code's source code has been leaked and it's breaking the internet! It's not just an API wrapper around Claude but a tool with a multi-level architecture, setting a very high bar for shipping AI coding tools.

So how did this happen? Source maps. Source maps exist for debugging: the code you ship is usually minified and bundled, which makes it hard to read, and source maps connect that bundled code back to the original source. The npm package accidentally included the source maps, which effectively shipped the entire source code in human-readable form.

How to prevent your apps from making this mistake?
- Audit your npm package contents before every release using "npm pack --dry-run"
- Never include source maps in production packages
- Don't overlook .gitignore before pushing changes to production

Malicious actors and developers alike can now study Claude Code's data flow directly instead of brute-forcing it through prompt injection. Developers can better understand "Claude Code's four-stage context management pipeline and craft payloads designed to survive compaction" (source).

Anthropic quickly took the source code down, but some were lucky enough to see it before it was ;)

Source: https://lnkd.in/g_UYpWfG
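A rough sketch of how that first audit step could be automated in a release script. It only assumes that `npm pack --dry-run` prints the files that would ship; the exact output format varies by npm version, so this simply flags any listed path ending in ".map".

```python
# Illustrative sketch: fail a release if source maps would be published.
import subprocess

def find_source_maps() -> list[str]:
    result = subprocess.run(
        ["npm", "pack", "--dry-run"],
        capture_output=True, text=True, check=True,
    )
    # npm typically prints the tarball contents as "npm notice ..." lines on stderr.
    lines = (result.stdout + result.stderr).splitlines()
    return [line for line in lines if line.strip().endswith(".map")]

if __name__ == "__main__":
    maps = find_source_maps()
    if maps:
        print("Source maps would be published:", *maps, sep="\n  ")
        raise SystemExit(1)
    print("No source maps in the package contents.")
```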
-
I asked Claude to review 3 lines of code. It read 2,900 files.

Turns out Claude Code re-reads your entire codebase on every single task. Like that one colleague who skims the whole email thread just to reply "noted."

I found a fix: code-review-graph https://lnkd.in/gWZPtbPm

What it does, in plain English: it parses your codebase once using Tree-sitter, builds a structural knowledge graph in a local SQLite database, and from that point on, every git commit triggers a diff. Only changed files get re-parsed. A 2,900-file project re-indexes in under 2 seconds. Instead of Claude reading everything like it's cramming for an exam, it queries the graph and reads only what actually matters.

The numbers are hard to ignore:
1. 6.8x fewer tokens on code reviews
2. Up to 49x fewer tokens on daily coding tasks

Setup takes three commands:
pip install code-review-graph
code-review-graph install
code-review-graph build

After that, Claude uses the graph automatically. You don't change how you work at all.

I've been burning tokens like someone who just discovered the AWS free tier. This fixes that. If you're using Claude Code regularly and haven't tried this, it's worth 5 minutes of your time.

https://lnkd.in/gWZPtbPm 🥰🥰

#ClaudeCode #AI #DeveloperTools #SoftwareEngineering #Productivity #LLM #AITools #CodingLife
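To make the "every commit triggers a diff" part concrete, here is a minimal Python sketch of diff-driven re-indexing. It assumes only standard `git diff --name-only`; the helper names are hypothetical and this is not the tool's actual code.

```python
# Illustrative sketch: re-parse only the files git says changed.
import subprocess

def changed_files(since: str = "HEAD~1") -> list[str]:
    out = subprocess.run(
        ["git", "diff", "--name-only", since, "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [p for p in out.splitlines() if p.endswith((".py", ".ts", ".go"))]

def incremental_update(parse_and_store) -> None:
    # `parse_and_store` stands in for the Tree-sitter parse + SQLite upsert step.
    for path in changed_files():
        parse_and_store(path)

incremental_update(lambda p: print("re-indexing", p))
```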
-
If long text is hard for you, I can generate a PDF with no fluff, just bullet points and everything you love :)

I noticed that AI agents, during refactoring, tend to lose closing braces or drop functionality that wasn't on the surface and was hard to read (of course this only happens in nightmares; in the real world everyone writes perfectly documented code, right?).

So I went to Père Fouras (aka Opus) and asked: "If there were a language that was comfortable and clear for you, and that smaller models could also understand and use to write lightweight APIs without piles of boilerplate and abstractions, what would you build?"

The answer sounded like it genuinely knows what it wants. It outlined a few key points:
- Errors should be data, not "undefined is not a function (anonymous) at line 1847 of bundle.min.js".
- Every function must start with an intent: what it does (and you can't skip it). Sure, you could write anything there, but that's the human factor. An agent won't write "John Doe" in an intent block.
- Effects without the fine print at the bottom of the contract: a function declares "needs db" if it touches the database. If it generates random values, "needs rng". No surprises when you're reading a 5-line function and don't suspect it's quietly hitting three microservices and writing to a database.
- Pipelines instead of spaghetti. Instead of "JSON.stringify(users.filter(u => u.active).sort((a,b) => b.created - a.created).slice(0, 10))", just: users | filter where .active | sort by .created_at descending | take first 10. Reads like a sentence, even if you're seeing this language for the first time.

I wasn't using all my API limits anyway, and someone took the power cord from my PlayStation, so I put on my product manager costume that pretends to be doing something useful and asked it to start building this language.

It named the language PACT, as in a pact between agents. Fun fact: in Cyrillic script, PACT reads like RUST. Probably not a coincidence, given it chose to write the compiler in Rust.

What exists today:
- an interpreter with a deep type checker (generics, effects, struct fields)
- an HTTP server with SSE streaming and JWT auth
- SQLite that auto-creates tables from struct fields
- an LSP server: real errors and autocomplete right in VS Code
- an MCP server with 5 tools: an AI agent connects and checks its own code, reads docs, runs tests
- a Docker image, a one-curl install, a single ~5MB binary, zero runtime dependencies

Why even Llama could write a backend in this: a model with bash access gets the loop "write → pact check (which validates not just types, but intents, effects, and contracts) → see where the error is → fix → pact test → it works". And for models that support MCP it's even simpler: the agent connects directly and gets 5 tools with zero configuration.

This is an experiment, not production. But a working experiment with a full toolchain.

🔗 https://lnkd.in/dsTX8knC
-
🚀 Built & Open-Sourced: A Production-Ready LLM Workflow Engine in Python Over the past few months, I kept running into the same problem: ➡️ LLM apps don’t fail at prompts. They fail at orchestration. What starts simple quickly becomes: – multi-step pipelines – retry logic everywhere – branching flows – zero visibility into execution So I built something focused on that layer. protokol-core 👉 https://lnkd.in/dfA5wZWq 💡 What this actually solves: • Orchestrating multi-step LLM workflows cleanly • Managing retries, failures, and branching logic • Keeping execution fully explicit (no hidden state) • Designing systems that are easy to debug and extend ⚙️ What makes it different: • Minimal, dependency-light core • Composable workflow primitives • Deterministic execution model • First-class support for nested flows / subflows • Clear separation of logic vs execution • Built for real-world systems, not toy demos 🧠 Key idea: LLMs are not the hard part anymore. Workflow + control flow + reliability is. And most frameworks abstract that away… until you actually need control. 🎯 The gap: Most people can build: “prompt → prototype” ✅ Very few can reliably build: “multi-step system → production” That’s where things break. This project is focused entirely on that transition. If you’re working on: • AI platforms • LLM infra • Developer tools • Production AI systems You’ll probably relate to this. Try it. Break it. Extend it. Curious to hear how others are solving this. ⭐ https://lnkd.in/dfA5wZWq #Python #AIEngineering #SystemDesign #LLM #BackendEngineering #BuildInPublic
-
🌿 There's a file missing from your codebase. And your AI coding assistant is suffering for it.

Every developer using Claude Code, Cursor, or GitHub Copilot has hit the same wall. You open a project. The AI starts making suggestions. They're generic. It doesn't know your naming conventions. It doesn't know the /legacy folder is frozen. It doesn't know you use Vitest, not Jest. So you explain it. Again. Every session. Every new tool. Every new team member.

There's a fix. It's called AGENTS.md. It's a plain Markdown file that lives in your repo and tells AI coding agents exactly how to behave in your codebase. Think of it as a README — but written for AI, not humans.

Here's what goes in it:

## Architecture
- Business logic lives in /src/services — never in controllers
- Use the Repository pattern for all database access

## Code Style
- TypeScript strict mode — no `any` types
- Functional components only — no class components

## Do Not
- Modify files in /legacy — frozen pending migration
- Write raw SQL — use the ORM query builder

That's it. Plain Markdown. No schema. No tooling. No installation.

Why it matters:
→ 60,000+ open-source projects have already adopted it
→ Works with Claude Code, GitHub Copilot, Cursor, Devin — all read it automatically
→ Governed by the Linux Foundation — this isn't going away
→ Supports monorepos with hierarchical file resolution
→ Zero barrier to entry — if you can write a README, you can write an AGENTS.md

The core insight is elegant: your README is written for humans; your AGENTS.md is written for AI. These are completely different audiences with completely different needs.

Getting started takes 5 minutes:
1. Create AGENTS.md in your project root
2. Write your architecture rules, code style, and what NOT to do
3. Commit it — your AI tools pick it up automatically
4. Iterate when you notice your AI making the same mistake twice

The best AGENTS.md is the one that exists. Start simple. Add more as you discover what your AI agent needs to know.

We wrote a full breakdown covering architecture, monorepo support, Linux Foundation governance, starter templates for Python/Django, Node.js/React, and Rust — plus what's still missing from the ecosystem.

👉 https://lnkd.in/gEs6hjQF
🔗 consulting.anablock.com

#AIAgents #DeveloperTools #ClaudeCode #GitHubCopilot #Cursor #OpenSource #SoftwareEngineering #AIAssistants #DevTools #LinuxFoundation #AGENTS #CodingStandards
-
🚨 DRAMA ALERT 🚨 Anthropic just accidentally open-sourced Claude Code. Twice.

Yes, you read that right. The company that called their CLI "secret sauce" and sent hundreds of DMCA takedowns to protect it... published source maps in their npm package. Again. Here's what happened and why it matters for every developer using AI coding tools.

𝗪𝗵𝗮𝘁 𝗮𝗿𝗲 𝘀𝗼𝘂𝗿𝗰𝗲 𝗺𝗮𝗽𝘀?
When you ship JavaScript, you typically minify and obfuscate it. Source maps let you reverse that process for debugging. They contain the FULL original source code. Anthropic included these in their npm release. Oops.

𝗧𝗵𝗲 𝗶𝗿𝗼𝗻𝘆 𝗶𝘀 𝗱𝗲𝗹𝗶𝗰𝗶𝗼𝘂𝘀
→ Claude Code ranks 39th on Terminal Bench. Dead last among harnesses using Opus. (I love it and it's still my favourite)
→ The "secret sauce" actually references Open Code's source to match their scrolling behaviour
→ Competitors copying Anthropic? Nope. Anthropic copying open-source projects.

𝗛𝗶𝗱𝗱𝗲𝗻 𝗳𝗲𝗮𝘁𝘂𝗿𝗲𝘀 𝗱𝗶𝘀𝗰𝗼𝘃𝗲𝗿𝗲𝗱
• Dream Mode: background agents that review past sessions and consolidate memories while you sleep
• Coordinator Mode: spawn multiple parallel workers with a shared prompt cache
• Kairos: an always-on proactive Claude that monitors your work and can push PRs automatically
• Buddy: a Tamagotchi companion that hatches in your terminal (was scheduled for April 1-7)
• Undercover Mode: a flag for Anthropic employees to hide that they're using Claude Code in public repos

That last one is... interesting. Why hide it?

𝗪𝗵𝗮𝘁 𝘀𝗵𝗼𝘂𝗹𝗱 𝗔𝗻𝘁𝗵𝗿𝗼𝗽𝗶𝗰 𝗱𝗼?
The cat's out of the bag. DMCAs won't put it back.
1. Announce an actual roadmap to open-source it
2. Let engineers talk about these features publicly
3. Stop the legal theatre
4. Be humans, not lawyers

The company that markets itself as the "human" AI lab needs to act like it.

Repo fork (while it lasts): https://lnkd.in/gYX8hx9v

What's your take? Should all AI coding tools be open source?

#ClaudeCode #AIEngineering #OpenSource #SoftwareEngineering #DevTools
-
AI coding agents burn most of their context window just navigating your codebase. I built a tool that fixes this.

Every time an agent needs to understand a function, it takes 5-6 tool calls of grep and read loops. It has no dependency awareness, no memory of project structure, and it rediscovers your architecture from scratch every session.

I built codesight to solve this. It's a Go CLI that uses tree-sitter to parse your code (Go, TypeScript, Python, C#, Rust, Java, JavaScript) and generates a .codesight/ folder of structured Markdown files that serve as a knowledge layer between your code and AI agents.

WHAT IT GENERATES
Each package gets its own MD file with extracted API surfaces (full function signatures with file:line references), type definitions with fields and methods, a bidirectional dependency graph (imports + imported-by), and linked test files. On top of that, it generates PRD-style feature specs with requirement checklists derived from actual code, and a symbol-level changelog that tracks what changed between syncs.

BENCHMARKS (1,943-file .NET monorepo)
- "How does Login work?" went from 41K chars across 6 calls to 3K chars in 2 calls (92% reduction)
- "All auth endpoints?" went from 84K+ chars and 10+ calls to 3K chars in 1 call
- Reverse dependency queries ("what calls this?") are instant. With grep they're effectively impossible.
- Search latency: 0.37s vs 11.4s

HOW SYNCING WORKS
codesight hashes file contents and only regenerates MDs for packages with actual changes. Each MD has two zones: tree-sitter owns the top half (API surface, types, deps) and regenerates it on sync. The bottom half (architecture notes, usage examples, gotchas) is preserved across syncs, so nothing written by an agent or human gets overwritten.

CLAUDE CODE INTEGRATION
"codesight init" wires up SessionStart and PostToolUse hooks in .claude/settings.json. The agent gets project status on every session start and the vault auto-syncs after every git commit. It also generates a skill file so the agent knows how to use the search, task, and status commands out of the box.

The core idea: agents don't need to read raw source files to reason about your system. They need package-level abstractions with enough detail to trace dependencies and understand boundaries, without drowning in implementation. That's the level tree-sitter lets you extract reliably.

Open source. https://lnkd.in/dSG9FU6V

#OpenSource #DeveloperTools #AI #ClaudeCode
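A rough Python sketch of the content-hash syncing idea described above. codesight itself is written in Go, and the paths and helper names here are hypothetical; the point is just that only packages whose file hashes changed get regenerated.

```python
# Illustrative sketch of hash-based sync -- not codesight's actual implementation.
import hashlib
import json
import pathlib

STATE = pathlib.Path(".codesight/hashes.json")  # hypothetical state file

def package_hash(pkg_dir: pathlib.Path) -> str:
    """Hash the contents of every source file in a package, in a stable order."""
    h = hashlib.sha256()
    for f in sorted(pkg_dir.rglob("*.py")):
        h.update(f.read_bytes())
    return h.hexdigest()

def packages_to_regenerate(packages: list[pathlib.Path]) -> list[pathlib.Path]:
    old = json.loads(STATE.read_text()) if STATE.exists() else {}
    new = {str(p): package_hash(p) for p in packages}
    stale = [p for p in packages if old.get(str(p)) != new[str(p)]]
    STATE.parent.mkdir(exist_ok=True)
    STATE.write_text(json.dumps(new))
    return stale  # only these packages get their MD files rebuilt
```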
-
From intent to deep code: using GitHub spec‑kit to live the doctrine.

We don't just talk about the four pillars; we eat our own dog food. We built a zero‑dependency Haversine CLI calculator to demonstrate Deep Code (math from primitives, educational errors). But the real story is how we built it: using spec‑kit (https://lnkd.in/eR8A9sni).

Here's what spec‑kit offers:

🧠 Intent Code first
Writing spec.md, plan.md, and tasks.md before a single math.sin(). That's not paperwork; it's a machine‑readable contract between product, engineering, and AI agents.

🧱 Foundational Code ready
spec‑kit auto‑generated pyproject.toml, agent instructions, and even a constitution.md. The substrate outlives the app.

⚙️ Deep Code made visible
With a clear task list (37 tasks, 15 parallelisable), we focused entirely on implementing the haversine formula with comments, tests, and zero hidden magic.

🕳️ Void Coding respected
spec‑kit never forced us to over‑specify. We left gaps (altitude? batch mode? i18n?) as deliberate voids; invitations for future exploration.

The result? A production‑ready CLI tool that teaches spherical geometry, runs in seconds, and has essential test coverage. All while following a doctrine.

👉 If you care about stable, observable, aligned systems, try spec‑kit. It turns "intent" into executable tasks, not wishful thinking.

🔗 Germaneering Blog: https://lnkd.in/eMT7Trna
🔗 Repository: https://lnkd.in/eP3H_qCn

What's your experience with spec‑driven development? Have you used spec‑kit or similar tools? Let's discuss in the comments.
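For reference, here is the standard haversine great-circle formula written from math primitives, the kind of "Deep Code" the CLI demonstrates. This is a generic sketch, not the project's actual source.

```python
# Generic haversine distance sketch (not the repo's code): math primitives only.
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two (lat, lon) points in kilometres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

print(round(haversine_km(48.8566, 2.3522, 51.5074, -0.1278), 1))  # Paris -> London, ~344 km
```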
-
Godspeed — open-source Claude Code plugin: S0-S5 tier routing + multi-agent orchestration (69% exact classifier, parallel Sonnet workers, one-command install, 17 skills, MIT)

Shipped a Claude Code plugin I've been building called **godspeed**. Open-source routing classifier and multi-agent orchestrator that wires into Claude Code's hook API without forking anything.

**The problem:** running complex workflows on Opus is a cost trap. A lookup doesn't need Opus. A rename doesn't need Opus. But most people pay Opus for everything because the routing overhead of deciding otherwise is itself friction. I measured my own usage and ~30-50% of my Opus spend on complex workflows was routing waste.

**What godspeed does:**
1. Classifier scores every prompt S0-S5 in ~5ms (keyword + regex, zero API calls). **69.0% exact on a 200-prompt held-out eval. +31.5pp over the best naive baseline.**
2. S0-S2 → Haiku or Sonnet. S3+ dispatches **Zeus** — an orchestrator that decomposes the task into parallel Sonnet workers following Anthropic's MARS orchestrator-worker pattern.
3. Every synthesis is critic-gated by **Oracle** against a 10-point rubric. Only PASSing answers get written to memory.
4. Memory is a 3-tier vector-embedded store (**Mnemos**): hot context / warm SQLite / cold archive, with progressive disclosure on reads.
5. Hook latency: ~90ms warm / ~160ms cold. Node.js fastpath on the hot path, Python for everything else.

**Stack:** stdlib Python (no deps in core) · sqlite3 · one Node.js hook · Anthropic API via Claude Code's Agent tool.

**What ships in v2.2.0:**
- 17 skills (namespaced `godspeed:<name>` in Claude Code)
- 3 slash commands (`/brain-score`, `/godspeed-info`, `/godspeed-settings`)
- 7 lifecycle hooks (UserPromptSubmit, PostToolUse, PreCompact, SubagentStop, SessionEnd)
- Cross-OS (Linux, macOS, Windows) — Python detection fallback, LF line endings enforced
- MIT license, fork and extend

**Install (one command inside any Claude Code session):**
/plugin marketplace add itsribbZ/godspeed
/plugin install godspeed@itsribbZ-godspeed

Alt-path for a `~/.claude/skills/` install is `bash install.sh` after cloning.

**Reproduce the accuracy claim:** the repo ships `toke/automations/brain/eval/brain_vs_baselines.py` with `golden_set.json`, so the 69% is verifiable end-to-end on your own machine.

**Repo:** https://lnkd.in/ghBzn6r4

Happy to answer anything about the classifier design, the Zeus → MUSES dispatch pattern, or the install flow. Built for Anthropic's *Built with Opus 4.7* hackathon (Apr 21).
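For readers unfamiliar with the approach, here is a hypothetical sketch of what keyword-plus-regex tier scoring can look like. The patterns, tiers, and function names below are invented for illustration and are not godspeed's actual classifier.

```python
# Hypothetical sketch of lexical tier routing -- not godspeed's classifier.
# Cheap regex signals decide the tier locally, with no API call on the hot path.
import re

TIER_PATTERNS = [
    (5, re.compile(r"\b(architect|migrate|design a system)\b", re.I)),
    (3, re.compile(r"\b(debug|implement|write tests|fix the bug)\b", re.I)),
    (1, re.compile(r"\b(rename|what is|where is|explain)\b", re.I)),
]

def classify(prompt: str) -> int:
    """Return the highest tier whose pattern matches; default to S0."""
    return max((tier for tier, pat in TIER_PATTERNS if pat.search(prompt)), default=0)

print(classify("rename this variable"))         # 1 -> route to a small model
print(classify("design a system for billing"))  # 5 -> dispatch the orchestrator
```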