If you use AI coding assistants like GitHub Copilot, Cursor, or Claude Code, you’ve likely hit the "Context Wall." The AI tries to help, but it often lacks a deep understanding of how a change in one file ripples through the rest of your system. It either reads too much (wasting tokens and money) or reads too little (missing critical dependencies).

This week for Finding AI Useful, I’ve been looking at code-review-graph, a tool that changes the way LLMs "see" your code.

The problem: standard AI tools use basic search to find relevant snippets. But software isn't just text; it’s a web of connections. If you change a data schema in your backend, the AI needs to know exactly which frontend components and API routes are impacted.

The solution: code-review-graph builds a local knowledge graph using Tree-sitter. It maps out functions, classes, and calls to create a "structural map" of your codebase.

Why this is a game-changer for your workflow:

🔹 Precise context: it identifies the "blast radius" of any change. The AI only reads the files that are actually affected, leading to an 8x+ reduction in token usage.
🔹 Local & private: everything runs on your machine via SQLite. No code ever leaves your environment to build the index.
🔹 Monorepo ready: it’s built to handle thousands of files, filtering out the noise and focusing only on the logic that matters.
🔹 MCP integration: it uses the Model Context Protocol, meaning it can plug into various AI editors to provide "graph-aware" suggestions.

Check it out here: 👉 https://github.com/tirth8205/code-review-graph

#FindingAIUseful #SoftwareDevelopment #GitHubCopilot #AI #Productivity #Coding #OpenSource
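The "blast radius" idea is easy to picture as a walk over reversed dependency edges. Here is a minimal sketch using an invented dependency graph and invented file names; it is not code-review-graph's actual implementation, just the general technique:

```python
from collections import deque

# Toy graph: edge A -> B means "A depends on B".
# Reversing it lets us ask: "who is affected if B changes?"
deps = {
    "checkout_page.tsx": ["api/orders.ts"],
    "admin_dashboard.tsx": ["api/orders.ts"],
    "api/orders.ts": ["schema/order.py"],
    "test_orders.py": ["schema/order.py"],
}

def blast_radius(changed: str) -> set[str]:
    """Collect every file that transitively depends on `changed`."""
    reverse: dict[str, list[str]] = {}
    for src, targets in deps.items():
        for t in targets:
            reverse.setdefault(t, []).append(src)
    affected: set[str] = set()
    queue = deque([changed])
    while queue:
        node = queue.popleft()
        for dependent in reverse.get(node, []):
            if dependent not in affected:
                affected.add(dependent)
                queue.append(dependent)
    return affected

# Changing the schema pulls in the API route, both UI pages, and the test.
print(sorted(blast_radius("schema/order.py")))
```

An AI assistant then reads only this affected set instead of the whole repository, which is where the token savings come from.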
Code-Review-Graph Improves AI Coding Assistants
More Relevant Posts
Uber recently burned through their entire annual AI budget on AI productivity tools. That news stuck with me. Because if a company with Uber's engineering muscle can blow past a year's worth of AI spend that fast, what does that mean for everyone else quietly bleeding tokens on every single AI coding session?

Around the same time, I came across a reel explaining compression algorithms, the kind of thing most developers know exists but never think to apply here. And the idea clicked.

Every time Claude Code, Cursor, or Cline reads your project for context, it reads everything: comments that explain the obvious, boilerplate, the same five imports repeated across ten files. You are paying tokens for noise that the model doesn't actually need to understand your code.

So I built TokenZip as an initial attempt. It compresses your code before it reaches the model, replacing repeated patterns with short references and including a lookup table (codebook) at the top so the AI can still read it perfectly. Your logic, variable names, and structure stay untouched. The noise disappears.

Real results on actual projects:
- Spring Boot microservice: 24–33% token reduction
- Python CLI tool: 26% token reduction
- One concrete test: 2,208 tokens down to 1,263

Savings compound with project size: the more files, the more repeated patterns, the more you save.

It works as a CLI, as an MCP server that plugs natively into Claude Code, Cursor, and Cline, and as a Python API. It supports 15+ languages.

The Uber story is an extreme case. But the underlying problem, that AI tooling costs scale brutally with context size, is something every dev team is going to feel sooner or later. This is my attempt at a practical fix.

https://lnkd.in/ga2UYbhB

If you're working in this space or have thoughts on the approach, I'd genuinely love to hear them. Anthropic OpenAI Google Meta

#AIEngineering #DeveloperTools #OpenSource #LLM #ClaudeCode #BuildInPublic
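The codebook idea is simple enough to sketch. This toy version is my own illustration, not TokenZip's actual algorithm: repeated lines collapse into short references, with a lookup table prepended so a reader (human or model) can expand them losslessly:

```python
from collections import Counter

def compress(source: str, min_count: int = 2) -> tuple[str, dict]:
    """Replace lines repeated >= min_count times with short references.

    Returns the compressed text (codebook header + body) and the codebook.
    """
    lines = source.splitlines()
    counts = Counter(lines)
    codebook: dict[str, str] = {}
    for line, n in counts.items():
        # Only worth a reference if the line repeats and is non-trivial.
        if n >= min_count and len(line) > 8:
            codebook[line] = f"@{len(codebook)}"
    body = [codebook.get(line, line) for line in lines]
    header = [f"{ref}={line}" for line, ref in codebook.items()]
    return "\n".join(header + body), codebook

source = "import os\nimport sys\nimport os\nprint('hi')\nimport os"
compressed, book = compress(source)
# "import os" appears three times, so it collapses to the reference "@0"
```

A real compressor would work on sub-line patterns and whole repeated blocks, but the cost model is the same: every reference is paid for once in the header instead of on every occurrence.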
Everyone expected one AI coding tool to win. That’s not what’s happening.

In the first week of April, Cursor shipped version 3.0 with a dedicated Agents Window for running multiple agents at once. OpenAI published a Codex plugin that runs inside Anthropic’s Claude Code. Developers started running all three together, and it actually works. Not as competitors. As layers.

If you’ve worked in production engineering, you’ve seen this pattern before. Nobody runs a single observability tool. You use Prometheus to collect metrics, Grafana to visualize them, and PagerDuty to wake you up at 3 AM when something breaks. Each tool does one thing well. The value comes from how they compose.

AI coding tools are splitting the same way:

- Cursor sits at the IDE layer. It’s where you orchestrate: open files, switch contexts, manage multiple agents working in parallel.
- Claude Code sits at the terminal layer. It reads entire codebases, runs tests, commits changes, manages pull requests. The Pragmatic Engineer’s February survey of 906 engineers found it had the highest "most loved" rating at 46%. SemiAnalysis estimates it now produces around 4% of all public GitHub commits.
- OpenAI Codex sits at the autonomous execution layer: 3 million weekly active users now, up from 2 million a month ago.

Each one is best at a different thing. Together they cover the full loop: plan → write → review → ship.

The interesting part isn’t which tool is "winning." It’s that the developers who learn to compose all three are pulling far ahead of the ones still picking a favorite. Same as it ever was in software: the advantage isn’t the tool. It’s knowing how to wire tools together.

#AICoding #ClaudeCode #Cursor #DeveloperTools #SoftwareEngineering
Most AI coding tools today, whether it’s GitHub Copilot or Cursor, still rely on re-reading chunks of your code and sending them to an LLM every single time. That approach starts breaking down as the codebase grows.

I have been building something different: a system where your codebase becomes active memory. Even in its current experimental stage, the difference is already visible:

→ ~58–63% hit rate without any LLM calls
→ ~73% context coverage, meaning it retrieves not just one file but the surrounding system

Compare that to typical retrieval approaches (including what most tools rely on), which often hover much lower on both precision and coverage.

What this means in practice:
⚡ More relevant context surfaced instantly
🧠 Better understanding of how parts of the system connect
🎯 Less noise, more actionable code
💸 Zero token cost for retrieval

Instead of "search some files → hope the model figures it out," this becomes "jump directly to the right part of the system → with its context already attached."

Still improving ranking quality, but the core is working: high-quality context retrieval without LLM dependency. It feels like a shift from AI that scans code to systems that actually know where things are.

#AI #ArtificialIntelligence #MachineLearning #GenAI #DeveloperTools #SoftwareEngineering #Coding #AIForDevelopers #CodeAI #DevTools #StartupBuildInPublic #BuildInPublic #TechStartup #Innovation #DeepTech #AIStartup #ZeroLLM #NoLLM #TokenEfficiency #AICostOptimization #ScalableAI #AIInfra #AIArchitecture #CodeSearch #CodeUnderstanding #AIForCode #Copilot #CursorAI #CodeAssist #GraphAI #KnowledgeGraph #ActiveMemory #ContextEngineering #AIReasoning #RetrievalSystems #FutureOfAI #NextGenAI #AIRevolution
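As a rough mental model, "jump directly to the right part of the system, with context attached" can be as simple as a symbol index plus one hop of graph neighbors, with no LLM call on the hot path. This is my own sketch with invented names, not the author's system:

```python
# Toy index: symbol -> defining file, and file -> related files
# (its imports and callers), both built ahead of time.
symbol_index = {
    "create_order": "orders/service.py",
    "OrderSchema": "orders/models.py",
}
neighbors = {
    "orders/service.py": ["orders/models.py", "api/routes.py"],
    "orders/models.py": ["orders/service.py"],
}

def lookup(symbol: str) -> list[str]:
    """Resolve a symbol to its file plus surrounding context, zero tokens."""
    home = symbol_index.get(symbol)
    if home is None:
        return []  # cache miss: fall back to a broader (paid) search
    return [home] + neighbors.get(home, [])

print(lookup("create_order"))
```

The "hit rate" the post quotes would then be the fraction of queries answered by `lookup` alone, and "context coverage" the fraction of truly relevant files that the one-hop neighborhood happens to include.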
Your AI coding assistant re-reads your entire codebase on every single task. Nobody talks about how wasteful that actually is.

code-review-graph fixes this. It builds a structural map of your code with Tree-sitter, tracks changes incrementally, and feeds your AI only the files that actually matter, via MCP.

I ran it on a 27,700-file Next.js monorepo. It trimmed review context down to ~15 files. That's 49x fewer tokens. On a smaller 500-file project, the initial build took about 10 seconds. Every update after that? Under 2 seconds.

The way it works is genuinely clever. It maps every function, class, and import into a graph. When something changes, it traces the "blast radius": all the callers, dependents, and tests touched by that change. Your AI reads that targeted set instead of scanning everything blindly.

Setup is literally 3 commands:

pip install code-review-graph
code-review-graph install
code-review-graph build

That's it. It auto-detects Claude Code, Cursor, Codex, Windsurf, Zed, and more, and writes the right MCP config for each one automatically. It supports 23 languages including Jupyter notebooks, and updates on every git commit without you touching anything.

Fair warning: impact accuracy is at 0.54 F1, so it won't flag every dependency edge correctly. But recall is 100%, meaning nothing critical slips through. For most teams the token savings alone make this a no-brainer.

10.4k stars in a short time. The community clearly agrees.

Worth trying this weekend → https://lnkd.in/gmAtGWhu

Drop a comment if you've already tried it. Curious what repos you've tested it on 👇

#AI #SoftwareEngineering #ClaudeCode #DeveloperTools #OpenSource #MachineLearning #Coding #LLM #MLOps #DevTools
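The "under 2 seconds per update" figure makes sense if only the files in the latest diff get re-parsed. A hypothetical sketch of that incremental pattern, with simulated input rather than the tool's real code (which presumably gets changed paths from something like `git diff --name-only`):

```python
def refresh_index(index: dict, changed: list[str], reparse) -> dict:
    """Re-parse only the files touched by the latest commit,
    leaving the rest of the prebuilt index untouched."""
    for path in changed:
        index[path] = reparse(path)  # e.g. one Tree-sitter parse per file
    return index

# Simulated 500-file index; a commit touching one file costs one reparse.
index = {f"src/file_{i}.py": "parsed" for i in range(500)}
reparsed = []
refresh_index(index, ["src/file_7.py"],
              lambda p: reparsed.append(p) or "reparsed")
```

The update cost scales with the size of the commit, not the size of the repository, which is why the 500-file rebuild and the 2-second refresh can coexist.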
The battle for who writes your code is officially on.

The Verge published a sharp piece today: OpenAI, Google, and Anthropic are not competing on chatbots anymore. They are racing to own the software development workflow.

Code was the earliest proven "killer app" for AI. Code is well-documented, easy to test, and there is a mountain of training data. You can run the output and immediately know if it works. What started as autocomplete has turned into tools that can build entire applications from a description. Cursor, GitHub Copilot, Claude Code, Windsurf... the space is suddenly very crowded.

The interesting question is not which tool wins. It is what happens to the software industry when writing code costs close to nothing.

Full piece: https://lnkd.in/dzTEweEA
I got tired of copy-pasting AI prompts across 16 different coding assistants. So I built a tool to sync them all from one single config file.

Here's the problem it solves: if you use Claude Code, you need a `CLAUDE.md`. If you switch to Cursor, you need `.cursorrules`. If you use Copilot, you have to copy it again to `.github/copilot-instructions.md`. In a week, they all drift apart. Your AI tools start giving inconsistent code because you updated one rule file and forgot the others.

The solution: AIRules. You write one `.airules.yml` and run:

`npx @tangvu/airules init`

It automatically detects your tech stack (Next.js, Python, Rust, etc.) and generates the optimized rules files for 16+ AI assistants (Claude Code, Cursor, Copilot, Windsurf, Qwen Code, Gemini...).

Features:
- Smart detection: auto-detects 30+ frameworks and 10 languages to set the right best practices
- Multi-tool sync: generates rules for 16+ tools automatically
- Score card: grades your rules setup (S/A/B/C/D) to show how AI-friendly your repo really is
- Zero config: works out of the box, just run one command

Whenever you want to change a rule (e.g. "Use server components by default"), you update one file and run `airules sync`. That's it. Every AI agent you use is now on the same page. One config to rule them all.

Check it out on npm and GitHub: https://lnkd.in/gdj52eew

How many different AI coding assistants are you currently jumping between?

#OpenSource #AI #DeveloperTools #BuildInPublic #Productivity
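Under the hood, the pattern is "one source of truth, many rendered targets." Here is a sketch of that idea; the rule text and target file paths come from the post above, but the data structures and rendering are my own guesses, not AIRules' actual code:

```python
from pathlib import Path
import tempfile

# Shared rule source (stands in for the real .airules.yml).
rules = ["Use server components by default", "Prefer explicit return types"]

# Per-assistant rule files named in the post.
targets = ["CLAUDE.md", ".cursorrules", ".github/copilot-instructions.md"]

def sync(root: Path) -> None:
    """Render the single rule source into every assistant's file,
    so the copies can never drift apart."""
    body = "\n".join(f"- {rule}" for rule in rules)
    for rel in targets:
        path = root / rel
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(f"# Project rules\n{body}\n")

repo = Path(tempfile.mkdtemp())
sync(repo)
```

A real implementation would format each target differently (markdown for Copilot, plain rules for `.cursorrules`), but the invariant is the same: edit one file, regenerate all of them.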
🚀 A single CLAUDE.md file just hit #1 on GitHub Trending: 44K stars in 7 days.

Most people try to fix AI coding assistants with new tools. But this repo solved it with one markdown file. No plugins. No setup. No dependencies. Just clear rules that stop LLMs from making the mistakes developers hate most. 👇

Karpathy pointed out common AI coding problems:
→ Making wrong assumptions silently
→ Overengineering simple tasks
→ Editing code nobody asked to change
→ Acting without clarifying goals

Someone turned those lessons into CLAUDE.md, a behavior guide for Claude Code. There are four rules inside the file:

1. Think before coding: if requirements are unclear, ask questions. Don't guess and run with one interpretation. Surface tradeoffs before coding.
2. Simplicity first: write the minimum code needed and avoid unnecessary abstractions. If 200 lines can be 50, simplify it.
3. Surgical changes: only modify what the task requires. Don't refactor unrelated code. Don't remove comments you don't understand.
4. Goal-driven execution: turn vague requests into measurable outcomes. Example: "Add validation" = write failing tests, then fix them.

Why it went viral: developers want AI that writes cleaner code, makes smaller PRs, asks better questions, and stops guessing intent. One file. Immediate results. Drop it in your project root and Claude follows it from the first task.

Link to the repo 👉 https://lnkd.in/dDt-G_4e

AI won't replace good engineers. But engineers who know how to guide AI will move faster than everyone else.

Save this for later. Repost ♻️ if you believe prompting is becoming a real engineering skill.

#AI #GitHub #SoftwareEngineering #Developers #Coding #Productivity #Tech
🤖 AI is now writing 51% of all code on GitHub. Let that sink in for a second.

According to the latest Stack Overflow Developer Survey, 84% of developers are either already using AI coding tools or planning to. Tools like GitHub Copilot, Cursor, and Claude Code have gone from "cool experiment" to actual workflow in under 2 years.

And the numbers are wild:
→ The AI coding tools market hit $12.8 billion in 2026 (up from $5.1B in 2024)
→ AI-assisted dev cycles are 25–50% faster
→ 90% of devs regularly use at least one AI tool at work
→ Cursor is reportedly raising $2B at a $50B+ valuation

But here's what nobody talks about: a controlled study found that AI tools made experienced devs 19% SLOWER, while those same devs felt 20% faster. The confidence boost is real. The blind trust? Dangerous.

This isn't about replacing developers. It's about developers who use AI replacing those who don't.

At CDN IGNOU, this is exactly why we focus on hands-on, practical workshops, so you're not just reading about these tools, you're building with them.

💬 Are you using AI coding tools in your workflow? What's your experience been? Drop it in the comments 👇

Follow CDN IGNOU for workshops, events & resources that keep you ahead of the curve. 🚀

#AITools #DeveloperCommunity #CDNIgnou #GitHub #Copilot #MachineLearning #Coding #Workshop #Delhi #TechEducation #DevLife
Cursor makes ambiguity cheap… and bugs even cheaper. ⚠️

The first time you use **Cursor** in a real codebase, it can feel like "AI pair programming." In reality, it's more like giving an eager intern access to a warehouse of unlabeled boxes 📦

If your repo has:
✅ half-migrated frameworks
✅ competing patterns
✅ "legacy" folders that still look alive
✅ TODO graveyards
…Cursor will confidently optimize the wrong reality.

Where Cursor *actually* changes teams isn't typing speed. It's process. You end up winning only when you force clarity:
🔍 tighter scope (what's *actually deployed*)
🧹 deleted dead paths
🧾 ADRs that mark what's authoritative
👤 clear ownership for AI-generated code
🧪 stricter reviews and linting

Otherwise you're not accelerating engineering. You're accelerating ambiguity.

My takeaway: treat AI like a truth auditor, not a code generator. Build a "Reality Index" so Cursor can ground answers in real signals (CI, Argo, Terraform state, prod logs) instead of repo vibes.

If you're building on **Webflow**, shipping content on **Webflow**, or marketing developer tools with **Webflow**, this is the same lesson: speed is great until you scale the wrong source of truth.

Read the news item here: https://lnkd.in/d96mktGn

#Cursor #AI #DevOps #PlatformEngineering #Webflow #SoftwareEngineering #DeveloperTools
Work Smarter, Not Harder: The AI Coding Revolution 🚀

Are you still writing every line of code manually? The shift from manual coding to AI-assisted development isn't just about speed; it's about staying in the flow state. By pairing GitHub Copilot with Cursor, we're moving from being "writers" to "architects."

THE POWER COUPLE 🛠️
🔹 GitHub Copilot: think of it as a super-powered autocomplete. It learns your style and predicts the next 10 lines of code, handling the repetitive boilerplate so you don't have to.
🔹 Cursor: the first editor built around AI. It doesn't just suggest lines; it understands your entire codebase. You can ask "Where is the auth logic?" or "Refactor this module to use Clean Architecture," and it executes in seconds.

THE SHIFT IN ACTION 🔄
📍 The old way (manual): hours spent on boilerplate and repetitive imports, constant context-switching between your IDE and Google, and debugging by trial, error, and dozens of print statements.
📍 The new way (AI-assisted): boilerplate generated instantly via natural-language prompts, questions answered directly inside your editor (no more tab-searching), and AI-powered error fixing that explains why the bug existed and how to prevent it.

THE BOTTOM LINE 💡
AI isn't replacing developers; it's replacing the parts of development that feel like chores. This lets us focus on what really matters: system design, logic, and problem solving.

Are you team Cursor, Copilot, or both? Let's discuss in the comments! 👇

#SoftwareEngineering #CursorAI #GitHubCopilot #CodingTips #AI #Programming #CleanCode #DeveloperExperience