I used to paste my entire codebase into an LLM every time I hit an error. Not because I wanted to, but because I was scared of missing context.

Too little code, and the AI doesn't have enough to help. Too much code, and the AI gets lost in the noise.

So I built Context Excavator. It reads your Python project and pulls out just the skeleton: which files exist, what's inside them, and how they connect. One clean map instead of 400 messy lines.

Three ways to use it:
1. No error? It scans your architecture and flags risky functions.
2. Have an error? Paste it directly, and the agent traces exactly where it's coming from.
3. Don't want to type? It detects your error automatically from a log file.

Now when I get an error, I paste the map, not the code. The LLM knows exactly where to look.

Tech: Python, AST, Groq API (Llama 3.3 70B), Pathlib, Subprocess, Argparse, Markdown

Shipped it as an open-source CLI tool. If you've ever felt like you're fighting your AI assistant instead of working with it, this might help.

GitHub: https://lnkd.in/gGP3qxJs

#Python #LLM #DevTools #OpenSource #BuildInPublic #AI #SoftwareDevelopment
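For anyone curious how the AST-skeleton idea works in general, here is a minimal sketch using Python's standard `ast` module. This is my own simplified illustration of the concept, not Context Excavator's actual code; the function name `skeleton` and the output format are made up for this example.

```python
# Simplified illustration of AST-based skeleton extraction.
# Not Context Excavator's actual implementation.
import ast
from pathlib import Path


def skeleton(path: str) -> list[str]:
    """Return one-line summaries of a Python file's top-level structure."""
    tree = ast.parse(Path(path).read_text())
    lines = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # Capture just the signature, not the body.
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"def {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            # List method names so the LLM sees the class's shape.
            methods = [n.name for n in node.body
                       if isinstance(n, ast.FunctionDef)]
            lines.append(f"class {node.name}: methods={methods}")
        elif isinstance(node, (ast.Import, ast.ImportFrom)):
            # Imports show how files connect to each other.
            lines.append(ast.unparse(node))
    return lines
```

Run over every `.py` file in a project, this kind of pass yields a compact map of signatures and imports that fits easily into one prompt.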
This hits the exact pain point I've had with LLMs. An AST-driven skeleton map strikes the right balance of context on the first call. Excited to see it open-sourced.
Great Project ✨✨
Extremely useful project. Great work Yamini!
This is a really cool project. Just yesterday I asked an LLM to fix an error from my logs, and it had to search through everything item by item. If this were a Chrome plugin, it could be great.