🤖 Built an AI-Powered Code Reviewer CLI using Node.js! Point it at any project folder and it automatically:
✅ Scans all your source files
✅ Actually runs your code to catch runtime errors
✅ Reads your git history for context
✅ Streams a full AI code review live in your terminal
The interesting engineering part? It uses all four child_process methods in Node.js, each for a specific reason:
→ fork — offloads heavy file scanning to a separate Node.js worker
→ execFile — runs your code directly using the node binary (no shell)
→ exec — runs git commands to pull repo history
→ spawn — streams the Gemini AI response live to the terminal
No heavy frameworks. Just raw Node.js + the Google Gemini API.
🔗 GitHub: https://lnkd.in/gUynjggP
#nodejs #javascript #ai #buildinpublic #opensource #gemini
Ever had to work with an API that has zero documentation? No docs. No source code. Just a black box that takes inputs and spits out outputs.
I built ProtocolSense for that exact situation. Paste a few input/output examples and it tells you the hidden rules and logic running inside, with confidence scores and evidence, in seconds. Once you have the rules, export them directly to TypeScript, Python, Zod schemas, or OpenAPI specs.
Try it at → protocolsense.com
----------
The backstory: it started as a Gemini API Developer Competition project. Two weeks, built entirely inside Google AI Studio. Submitted and shipped. But I kept thinking about it. So I pulled it out of AI Studio and spent three days rebuilding it properly using:
→ Claude Code — handled auth, edge functions, refactoring
→ Groq — replaced Gemini for inference; the speed difference is night and day
→ Supabase — auth, database, edge functions
Total time from idea to real product: under four days of actual work.
If you've ever dealt with a legacy system or undocumented API, I'd love to hear how you handled it, and whether something like this would have helped.
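The core idea, scoring candidate rules against observed input/output pairs, can be illustrated without any LLM. This is a toy sketch of the concept only, not ProtocolSense's actual model-driven inference:

```javascript
// Toy rule inference: each candidate transformation is scored by the
// fraction of observed input/output examples it reproduces exactly.
const examples = [
  { input: "abc", output: "ABC" },
  { input: "Hello", output: "HELLO" },
];

const candidates = {
  uppercase: (s) => s.toUpperCase(),
  reverse: (s) => [...s].reverse().join(""),
};

// Confidence score per candidate rule: 1.0 means it explains every example
const scores = Object.fromEntries(
  Object.entries(candidates).map(([name, fn]) => [
    name,
    examples.filter((e) => fn(e.input) === e.output).length / examples.length,
  ])
);
// scores → { uppercase: 1, reverse: 0 }
```

A real system searches a far richer rule space and reports evidence per rule, but the "hypothesize, then score against examples" loop is the same shape.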
Introducing my new project, "Diff Extractor": an AI-driven assistant that automates analyzing code changes and generating a professional, conventional summary and commit message.
🤔 What problem am I trying to solve? When you make changes to your codebase, you need a clear summary of those changes. With Diff Extractor, you can see exactly what you changed and, if you want, commit it right away.
The Architecture 🏗️
1️⃣ A lightweight terminal tool that interacts with the local Git environment. It extracts staged changes using "child_process" and securely communicates with the backend, all within the Node.js ecosystem.
2️⃣ A high-performance API that serves as the service layer. It handles complex data validation with Pydantic and manages the integration with Google Gemini 1.5 Flash. I implemented a specialized prompt-engineering strategy that forces the LLM to return structured JSON, ensuring the backend can parse and deliver consistent commit messages and logic summaries, all within the Python ecosystem.
This project is under development, but you can try it yourself by forking it into your GitHub account. You can find the full guide in the GitHub repos.
🟡 Client - https://lnkd.in/gYPqH7-Z
🟡 Backend - https://lnkd.in/g2fdSQ3C
🟡 API - https://lnkd.in/gbp8f93h
😄 Don't forget to add the API key in the .env file, and don't push the .env file to GitHub.
#SoftwareEngineering #Python #FastAPI #NodeJS #GenerativeAI #GeminiAI #Git #FullStackDev #CleanCode
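The "force structured JSON" contract matters because an LLM can always drift into prose. A sketch of the validation side of that contract; the field names (`commitMessage`, `summary`) are illustrative guesses, not the project's actual schema:

```javascript
// Validate that the LLM actually returned the structured JSON we demanded;
// anything malformed is rejected rather than passed on to `git commit`.
function parseCommitResponse(raw) {
  const data = JSON.parse(raw); // throws if the model drifted into prose
  if (typeof data.commitMessage !== "string" || !Array.isArray(data.summary)) {
    throw new Error("LLM response missing required fields");
  }
  return data;
}

const parsed = parseCommitResponse(
  '{"commitMessage":"feat: add login route","summary":["Added POST /login handler"]}'
);
// parsed.commitMessage → "feat: add login route"
```

Failing fast here is the point: a rejected response can be retried against the model, while a silently mangled commit message cannot.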
Reading a new codebase is one of the most underrated hard problems in engineering. You clone a repo. 40 files. No docs. You're lost for an hour before writing a single line.
I built Codebase Explainer to fix that: a tool that takes any GitHub repo and gives you an AI-generated map of what it actually does.
How it works:
→ Paste a GitHub repo URL
→ The GitHub API fetches the file tree and source
→ The Groq API (Llama) reads the code and generates plain-English explanations per module
→ D3.js renders an interactive graph showing structure and dependencies
The interesting engineering problems:
Context window management — You can't dump an entire repo into an LLM call. I had to design a chunking strategy: summarize files individually, then synthesize at the module level. A two-pass architecture.
GitHub API constraints — Rate limits hit fast on public repos without auth. I built token-based auth handling to stay within limits without breaking the flow.
D3.js with dynamic data — D3 is powerful and painful. Making the graph actually readable (not a hairball) with real repo data required intentional layout decisions, not just the default force simulation.
What this is really about: most AI dev tools wrap GPT in a chatbox. This one produces a visual artifact — something you can navigate, not just read. That distinction shaped every design decision.
What I'd add next: cross-file dependency tracing. Right now it's file-level. Making it symbol-level (function calls, imports) would make it genuinely production-useful.
Tech stack:
Frontend: React + Vite
Backend: Node.js + Express
AI: Groq API (code explanation/summarization)
Visualization: D3.js (dependency/structure graphs)
External API: GitHub API (repo fetching)
Deployment: Render, Vercel
GitHub: https://lnkd.in/gaXiVsK9
Live link: https://lnkd.in/gsuEV73y
#DevTools #React #D3js #AI #GroqAPI #FullStack #MERN #BuildInPublic #OpenSource
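The two-pass chunking strategy described above can be sketched as follows. `summarize` is a stand-in for the Groq/Llama call, reduced here to a truncation so the control flow stays visible:

```javascript
// Stand-in for the LLM call; the real version sends `text` to the model
// and returns a plain-English summary instead of a truncated excerpt.
function summarize(label, text) {
  return `${label}: ${text.slice(0, 40)}`;
}

function explainRepo(files) {
  // Pass 1: each file is summarized on its own, so no single LLM call
  // ever has to fit the whole repo in its context window
  const fileSummaries = files.map((f) => summarize(f.path, f.source));
  // Pass 2: the per-file summaries (much smaller than the source) are
  // synthesized into a module-level overview
  return summarize("module", fileSummaries.join("\n"));
}

const overview = explainRepo([
  { path: "src/api.js", source: "export function fetchRepo() {}" },
  { path: "src/graph.js", source: "export function renderGraph() {}" },
]);
```

The key property is that pass 2 only ever sees summaries, never raw source, so the total context per call stays bounded no matter how large the repo is.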
If you're still reading code the old way, I have news for you.
I've onboarded onto codebases with zero documentation more times than I'd like to admit. I grepped around, read the README (if it existed), and dug through past commits. That approach is outdated.
Here are 4 AI tools that help you understand codebases:
1️⃣ DeepWiki: Replace github.com with deepwiki.com in any repo URL. You get auto-generated wiki docs, architecture diagrams, and an AI chat that actually knows the code. Built by the team behind Devin: deepwiki.com
2️⃣ Code Wiki: Gemini-powered living documentation that regenerates after every commit. Every section hyperlinks to the actual code, and the chat knows your entire repo end-to-end. Built by Google: codewiki.google
3️⃣ GitSummarize: Turn any GitHub repo into a full documentation hub with summaries and high-level overviews. Built because the creators found it difficult to understand massive codebases when contributing to open source. Free with rate limits: gitsummarize.com
4️⃣ Code2Tutorial: Paste a GitHub URL, get a step-by-step tutorial walkthrough. Think of it as an AI-generated guided course for any repo: code2tutorial.com
Alternatively, just ask your favorite coding agent (e.g. Codex or Claude Code); it does a decent job of navigating a codebase.
#frontenddeveloper #reactjsdeveloper #webdeveloper #developer #frontend #reactjs #nextjs #redux #ai
Tired of coding the exact same way every day? Here is how I built my own AI-powered scaffolding tool! 🚀
I don't like restricting myself to a single language when building projects. Since I work with multiple languages, writing boilerplate code (like routers and database designs) from scratch every time I start a new project is a huge hassle. To automate my workflow and make life easier, I built a custom CLI tool powered by my own local LLM.
Here is how it works:
1️⃣ Setting up a local LLM: I download my preferred LLM model locally inside my Ubuntu environment.
2️⃣ Creating custom templates: I pre-build base templates for the router paths and DB structures I frequently use.
3️⃣ Running via the command line: just like running npm install, before I start a new project I simply run my custom commands in the terminal.
4️⃣ The AI magic (why do I need an LLM?): instead of just copying and pasting a static template, the LLM scans the specific requirements and context of my new project, then dynamically modifies the code structure of those base templates to match the project and injects them in.
Not only did this significantly boost my development speed, but since the entire system runs locally, I don't have to worry about data privacy at all.
Here is what it looks like in the terminal (mockup):
# Not the usual way: this is how I run my custom AI tool
$ my-ai-scaffold init new-project
🤖 Local LLM Initialized...
[✓] Scanning project data and requirements...
[✓] Context Detected: Next.js + PostgreSQL + Python Backend
[✓] Connecting to Base Templates...
[✓] LLM modifying schemas and router paths dynamically...
[✓] Injecting generated code into /new-project
🎉 Project scaffolded successfully!
Real engineering isn't just sticking with the standard tools everyone else uses; it's building automations like this that fit your own architecture.
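Step 4️⃣ is the interesting part. Stripped down, the shape of it looks something like this; the template, placeholder, and `adaptTemplate` function are all hypothetical, with the local LLM call reduced to a string substitution:

```javascript
// Base template with a placeholder the LLM is asked to fill in; in the
// real tool the model rewrites whole schemas, not just a token.
const baseRouterTemplate = 'router.get("/__RESOURCE__", listHandler);';

// Stand-in for the local LLM call: given the detected project context,
// rewrite the template to fit it.
function adaptTemplate(template, context) {
  return template.replaceAll("__RESOURCE__", context.resource);
}

const adapted = adaptTemplate(baseRouterTemplate, { resource: "users" });
// adapted → 'router.get("/users", listHandler);'
```

The difference from plain scaffolding is exactly this seam: a static generator can only substitute tokens, while a model can restructure the template around the detected stack.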
Have you built any crazy automations or custom scripts to make your workflow easier? Let me know in the comments below! 👇 #SoftwareEngineering #AI #LLM #Automation #WebDevelopment #CodingLife #TechTrends #SriLanka #DevOps #Innovation
Here's the truth no one tells you 👇
I just completed 30 Days of JavaScript on LeetCode. I didn't magically become a "pro". I struggled. I used GPT. I got stuck... a LOT. But something changed: I stopped seeing problems and started seeing patterns 🧠
Here's what I learned:
Most problems repeat the same ideas: arrays, hash maps, two pointers.
You don't need to solve everything; you need to understand patterns.
Looking at solutions is NOT cheating. Blind copying is.
Consistency beats motivation.
The real skill is thinking, not coding. Code is just the output.
If you're starting LeetCode: don't aim to solve 1,000 problems. Aim to understand 10 patterns deeply. That's the real game.
Next goal: going deeper into Data Structures & Algorithms 🚀
If you're on the same journey, let's connect and grow together.
- Ahmar Ali Khan
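For anyone curious what a "pattern" looks like in practice, here is the classic two-pointers idea on a sorted array; a generic illustration, not a specific problem from the challenge:

```javascript
// Two Pointers: walk in from both ends of a sorted array.
// O(n) instead of checking every pair in O(n^2).
function hasPairWithSum(sorted, target) {
  let lo = 0;
  let hi = sorted.length - 1;
  while (lo < hi) {
    const sum = sorted[lo] + sorted[hi];
    if (sum === target) return true;
    if (sum < target) lo++; // too small: move the left pointer right
    else hi--; // too big: move the right pointer left
  }
  return false;
}

hasPairWithSum([1, 2, 4, 7], 9); // → true (2 + 7)
hasPairWithSum([1, 2, 4, 7], 10); // → false
```

Once you recognize this shape, a dozen seemingly different problems collapse into the same ten lines.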
A very productive day wrapping up both frontend polish and backend infrastructure for CRag. We made some major leaps in how the application handles data ingestion and user experience. Here is what I tackled today:
• Scoped AI chat: Built the logic to let users chat with a specific document instead of querying the entire organization's knowledge base. I updated the MongoDB vector and keyword search pipelines to correctly filter by document ID.
• Stale data cleanup: Fixed a bug where re-uploading a document would mix old embeddings with new ones. The processing worker now properly wipes stale chunks before generating and storing fresh AI embeddings.
• Storage reliability: Resolved a tricky S3 upload issue causing connection refusals by switching from file streams to buffers, making file storage much more stable.
• Better pagination: Upgraded the document library tables to include items-per-page selection and a jump-to-page dropdown, fixing some underlying TypeScript metadata bugs along the way.
• UI and UX polish: Designed a new custom delete confirmation modal with a blurred background overlay, and improved the main layout so the sidebar auto-collapses when clicking outside of it.
Getting the vector search to accurately scope down to a single file took some debugging, but the results are incredibly accurate now.
#buildinpublic #softwareengineering #webdevelopment #ai #rag #reactjs #nodejs #mongodb #learninpublic
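For context, scoping a MongoDB Atlas `$vectorSearch` stage to one document is done with its `filter` option (the filtered field must be indexed as a filter field). A sketch of building such a pipeline; the index name and the `documentId` field are assumptions, not CRag's actual schema:

```javascript
// Build an aggregation pipeline whose vector search is restricted to a
// single document via the $vectorSearch `filter` clause.
function scopedSearchPipeline(queryEmbedding, targetDocId) {
  return [
    {
      $vectorSearch: {
        index: "embeddings_index", // assumed index name
        path: "embedding",
        queryVector: queryEmbedding, // embedding of the user's question
        numCandidates: 200,
        limit: 5,
        filter: { documentId: targetDocId }, // restricts hits to one document
      },
    },
    { $project: { text: 1, score: { $meta: "vectorSearchScore" } } },
  ];
}

const pipeline = scopedSearchPipeline([0.1, 0.2], "doc-123");
// pipeline[0].$vectorSearch.filter → { documentId: "doc-123" }
```

Filtering inside `$vectorSearch` (rather than with a later `$match`) matters: a post-hoc `$match` can silently discard all of the top-k hits, while the in-stage filter searches only the scoped chunks.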
The Great 24-Hour Extraction: How Anthropic's "Source Map" Slip Changed the Game (yes, it's April Fools' Day, but this story is real 🙂)
Yesterday, a simple human error in a build script did what competitors have been trying to do for a year: it revealed the "special sauce" of Claude Code.
The leak: a 59.8 MB JavaScript source map was bundled into version 2.1.88 of @anthropic-ai/claude-code. This wasn't just minified code; it was a roadmap to ~1,900 TypeScript files covering everything from "Self-Healing Memory" to "Agent Swarm" logic.
The "clean room" counter-move: what's most impressive isn't the leak itself, but the speed of the reimplementation.
• Sigrid Jin (@realsigridjin) used OpenAI's Codex (via the oh-my-codex orchestrator) to perform a systematic rewrite.
• By porting the logic to Python, they've created a "legal buffer"—a clean-room implementation that replicates the behavior and architecture (the "claw-code" repo) without infringing on the specific TypeScript copyright.
• As of this morning, the project has already surpassed 50,000 stars on GitHub, making it the fastest-growing repo in history.
The engineering takeaway: we are officially in the era of Instant Legacy. If your competitive advantage is "hidden code," you don't have a competitive advantage. The only thing that stays proprietary in 2026 is your compute and your live data. The logic? That belongs to the agents now.
Anthropic tried to sell a "Security Review" tool, but their own packaging script was the ultimate security failure. The community didn't just look at the code—they ingested it.
Is this the end of "closed source" developer tools, or just a really expensive lesson in .npmignore?
The "Claw-Code" Python port, repository and discussion 👉 GitHub - instructkr/claw-code: the fastest repo in history to surpass 50K stars ⭐, reaching the milestone just 2 hours after publication. https://lnkd.in/dSV3VjCC
#claudecode
Most developers write their AI assistant rules files once, by hand, and never touch them again. They're generic. They're stale. And if you use more than one AI coding tool, you're maintaining duplicates that slowly drift apart.
I built @rulesgen/rulesgen to fix that. It analyzes your actual codebase — frameworks, dependencies, naming patterns, async style, test setup, even recent git history — and auto-generates optimized rules files for:
✅ Claude Code (CLAUDE.md)
✅ Cursor (.cursorrules)
✅ GitHub Copilot (copilot-instructions.md)
✅ Windsurf (.windsurfrules)
All from a single command. All tuned to your specific project, not a boilerplate.
Supports JS/TS, Go, Python, monorepos, Docker, Terraform, GitHub Actions — and 50+ frameworks out of the box.
Get started: npx @rulesgen/rulesgen generate
Open source. MIT licensed. Available on npm now.
Would love feedback from anyone deep in the AI-assisted dev workflow 🙏
#AITools #DevTools #ClaudeCode #Cursor #GitHubCopilot #buildinginpublic #OpenSource
Anthropic accidentally leaked Claude Code's entire source code. 512,000 lines of TypeScript. A source map left in the npm package. Second time in a year. Within hours: 1,100+ GitHub stars. Thousands of developers picking it apart.
I use Claude Code every day. I've looked through the leak. The code is clean: strict TypeScript, the Bun runtime, React + Ink for the terminal UI. None of that's why I use it.
The code tells you WHAT they built. It doesn't tell you WHY. Which features got killed. Which user complaints shaped the architecture. What they learned watching thousands of engineers use it daily.
One detail stood out: their telemetry tracks when users swear at Claude. A frustration metric. That's not a code decision. That's a product decision. Someone watched users struggle and built a feedback loop around it. You can't copy that from a GitHub repo.
I've seen this pattern on every project I've touched. Teams think their moat is their code. It never is. Code can be copied in weeks. Product intuition takes years. And code keeps getting cheaper to produce. AI just made it nearly free.
If your competitive advantage lives in source files, you don't have one. The hardest part of Claude Code isn't writing 512K lines. It's knowing which 512K lines to write. Which features to ship, which to kill, and what problems users have that they haven't told you about yet.
Product engineering is the moat. Not source code.
Anthropic will strip the source map, push a patch, and keep shipping. The teams that should worry aren't the ones getting leaked. They're the ones with nothing worth leaking.
#ProductEngineering #AI