Introducing my new project, "Diff Extractor": an AI-driven assistant that automates analyzing your code changes and generating a professional, conventional commit summary and message.

🤔 What problem am I trying to solve? When you change your codebase, you need a clear summary of what changed. With Diff Extractor, you can see exactly what you modified and, if you want, commit it right away.

The Architecture 🏗️
1️⃣ Client: a lightweight terminal tool built on the Node.js ecosystem that interacts with your local Git environment. It extracts staged changes using "child_process" and communicates securely with the backend.
2️⃣ Backend: a high-performance FastAPI service layer built on the Python ecosystem. It handles data validation with Pydantic and manages the integration with Google Gemini 1.5 Flash. A specialized prompt-engineering strategy forces the LLM to return structured JSON, so the backend can reliably parse and deliver consistent commit messages and logic summaries.

This project is under development, but you can try it yourself by forking it into your GitHub account. You can find the full guide in the GitHub repos.
🟡 Client - https://lnkd.in/gYPqH7-Z
🟡 Backend - https://lnkd.in/g2fdSQ3C
🟡 API - https://lnkd.in/gbp8f93h
😄 Don't forget to add your API key to the .env file, and never push the .env file to GitHub.
#SoftwareEngineering #Python #FastAPI #NodeJS #GenerativeAI #GeminiAI #Git #FullStackDev #CleanCode
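To make the structured-JSON strategy concrete, here is a minimal sketch of the backend side: a Pydantic model validates whatever JSON the LLM sends back before it reaches the client. The `CommitSuggestion` schema and the sample response are hypothetical, not Diff Extractor's actual code.

```python
from pydantic import BaseModel


class CommitSuggestion(BaseModel):
    """Hypothetical shape of the JSON the LLM is prompted to return."""
    type: str     # e.g. "feat", "fix", "refactor"
    subject: str  # one-line commit subject
    summary: str  # longer summary of the change logic


# A sample of what a well-prompted model might send back.
raw = (
    '{"type": "feat", "subject": "add diff parser",'
    ' "summary": "Parses staged hunks into structured records."}'
)

# Raises a ValidationError if the model strayed from the schema,
# so malformed responses never reach the CLI.
suggestion = CommitSuggestion.model_validate_json(raw)
print(f"{suggestion.type}: {suggestion.subject}")
```

Validating at this boundary is what makes the "forces the LLM to return structured JSON" approach robust: a bad response fails loudly on the server instead of producing a garbled commit message.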
AI-Driven Code Change Summary Tool: Diff Extractor
🤖 Built an AI-Powered Code Reviewer CLI using Node.js!

Point it at any project folder and it automatically:
✅ Scans all your source files
✅ Actually runs your code to catch runtime errors
✅ Reads your git history for context
✅ Streams a full AI code review live in your terminal

The interesting engineering part? It uses all 4 child_process methods in Node.js — each for a specific reason:
→ fork — offloads heavy file scanning to a separate Node.js worker
→ execFile — runs your code directly using the node binary (no shell)
→ exec — runs git commands to pull repo history
→ spawn — streams the Gemini AI response live to terminal

No heavy frameworks. Just raw Node.js + Google Gemini API.
🔗 GitHub: https://lnkd.in/gUynjggP
#nodejs #javascript #ai #buildinpublic #opensource #gemini
Ever had to work with an API that has zero documentation? No docs. No source code. Just a black box that takes inputs and spits out outputs.

I built ProtocolSense for exactly that situation. Paste a few input/output examples and it tells you the hidden rules and logic running inside, with confidence scores and evidence, in seconds. Once you have the rules, export them directly to TypeScript, Python, Zod schemas, or OpenAPI specs.

Try it at → protocolsense.com

The backstory: it started as a Gemini API Developer Competition project. Two weeks, built entirely inside Google AI Studio. Submitted and shipped. But I kept thinking about it, so I pulled it out of AI Studio and spent 3 days rebuilding it properly using:
→ Claude Code — handled auth, edge functions, refactoring
→ Groq — replaced Gemini for inference; the speed difference is night and day
→ Supabase — auth, database, edge functions

Total time from idea to real product: under 4 days of actual work.

If you've ever dealt with a legacy system or undocumented API, I'd love to hear how you handled it, and whether something like this would have helped.
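The core idea — inferring hidden rules from observed input/output pairs — can be pictured with a toy sketch. This is not ProtocolSense's actual inference engine, just a minimal illustration: from a handful of observed responses, you can already derive per-field type rules.

```python
# Toy rule inference: collect the type of every field across a few
# observed API responses, then summarize the result as a rule table.
from collections import defaultdict

examples = [
    {"id": 101, "status": "active", "score": 0.92},
    {"id": 102, "status": "banned", "score": 0.11},
]

inferred = defaultdict(set)
for row in examples:
    for field, value in row.items():
        inferred[field].add(type(value).__name__)

# One rule per field; multiple observed types would show up as "int|str".
rules = {field: "|".join(sorted(types)) for field, types in inferred.items()}
print(rules)  # {'id': 'int', 'status': 'str', 'score': 'float'}
```

A real system layers value-range detection, enum discovery, and confidence scoring on top of this, but the shape of the problem is the same: evidence in, rules out.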
The Great 24-Hour Extraction: How Anthropic's "Source Map" Slip Changed the Game (yes, it's April Fools' Day, but this story is real 🙂)

Yesterday, a simple human error in a build script did what competitors have been trying to do for a year: it revealed the "special sauce" of Claude Code.

The Leak: a 59.8 MB JavaScript source map was bundled into version 2.1.88 of @anthropic-ai/claude-code. This wasn't just minified code; it was a roadmap to ~1,900 TypeScript files covering everything from "Self-Healing Memory" to "Agent Swarm" logic.

The "Clean Room" Counter-Move: what's most impressive isn't the leak itself, but the speed of the reimplementation.
• Sigrid Jin (@realsigridjin) used OpenAI's Codex (via the oh-my-codex orchestrator) to perform a systematic rewrite.
• By porting the logic to Python, they've created a "legal buffer": a clean-room implementation that replicates the behavior and architecture (the "claw-code" repo) without infringing on the specific TypeScript copyright.
• As of this morning, the project has already surpassed 50,000 stars on GitHub, making it the fastest-growing repo in history.

The Engineering Takeaway: we are officially in the era of Instant Legacy. If your competitive advantage is "hidden code," you don't have a competitive advantage. The only thing that stays proprietary in 2026 is your compute and your live data. The logic? That belongs to the agents now.

Anthropic tried to sell a "Security Review" tool, but their own packaging script was the ultimate security failure. The community didn't just look at the code — they ingested it.

Is this the end of "closed source" developer tools, or just a really expensive lesson in .npmignore?

The "Claw-Code" Python Port: Repository and Discussion
👉 GitHub - instructkr/claw-code: The fastest repo in history to surpass 50K stars ⭐, reaching the milestone in just 2 hours after publication. https://lnkd.in/dSV3VjCC
#claudecode
Shipped ShipIt-agent v1.0.0: an open-source Python agent runtime for building powerful, production-style agents with a clean API.

What's in it:
- Multiple LLM support: AWS Bedrock / OpenAI / Anthropic / Gemini / Groq / Together / Ollama via adapters
- Prebuilt tools for web search, open URL, workspace files, code execution, memory, planning, verification, artifacts, AskUser, and human review
- MCP support for remote tool discovery and tool execution
- Connector-style tools for Gmail, Google Calendar, Google Drive, Slack, Linear, Jira, Notion, Confluence, and custom APIs
- Session history, memory stores, trace stores, and structured streaming packets
- Notebook test flows for no-tools, multi-tools, MCP, connectors, AskUser, HILT, streaming, and reasoning

Built so you can do things like:
- create an agent with llm, tools, mcps, prompt, and history
- stream live runtime events and tool packets
- plug the agent into chat products or internal workflows
- inspect reasoning with visible planning / decomposition / synthesis / decision tools

GitHub: https://lnkd.in/dpUiYqzF
#python #ai #llm #agents #mcp #opensource #bedrock #toolcalling
Claude Code's source didn't leak. It was already public for years.

Anthropic's AI coding tool had a source map accidentally published to npm this week. VentureBeat, Fortune, Gizmodo all covered it as a major breach. A clean-room Rust rewrite hit 110K GitHub stars in a day - a world record.

But here's what the coverage missed: the entire CLI - 13 MB of JavaScript - was already sitting on npm in plaintext since launch. You could open it in your browser at any point. The source map just added developer comments on top of code that was never protected.

We analyzed it at AfterPack. Parsed the file in 1.47 seconds and pulled out 148,000 string literals: system prompts, tool descriptions, env vars, telemetry events, even a DataDog API key. Then we pointed Claude at its own source and asked it to explain the code. It worked extremely well.

The real question isn't about Anthropic specifically. It's that every JavaScript application ships code to production that AI can now read as easily as you read formatted code. Minification shortens variable names for smaller bundles; it was never designed to hide anything.

We also scanned GitHub.com and claude.ai with our Security Scanner. Found email addresses and internal URLs in production JavaScript. Same class of exposure, zero headlines.

Full analysis with technique comparison and scanner results: https://lnkd.in/dEw_dCBc
Check what your site exposes: npx afterpack audit https://your-site.com
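The string-literal extraction step is easy to picture. Here is a minimal sketch (not AfterPack's actual parser; the bundle snippet and key name below are made up) that pulls quoted literals out of minified JavaScript with a regular expression:

```python
import re

# A made-up fragment of a minified JS bundle. Real bundles are megabytes
# of this, with prompts, URLs, and keys sitting in plain string literals.
bundle = """const a="SYSTEM_PROMPT";fetch("https://api.example.com",{headers:{key:'dd-api-key'}});"""

# Capture double- and single-quoted literals (no escape handling, for brevity).
literals = re.findall(r'"([^"\\]*)"|\'([^\'\\]*)\'', bundle)
strings = [a or b for a, b in literals]
print(strings)  # ['SYSTEM_PROMPT', 'https://api.example.com', 'dd-api-key']
```

A production tool would use a real JS tokenizer to handle escapes and template literals, but even this naive pass shows why minification alone hides nothing: the sensitive strings survive verbatim.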
Claude Code's source code is all over the news as a "major leak" this week. But the code was already public on npm the entire time. The source map added developer comments and project structure, but the actual CLI with all the system prompts and API keys? Already there in plaintext!

What actually surprised me: we scanned GitHub.com and claude.ai with AfterPack's Security Scanner, a web-app analysis tool I built, and found the same class of exposure. Email addresses, internal URLs, env var names, all in production JavaScript.

If a $60B company ships its most sensitive CLI with nothing beyond default bundler minification, it's worth checking what your production JS looks like too.

https://lnkd.in/dWQqF7su or npx afterpack audit https://your-site.com
🚀 𝗙𝗮𝘀𝘁𝗔𝗣𝗜 𝗶𝗻 𝗢𝗻𝗲 𝗣𝗼𝘀𝘁 🤖

Most beginners think building APIs is hard… But with FastAPI? 👉 You can build production-ready APIs in minutes. Here's everything you actually need 👇

🧠 𝗪𝗵𝗮𝘁 𝗶𝘀 𝗙𝗮𝘀𝘁𝗔𝗣𝗜?
👉 A modern Python framework to build APIs
👉 Fast, simple, and production-ready
Built using:
• Starlette (backend engine)
• Pydantic (data validation)

⚡ 𝗪𝗵𝘆 𝗘𝘃𝗲𝗿𝘆𝗼𝗻𝗲 𝗜𝘀 𝗨𝘀𝗶𝗻𝗴 𝗜𝘁
1️⃣ Less code, fewer bugs
2️⃣ Auto-generated API docs
3️⃣ Built-in validation
4️⃣ Async support (high performance)
5️⃣ Type-safe (Python hints)

🛠️ 𝗕𝘂𝗶𝗹𝗱 𝗬𝗼𝘂𝗿 𝗙𝗶𝗿𝘀𝘁 𝗔𝗣𝗜
Python:
    from fastapi import FastAPI

    app = FastAPI()

    @app.get("/")
    def home():
        return {"message": "Hello FastAPI"}
Run it:
    uvicorn main:app --reload

🔑 𝗖𝗼𝗿𝗲 𝗖𝗼𝗻𝗰𝗲𝗽𝘁𝘀
1️⃣ Path Parameters 👉 /user/101
2️⃣ Query Parameters 👉 /search?title=AI
3️⃣ Request Body 👉 Send full data using models

🧩 𝗣𝗼𝘄𝗲𝗿𝗳𝘂𝗹 𝗙𝗲𝗮𝘁𝘂𝗿𝗲𝘀
✔ Pydantic models (data validation)
✔ Dependency Injection
✔ Background tasks
✔ Middleware support

🚀 𝗗𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁 (𝗥𝗲𝗮𝗹 𝗪𝗼𝗿𝗹𝗱)
Run in production:
    gunicorn -k uvicorn.workers.UvicornWorker main:app --workers 4
Deploy on: ☁️ AWS ☁️ Render ☁️ Railway

💡 𝗥𝗲𝗮𝗹𝗶𝘁𝘆 𝗖𝗵𝗲𝗰𝗸
👉 Learning FastAPI ≠ enough
👉 Building APIs people can USE = real skill

🔥 𝗙𝗶𝗻𝗮𝗹 𝗧𝗵𝗼𝘂𝗴𝗵𝘁
FastAPI is not just a framework… 👉 It's the fastest way to go from idea → API → product

💬 Are you still learning Python… or building real APIs?
If this helped you:
👉 Like, Comment & Repost
👉 Follow for more dev content
#FastAPI #Python #BackendDevelopment #APIs #SoftwareEngineering #WebDevelopment #TechCareers #LinkedinLearning 🚀
When building APIs today, performance, developer productivity, and maintainability are not optional — they're critical. That's where FastAPI stands out. After working with several backend stacks, FastAPI consistently proves why it's one of the most efficient choices for modern API development.

Here are the key advantages that make FastAPI different:

1. Extremely High Performance
FastAPI is built on Starlette and Pydantic, making it one of the fastest Python frameworks available — comparable to Node and Go in many benchmarks. You get async performance without sacrificing readability.

2. Automatic Data Validation (Goodbye Boilerplate)
Thanks to Pydantic models, request bodies, query params, and responses are automatically:
- Validated
- Parsed
- Documented
- Typed
You write less code and get more reliability.

3. Automatic Interactive Documentation
Out of the box, FastAPI generates Swagger and ReDoc documentation powered by OpenAPI. Your API is self-documented from day one. No extra setup. No extra libraries.

4. Designed for Type Hints (and It Shows)
FastAPI leverages Python type hints to the fullest. This means:
- Better IDE support
- Fewer bugs
- Clear contracts between frontend and backend
- Easier testing and refactoring

5. Faster Development Time
Less boilerplate, automatic docs, built-in validation, and clean structure mean you ship features faster — with fewer mistakes.

6. Built-in Support for Modern Auth
OAuth2, JWT, security dependencies — all supported natively and cleanly. You don't fight the framework to implement secure APIs.

7. Testing Becomes Simpler
Because of dependency injection and typing, writing tests becomes straightforward and predictable.

8. Clean Architecture Friendly
FastAPI encourages separation of concerns and scales very well as projects grow. It doesn't force bad patterns. It enables good ones.

If you're starting a new backend project in Python and not considering FastAPI, you're probably adding unnecessary complexity. FastAPI lets you focus on business logic instead of fighting the framework.

#FastAPI #Python #BackendDevelopment #APIs #SoftwareEngineering #WebDevelopment
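The "validated, parsed, documented, typed" claim is easy to see in code. A minimal sketch of the Pydantic (v2) validation FastAPI runs on every request, shown here standalone outside an actual app:

```python
from pydantic import BaseModel, ValidationError


class Item(BaseModel):
    name: str
    price: float
    in_stock: bool = True  # optional field with a default


# Lax mode parses compatible input: the string "9.99" becomes a float.
item = Item.model_validate({"name": "widget", "price": "9.99"})
print(item.price, item.in_stock)  # 9.99 True

# Incompatible input is rejected with a precise error location,
# which FastAPI turns into a 422 response before your handler runs.
try:
    Item.model_validate({"name": "widget", "price": "cheap"})
except ValidationError as e:
    print(e.errors()[0]["loc"])  # ('price',)
```

Inside FastAPI, declaring `Item` as a handler parameter is all it takes: the framework performs this validation and feeds the same model into the OpenAPI docs.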
𝗧𝗵𝗲 𝗖𝗹𝗮𝘂𝗱𝗲 𝗖𝗼𝗱𝗲 𝘀𝗼𝘂𝗿𝗰𝗲 𝗰𝗼𝗱𝗲 𝗵𝗮𝘀 𝗹𝗲𝗮𝗸𝗲𝗱.

The irony is almost too perfect: when publishing npm packages, someone at Anthropic appears to have made a very expensive mistake. Alongside the obfuscated cli.js, the public package also included a full cli.js.map file, which absolutely was not supposed to be there.

Which means one simple thing: 𝗮𝗻𝘆𝗼𝗻𝗲 𝘄𝗵𝗼 𝗶𝗻𝘀𝘁𝗮𝗹𝗹𝗲𝗱 𝗼𝗿 𝗱𝗼𝘄𝗻𝗹𝗼𝗮𝗱𝗲𝗱 𝘁𝗵𝗲 𝗽𝗮𝗰𝗸𝗮𝗴𝗲 𝗰𝗼𝘂𝗹𝗱 𝗿𝗲𝗰𝗼𝗻𝘀𝘁𝗿𝘂𝗰𝘁 𝘁𝗵𝗲 𝗼𝗿𝗶𝗴𝗶𝗻𝗮𝗹 𝘀𝗼𝘂𝗿𝗰𝗲 through the sourcemap without much effort.

After that, the internet did what the internet always does: the code spread across repositories almost instantly, and several well-known infosec communities confirmed that this was not a fake and not just a thin wrapper around an API, but a genuinely sophisticated CLI platform.

𝗥𝗲𝗽𝗼𝘀𝗶𝘁𝗼𝗿𝘆: https://lnkd.in/dNptTPk6

The scale is impressive too: 1,906 TypeScript files and roughly 500,000 lines of code.

Some of the more interesting details:
• there are hints of unreleased features like 𝗱𝗲𝗲𝗽 𝗽𝗹𝗮𝗻𝗻𝗶𝗻𝗴, 𝗽𝗲𝗿𝘀𝗶𝘀𝘁𝗲𝗻𝘁 𝗺𝗲𝗺𝗼𝗿𝘆, 𝗮𝗻𝗱 𝗲𝘃𝗲𝗻 “𝘀𝗹𝗲𝗲𝗽”
• you can inspect how Anthropic seems to have implemented multi-agent orchestration, for example in coordinator/coordinatorMode.ts
• 𝘁𝗵𝗲 𝘀𝘆𝘀𝘁𝗲𝗺 𝗽𝗿𝗼𝗺𝗽𝘁𝘀 𝗮𝗿𝗲 𝗮𝗹𝘀𝗼 𝗲𝘅𝗽𝗼𝘀𝗲𝗱, including constants/prompts.ts

For anyone building agentic tooling, this is a rare chance to look under the hood of a very serious product and study the actual engineering, not the marketing layer.

𝗛𝗮𝗽𝗽𝘆 𝗼𝗽𝗲𝗻-𝘀𝗼𝘂𝗿𝗰𝗲 𝗱𝗮𝘆, 𝗜 𝗴𝘂𝗲𝘀𝘀.
I would easily pay $1,000 for this tool. But it's free.

Its creators call it Skill Seekers: the data layer for AI systems. This tool just automated the most painful part of building with Claude Code, turning any documentation source into a ready-to-install AI skill with automatic conflict detection built in.

Skill Seekers does deep AST parsing on Python, JavaScript, TypeScript, Java, C++, and Go repositories, extracts every function, class, and method with parameters and types, then cross-references them against the documentation to find exactly where the docs lie and where the code has moved on without updating anything.

It runs as an MCP server with 26 tools, so you can just tell Claude Code to scrape a GitHub repo, detect conflicts, merge the sources, and package the skill without touching the CLI once. The three-stream GitHub architecture splits repos into Code, Docs, and Insights streams and includes issues, labels, stars, and forks as weighted signals for better topic routing.

Repo link in 1st comment
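The "deep AST parsing" step can be sketched with Python's stdlib ast module. This is a minimal illustration of the idea, not Skill Seekers' actual multi-language parser: walk a source tree and record every function with its parameter names.

```python
import ast

# Hypothetical source snippet standing in for a scraped repo file.
source = '''
def merge(docs: list, code: list, weight: float = 0.5):
    ...

class Router:
    def route(self, topic: str):
        ...
'''

tree = ast.parse(source)
results = []
for node in ast.walk(tree):
    # Methods inside classes are FunctionDef nodes too, so ast.walk
    # finds them without any extra class handling.
    if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
        params = [a.arg for a in node.args.args]
        results.append((node.name, params))
        print(node.name, params)
```

From here, cross-referencing against docs is a set comparison: signatures extracted from code versus signatures claimed in the documentation, with mismatches flagged as conflicts.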