Ever merged the wrong PR into your GitHub repo? 😅 I’ve done it… and it’s painful. That’s why I built Aivana, an AI-powered GitHub PR reviewer that catches bugs, leaks, and risky changes before you hit merge. No more last-minute surprises. No more broken builds.

What it does:
- Connects to your GitHub repos via OAuth
- Analyzes every PR diff using Google's Gemini AI
- Flags bugs, security issues, performance problems, and code smells
- Gives each PR a risk score (0-100) so you know what needs attention first
- Shows feedback right alongside your code changes in a visual diff viewer

The tech stack: Next.js 16, TypeScript, tRPC, Prisma, PostgreSQL, Better Auth, Inngest for background jobs, and shadcn/ui for the UI. Built end-to-end with type safety in mind.

Why I made this: I've lost count of how many times I've merged something I shouldn't have. Aivana is my attempt at making sure that number stops growing. It's not about replacing human review; it's about having a second pair of eyes that never gets tired.

Still iterating on it, but I'm pretty happy with how the risk scoring and structured feedback turned out. The AI catches stuff I'd normally skip over.

If you're curious or want to check it out, give it a try: https://lnkd.in/dHmdmpGb
If you want to contribute, visit: https://lnkd.in/dHb_ghyd (don't forget to give it a star ⭐)

#GitHub #CodeReview #AI #NextJS #TypeScript #DeveloperTools #OpenSource
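The post doesn't show how the 0-100 risk score is assembled, so here is a minimal illustrative sketch in plain JavaScript. The severity weights and the clamping curve are my assumptions, not Aivana's actual scoring logic:

```javascript
// Hypothetical sketch: turn AI-flagged issues into a 0-100 PR risk score.
// Severity weights and clamping are assumptions, not Aivana's real logic.
const SEVERITY_WEIGHTS = { low: 5, medium: 15, high: 30, critical: 50 };

function riskScore(issues) {
  // Sum weighted severities, then clamp into the 0-100 range.
  const raw = issues.reduce(
    (sum, issue) => sum + (SEVERITY_WEIGHTS[issue.severity] ?? 0),
    0
  );
  return Math.min(100, raw);
}

// Example: one critical leak plus two medium code smells.
console.log(riskScore([
  { severity: 'critical' },
  { severity: 'medium' },
  { severity: 'medium' },
])); // → 80
```

The nice property of an additive-then-clamped score is that a single critical finding dominates a pile of nitpicks, which matches the "what needs attention first" framing.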
If you're still reading code the old way, I have news for you. I've onboarded onto codebases with zero documentation more times than I'd like to admit. I grepped around, read the README (if it existed), and looked at past commits. That old approach is outdated.

Here are 4 AI tools that help you understand codebases:

1️⃣ DeepWiki: Replace GitHub.com with deepwiki.com in any repo URL. You get auto-generated wiki docs, architecture diagrams, and an AI chat that actually knows the code. Built by the team behind Devin: deepwiki [dot] com

2️⃣ Code Wiki: Gemini-powered living documentation that regenerates after every commit. Every section hyperlinks to the actual code. The chat knows your entire repo end-to-end. Built by Google: codewiki [dot] google

3️⃣ GitSummarize: Turn any GitHub repo into a full documentation hub with summaries and high-level overviews. Built because the creators found it difficult to understand massive codebases when trying to contribute to open source. Free with rate limits: gitsummarize [dot] com

4️⃣ Code2Tutorial: Paste a GitHub URL, get a step-by-step tutorial walkthrough. Think of it as an AI-generated guided course for any repo: code2tutorial [dot] com

Alternatively, just ask your favorite coding agent (e.g. Codex or Claude Code); it does a decent job of navigating your codebase.

#frontenddeveloper #reactjsdeveloper #webdeveloper #developer #frontend #reactjs #nextjs #redux #ai
Ever stared at a massive GitHub repository and wished you could just ask it how it works? I’ve been diving deep into local AI orchestration and wanted a better way to navigate new codebases. So I built GitChat, a Retrieval-Augmented Generation (RAG) assistant that lets you chat directly with any public GitHub repository. It dynamically clones a repo, chunks the codebase, and streams context-aware answers back to a sleek, minimalist dark-mode interface. The best part? The LLM inference runs entirely locally.

Features:
- Dynamic Repo Ingestion: automatically clones, filters out heavy/binary files, and chunks code for AI ingestion.
- Zero-Hallucination Guardrails: strict prompt engineering ensures the AI only answers using the provided codebase context.
- Local LLM Orchestration: runs models (like Qwen 2.5 Coder) locally, meaning zero API costs for inference.
- Real-Time Streaming UI: seamless, non-blocking message streaming with syntax-highlighted markdown rendering.
- Dynamic Sandbox Controls: real-time adjustments for LLM temperature, K-value context retrieval, and model selection.

The tech stack:
- Frontend & API: Next.js (App Router), Tailwind CSS
- AI Engine: Vercel AI SDK v6, LangChain
- Database & Vector Search: MongoDB Atlas Vector Search
- Local Models: Ollama (qwen2.5-coder, nomic-embed-text)

I'm really proud of the resource management and the seamless integration between the backend data pipeline and the frontend UI. GitChat is fully open source, and I am actively welcoming contributions! I would love for you to check out the architecture, test it out, or even open a PR if you have ideas for new features.

Repository & Code: https://lnkd.in/dNuBfnzZ

#Nextjs #LangChain #Ollama #MongoDB #SoftwareEngineering #AI #WebDevelopment
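As a rough illustration of the "chunks code for AI ingestion" step, here is a minimal fixed-size line chunker with overlap, a common RAG pattern. GitChat's real chunking strategy (sizes, overlap, language awareness) may well differ:

```javascript
// Illustrative sketch: split source into overlapping line-based chunks.
// The overlap keeps context that straddles a chunk boundary retrievable.
// Parameters are assumptions, not GitChat's actual settings.
function chunkCode(source, linesPerChunk = 40, overlap = 5) {
  const lines = source.split('\n');
  const chunks = [];
  const step = linesPerChunk - overlap; // advance less than a full chunk
  for (let start = 0; start < lines.length; start += step) {
    chunks.push(lines.slice(start, start + linesPerChunk).join('\n'));
    if (start + linesPerChunk >= lines.length) break; // last window reached
  }
  return chunks;
}
```

Each chunk would then be embedded (e.g. with nomic-embed-text via Ollama) and stored in the vector index; the overlap is what lets a K-value retrieval pull coherent context even when the relevant code sits at a chunk edge.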
🤖 Built an AI-Powered Code Reviewer CLI using Node.js! Point it at any project folder and it automatically:
✅ Scans all your source files
✅ Actually runs your code to catch runtime errors
✅ Reads your git history for context
✅ Streams a full AI code review live in your terminal

The interesting engineering part? It uses all 4 child_process methods in Node.js, each for a specific reason:
→ fork: offloads heavy file scanning to a separate Node.js worker
→ execFile: runs your code directly using the node binary (no shell)
→ exec: runs git commands to pull repo history
→ spawn: streams the Gemini AI response live to the terminal

No heavy frameworks. Just raw Node.js + the Google Gemini API.

🔗 GitHub: https://lnkd.in/gUynjggP

#nodejs #javascript #ai #buildinpublic #opensource #gemini
Running Claude Code at your terminal and running it inside a GitHub Actions runner look almost identical. They are not.

At the terminal, if Claude goes into a retry loop, you notice it in seconds and kill it. In CI, nothing stops it except a timeout you probably did not set, and the first sign of trouble is the API bill at the end of the month. The defaults that make interactive use pleasant are the same defaults that make unattended use expensive.

I put together a guide on Refactix covering the flags and patterns that actually hold up in a pipeline: the three flags that matter (-p, --output-format json, and --bare), a GitHub Actions workflow that fails gracefully, the JSON parser you need between Claude and your GitHub API, cost controls that keep review jobs bounded, and the security boundaries that stop a prompt injection in a PR diff from turning into a secrets leak. It also covers when to graduate from the CLI to the Claude Agent SDK or Dispatch for bigger workflows.

Worth a read if you are thinking about wiring any AI agent into your CI, not just Claude Code. The shape of the problem generalizes.

Full guide on Refactix: https://lnkd.in/grDAv5qA

#ClaudeCode #CICD #AI #DevOps #SoftwareEngineering
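As a sketch of the "JSON parser between Claude and your GitHub API" step: assuming the CLI's JSON payload carries `result` and `is_error` fields (verify against the Claude Code version you pin in CI), a defensive extractor might look like this:

```javascript
// Sketch of a defensive parser for `claude -p --output-format json` output.
// Field names (result, is_error) are assumptions — check them against the
// CLI version pinned in your pipeline before relying on this.
function extractReview(rawJson) {
  let payload;
  try {
    payload = JSON.parse(rawJson);
  } catch {
    // Fail the job loudly rather than posting garbage to the PR.
    throw new Error('Claude output was not valid JSON');
  }
  if (payload.is_error) {
    throw new Error('Claude reported an error; skipping PR comment');
  }
  return payload.result; // body for the GitHub review comment
}
```

The point of the two throw paths is exactly the "fails gracefully" property from the post: a malformed or errored response should fail the CI step, never silently post a broken comment to the PR.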
Introducing my new project, "Diff Extractor", an AI-driven assistant that automates analyzing code changes and generating a professional, conventional summary and commit message.

🤔 What problem am I trying to solve? When you make changes to your codebase, you need a clear summary of what changed, and writing one by hand is tedious. With Diff Extractor, you can see exactly what you changed and, if you want, commit it with a generated message.

The Architecture 🏗️
1️⃣ A lightweight terminal tool that interacts with the local Git environment. It extracts staged changes using "child_process" and securely communicates with the backend, built on the Node.js ecosystem.
2️⃣ A high-performance API that serves as the service layer. It handles complex data validation with Pydantic and manages the integration with Google Gemini 1.5 Flash. I implemented a specialized prompt engineering strategy that forces the LLM to return structured JSON, ensuring the backend can parse and deliver consistent commit messages and logic summaries, built on the Python ecosystem.

This project is under development, but you can try it yourself by forking it into your GitHub account. You can find the full guide in the GitHub repos.

🟡 Client - https://lnkd.in/gYPqH7-Z
🟡 Backend - https://lnkd.in/g2fdSQ3C
🟡 API - https://lnkd.in/gbp8f93h

😄 Don't forget to add the API key in the .env file, and don't push the .env file to GitHub.

#SoftwareEngineering #Python #FastAPI #NodeJS #GenerativeAI #GeminiAI #Git #FullStackDev #CleanCode
512,000 lines of TypeScript. One missing .npmignore. The fastest-growing GitHub repo in history.

The Claude Code source code leaked on March 31st, and the reaction told us something more interesting than anything inside the code itself. Within hours, developers weren't just reading it. They were cloning it, forking it, rebranding it. A repo called "claw-code" hit 100K GitHub stars in a single day.

Here's the take nobody's saying: the leak accidentally ran the most expensive open-source experiment in AI history, and the result was that the code isn't the moat.

Thousands of engineers tore through 512K lines and found genuinely brilliant engineering. A three-layer memory architecture. A plugin-based tool system. An unreleased "KAIROS" daemon mode for always-on background agents. Fascinating stuff. But here's what they didn't find: the models. The training data. The RLHF. The alignment work. The thing that actually makes Claude useful isn't in any npm package. It never was.

The companies racing to clone Claude Code from a leaked skeleton are learning the hard way what Anthropic already knew: the CLI is the last mile. The model is the product.

Meanwhile, the real story, that a missing debug config file exposed 59MB of internal source in minutes, is a reminder that even the most sophisticated AI systems are still built by humans who forget to update .npmignore.

Which part of this surprises you more: what they found, or how easy it was to leak?

#AI #ClaudeCode #OpenSource #AIEngineering
Every time I opened a pull request, I felt it. That awkward wait for feedback. What if the first round of review didn't have to wait at all?

That question became my MEng Capstone at the University of Cincinnati. My inspiration was simple: combine AI with traditional web development to solve a real developer pain point. So I built an AI-Powered Pull Request Reviewer, a GitHub App that automatically analyzes code the moment a PR is opened and posts intelligent feedback directly in the thread.

And I did it entirely with free resources, even for deployment:
- React frontend on GitHub Pages
- FastAPI backend on Render
- MongoDB free tier for storage
- 4 AI providers (Hugging Face, Groq, OpenRouter, SambaNova) via free inference APIs, and the architecture is flexible enough to plug in many more!

Biggest lesson? Real engineering happens when you work within real-world constraints and still ship something end-to-end.

🔗 Frontend (Live): https://lnkd.in/gkGF-PWE
🔗 Frontend Code: https://lnkd.in/ggJyY-Bc
🔗 Backend (Live): https://lnkd.in/gYHUkmwX
🔗 Backend Code: https://lnkd.in/guRag2ht
🔗 GitHub App: https://lnkd.in/gVZFYcWN

#AI #SoftwareEngineering #GitHub #CodeReview #FastAPI #React #CapstoneProject #UniversityOfCincinnati #BuildingInPublic #Opensource #CICD
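A tiny sketch of the trigger logic such a GitHub App needs: review only when a PR is opened or updated, and ignore every other webhook. The event and action names follow GitHub's webhook payloads; the surrounding handler wiring is assumed:

```javascript
// Sketch of the "review on PR open" gate. GitHub delivers the event name in
// the X-GitHub-Event header and the action inside the JSON payload; which
// actions to accept is a design choice, not this project's confirmed list.
function shouldReview(event, payload) {
  return (
    event === 'pull_request' &&
    ['opened', 'synchronize', 'reopened'].includes(payload.action)
  );
}
```

Including 'synchronize' means new commits pushed to an open PR also get re-reviewed, which keeps the feedback current without reacting to noise like label changes.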
Hey network! You might remember (or not) that last weekend I posted about my personal projects and how I've made peace with generative AI, to the point that I've put my greatest efforts into automating my routines and making my agents my co-pilots (and even my full pit crew).

Among the comments I received, some asked about the strategies I followed or my personal setup in general; while I could have shared this directly, I decided to take it a step or two further.

"One command to scaffold them all, one command to brief them, one command to wire the agents and in the darkness free them." 🧙♂️

This is primer! It does in a few minutes what used to take me a full day of setup, giving AI agents everything they need to actually be useful: the rules they read, the domain knowledge they reference, the git discipline they follow, and a tailored brief that tells them exactly where to start building.

Although it is still a WIP, this first version works fine with JS/TS stacks, but the idea is to provide a library big enough to cover the most popular architectures, always prioritizing industry (and personal) standards.

For v2.0.0 I hope to integrate the really big difference:
- `primer plan`: describe a timeline and available hours, and primer generates a phased build plan with tasks ready to push to GitHub Projects.
- Vue & Svelte skill packages: expanding the skill library beyond the current React/Node.js focus.

What other ideas would you suggest?

npm (@monomit/primer) - https://lnkd.in/eJrpwDbr
primer monorepo - https://lnkd.in/eUKKJHxY

#AI #DevTools #OpenSource #DeveloperExperience #Cursor
Every developer has faced this problem: building a project takes days or weeks, but writing a proper README takes hours. And often, that README decides whether your project gets noticed or ignored.

To solve this, I built 𝗥𝗘𝗔𝗗𝗠𝗘 𝗚𝗲𝗻𝗲𝗿𝗮𝘁𝗼𝗿, a full-stack web application that automatically generates structured, professional README files using AI.

The application connects directly with your GitHub account using OAuth, fetches your repositories, and allows you to select any project. Once selected, the backend analyzes the repository structure, code, and metadata, and generates a complete README within seconds.

Key capabilities:
• GitHub OAuth authentication (no manual setup required)
• Repository search and filtering
• AI-based README generation tailored to your project
• Preview rendered output or view raw Markdown
• Download, copy, or directly commit the README to GitHub
• Syntax-highlighted code sections and structured formatting

Tech Stack:
• Frontend: React.js
• Backend: Node.js
• Deployment: Vercel

Frontend Repository: https://lnkd.in/dc8ex_f4
Backend Repository: https://lnkd.in/dZ2RfB7R

This project focuses on reducing manual effort and improving project presentation, especially for developers who regularly build and publish repositories. The project is open source, and contributions are welcome.

#WebDevelopment #OpenSource #GitHub #AI #FullStack #DeveloperTools
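As a hypothetical sketch of the generation step, the backend could first build a README skeleton from repo metadata for the AI to fill in. The section list here is my assumption, not the app's actual template:

```javascript
// Hypothetical sketch: derive a structured README skeleton from repository
// metadata before the AI fills in each section. Sections are assumed.
function readmeSkeleton(repo) {
  return [
    `# ${repo.name}`,
    '',
    repo.description ?? '',
    '',
    '## Installation',
    '## Usage',
    '## Contributing',
    '## License',
  ].join('\n');
}
```

Generating the scaffold deterministically and letting the model fill sections keeps the output "structured and professional" even when the AI's prose varies between runs.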
14 AI applications now use mcp-memory-service for persistent memory. The latest didn't come from me.

The memory awareness hooks I built for Claude Code, which inject relevant memories at session start and compress them during context compaction, apparently worked well enough as a pattern. @irizzant took that architecture and ported it to OpenCode. Same concept, different host, built entirely against the REST API. The plugin searches semantically by project name, deduplicates across queries, and handles timeouts gracefully. Read-only by design. No Python imports, no protocol coupling.

That's the real test of whether your abstractions are right: can someone who's never seen your codebase replicate the pattern for a different platform? Turns out the answer is yes, because the HTTP API carries the same capabilities as the MCP tools. 😊

https://lnkd.in/ePYekaAF v10.36.0

#SemanticMemory #OpenSource
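A minimal sketch of the "deduplicates across queries" behavior described above: several semantic searches can return overlapping memories, so results are collapsed before injection. The `id` field is an assumption about the REST API's response shape:

```javascript
// Sketch of cross-query deduplication: multiple semantic searches may
// surface the same memory, so collapse by id while preserving the order
// in which memories were first seen. The `id` field is assumed.
function dedupeMemories(resultBatches) {
  const seen = new Set();
  const unique = [];
  for (const batch of resultBatches) {
    for (const memory of batch) {
      if (!seen.has(memory.id)) {
        seen.add(memory.id);
        unique.push(memory);
      }
    }
  }
  return unique;
}
```

Keeping first-seen order means the highest-priority query (e.g. the project-name search) wins ties, so the injected context stays stable across sessions.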