Understanding a new codebase shouldn’t feel like guessing. You open a GitHub repo… 300+ files… and no idea where to start. That’s exactly what Codebase Navigator is trying to fix. https://lnkd.in/gMfQC3sc

Instead of digging through files, you can:
→ paste a GitHub URL
→ ask questions in plain English
→ instantly see how everything connects

What makes it interesting is how it works. You don’t just get answers. You get:
→ a live dependency graph (built from real imports)
→ a code viewer with the relevant lines
→ a repo explorer
→ an AI that explains everything
All updating together.

Even better, it can run locally using tools like Ollama. No API keys. No cloud dependency.

This changes how developers explore code. Instead of asking “Where is this file?”, you ask “How does authentication work?” and see the full flow instantly. It feels less like searching… and more like navigating.

Would you use something like this for large codebases?

#AI #Developers #GitHub #Programming #OpenSource #MachineLearning
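The "dependency graph built from real imports" idea can be sketched in a few lines: parse each file's import statements and keep only the edges that point at other modules in the same repo. This is a minimal illustration of the technique, not Codebase Navigator's actual implementation; the module names are hypothetical.

```python
# Sketch: build a dependency graph from real Python imports.
# (Illustrative only; not the tool's actual code.)
import ast

def extract_imports(source: str) -> list[str]:
    """Return the top-level module names imported by a Python source string."""
    found = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            found.extend(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.append(node.module.split(".")[0])
    return found

def build_graph(files: dict[str, str]) -> dict[str, list[str]]:
    """Map each local module to the local modules it imports."""
    local = set(files)
    return {
        name: sorted(m for m in extract_imports(src) if m in local)
        for name, src in files.items()
    }
```

Running `build_graph({"auth": "import db", "db": "import os"})` would yield `{"auth": ["db"], "db": []}`: external imports like `os` are dropped, so only intra-repo edges remain to draw.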
Developers: you may want to check your GitHub settings before April 24. GitHub is updating its policy so that interactions with personal repositories may be used for AI model training. If you’re using personal repos and don’t want that data included, you’ll need to opt out manually. Copilot Business and Enterprise users are not affected.

Official announcement: https://lnkd.in/eMXCDsuF
To opt out: Profile → Settings → Copilot → Features → Privacy

Are you opting out, or are you fine with your repos being used for training? On one hand, they are public; any unscrupulous actor could already be using them. And if you're already using GitHub Coding Agent, staying opted in may improve your experience. An option to differentiate training on public vs. private repos might make the decision easier.

The blog announcement linked above includes this statement under what will not be used for training irrespective of your choice:

*Content from your issues, discussions, or private repositories at rest. We use the phrase “at rest” deliberately because Copilot does process code from private repositories when you are actively using Copilot. This interaction data is required to run the service and could be used for model training unless you opt out.*

Interaction data is defined as "specifically inputs, outputs, code snippets, and associated context." That sounds like it includes commits.

#github #ai #developer #dataprivacy #softwaredevelopment
Tired of digging through your own GitHub repos? 😅 Same here… so I built a GitHub Expert AI Agent with PowerShell + Azure AI Foundry. 🚀

This isn’t just prompting an LLM:
👉 It connects directly to your repositories
👉 It uses tools to actually read your code
👉 It answers straight from your projects (not the internet)

Ask from your terminal:
“Where is this function used?”
“Which repo has X logic?”

⚡ Real answers. Real time. No guessing. This is where AI becomes a real dev tool.

Check it out here: https://lnkd.in/eY5Yhct3

#MVPBuzz #AI #Foundry #GitHub #PowerShell
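The "uses tools to actually read your code" part follows the standard tool-calling pattern: the agent registers named functions, and the model's chosen tool call is dispatched to the matching function. Below is a minimal sketch of that pattern in Python; the tool name, signature, and dispatch shape are illustrative, not the Azure AI Foundry API.

```python
# Minimal tool-calling dispatch sketch (illustrative; not the actual agent code).
from typing import Callable

TOOLS: dict[str, Callable[..., str]] = {}

def tool(name: str):
    """Decorator that registers a function as a callable tool."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("find_usages")
def find_usages(symbol: str, files: dict[str, str]) -> str:
    """Answer 'Where is this function used?' from real file contents."""
    hits = [path for path, text in files.items() if symbol in text]
    return f"{symbol} used in: {', '.join(hits) or 'nowhere'}"

def dispatch(call: dict, files: dict[str, str]) -> str:
    """Route a model-produced tool call to the registered function."""
    return TOOLS[call["name"]](files=files, **call["arguments"])
```

The key design point is that answers come from the repo contents passed to the tool, not from the model's training data, which is what makes the agent's answers grounded.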
In Bart Pasmans' latest blog, he shows you how to work with AI Foundry and make connections to tools like GitHub (the options are endless!). Building an AI agent with knowledge of your own environment really brings the power of AI to your business. It's a great place to get started with AI and AI Foundry. Check it out here: https://lnkd.in/eeV7FsCX
The era of "all you can consume" AI for developers is officially ending. I woke up yesterday to the news that GitHub Copilot is moving to usage-based billing starting June 1, 2026. Claude Code, Cursor, and other tools have already made similar moves. It's a fundamental shift in how we build with agents.

I posted about this last year: the subsidization of LLM costs was not going to last long. Here we are now; the compute demands have become unsustainable. A single agentic loop can burn more tokens than a developer used in an entire month under the old flat-rate model.

For Copilot, this is what it will look like from June:
- "Unlimited" is replaced by credits: your $10/mo plan now gives you exactly $10 in "GitHub AI Credits." (Personal observation: I easily consume $10 in 6-8 hours of use with Sonnet on Copilot.)
- Token-based billing: you're paying for every input, output, and cached token you consume.
- Code reviews will draw from that budget and will also consume GitHub runner minutes. Double whammy there.

Why does this matter? Because it forces a move toward what I call "Efficient Agency." Under the old model, a good agent was one that eventually found the answer, regardless of how many tokens it burned. The eval benchmark of the future will be solving the problem with the absolute minimum number of tokens.

I don't think this is a bad thing, though. This shift will finally flush out the "wasteful" agents that just loop until they hit a context limit. It's going to reward engineering craftsmanship over "vibe coding" loops.

P.S. At Optimal AI, we've been obsessing over this for a while. We use smart model routing and multi-model techniques to keep quality high while keeping costs drastically lower. This is how we can continue to provide unlimited-style value in a usage-based world.

#GitHubCopilot #AIEfficiency #EngineeringLeadership #LLMOps #OptimalAI
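The "burn $10 in a few hours" claim is easy to sanity-check with token arithmetic. The per-million-token rates below are hypothetical placeholders (not GitHub's actual pricing); the point is that one context-heavy agentic session consumes a large slice of a small credit allotment.

```python
# Back-of-envelope token billing sketch. Rates are hypothetical placeholders.
RATE_PER_M = {"input": 3.00, "output": 15.00, "cached": 0.30}  # USD per 1M tokens

def session_cost(input_toks: int, output_toks: int, cached_toks: int) -> float:
    """Estimate the dollar cost of one session under token-based billing."""
    return round(
        input_toks / 1e6 * RATE_PER_M["input"]
        + output_toks / 1e6 * RATE_PER_M["output"]
        + cached_toks / 1e6 * RATE_PER_M["cached"],
        4,
    )

# One agentic loop that re-reads lots of context:
# session_cost(800_000, 60_000, 2_000_000) -> 3.9 (USD of a $10 allotment)
```

Under these assumed rates, two or three such sessions exhaust a $10/month plan, which is exactly the "Efficient Agency" pressure described above.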
GitHub Copilot is moving to usage-based billing. GitHub just announced that starting June 1, 2026, Copilot will transition to a usage-based model powered by GitHub AI Credits.

A few important details:
✦ Credits over requests: subscriptions now include a monthly credit allotment. Usage is calculated via tokens (input/output/cached), similar to standard LLM APIs.
✦ Core features remain included: standard code completions and "Next Edit" suggestions will not consume credits.
✦ Pooled usage for teams: organizations can now pool credits across seats to eliminate wasted capacity and set granular budget caps.

Why it matters: base prices aren't changing, but the ceiling is lifting. This move enables heavier, agentic workflows while giving engineering leaders better transparency into their actual AI ROI. It's time to start looking at those usage dashboards! 🙂

Full details here: https://lnkd.in/dUa-8hDU

#GitHub #Copilot #GenAI #SoftwareEngineering #AI #DevOps
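The pooled-credits-with-a-cap mechanics reduce to simple arithmetic: seats contribute their allotment to one pool, and an optional organization cap bounds total spend. A minimal sketch under assumed figures (the dollar amounts are illustrative, not GitHub's):

```python
# Pooled credit budgeting sketch. Figures are hypothetical.
from typing import Optional

def pooled_budget(seats: int, allotment_per_seat: float,
                  cap: Optional[float] = None) -> float:
    """Total monthly credits: per-seat allotments pooled, optionally capped."""
    pool = seats * allotment_per_seat
    return min(pool, cap) if cap is not None else pool

def can_spend(used: float, cost: float, budget: float) -> bool:
    """Whether one more request of `cost` credits fits within the budget."""
    return used + cost <= budget
```

The pooling upside is that one heavy agentic user can draw on capacity that idle seats would otherwise waste, while the cap keeps the monthly bill predictable.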
🚀 Just built an end-to-end CI/CD pipeline for a machine learning project using AWS & GitHub Actions! From code to deployment, the entire workflow is automated, ensuring faster, more reliable, and scalable delivery.

🔧 Tech Stack:
- AWS (EC2, ECR, IAM)
- GitHub Actions
- Docker
- Python (ML pipeline)

💡 Key Learnings:
- Automating ML deployment pipelines
- Setting up self-hosted runners on EC2
- Managing secrets & secure deployments
- Containerizing applications with Docker

This project helped me bridge the gap between data science and DevOps (MLOps), turning models into production-ready systems.

🔗 Check it out: https://lnkd.in/gTSysRSt

#AWS #DevOps #MLOps #MachineLearning #GitHubActions #Docker #DataScience #CI_CD
GitHub reportedly crossed 630 million total repositories in 2025, adding 121 million new ones in a single year, or more than 230 every minute. According to GitHub's Octoverse 2025 report, developers pushed nearly 1 billion commits (+25% YoY) and merged 43.2 million pull requests per month on average. A new developer joined the platform every second: ~36 million in 2025 alone, pushing the total past 180 million. AI repositories now top 4.3 million, with LLM-focused projects up 178% year-over-year.

This isn't organic growth; it's AI collapsing the cost of shipping code.
- Copilot's free tier dropped in late 2024; 80% of new devs now use it in their first week
- AI-assisted code accounts for an estimated 29-42% of all commits in 2025
- TypeScript surged to the #1 language, partly because strong typing reduces LLM hallucinations
- India alone added 5.2 million developers; AI lowered the entry barrier everywhere

The nuance often lost in viral screenshots: most of these 121 million repos aren't meaningful projects. Many are short-lived experiments, clones, or AI-generated boilerplate. Open source maintainers are now describing a new burden: reviewing AI "slop" PRs that take longer to reject than human contributions ever did. The flood is real. The signal-to-noise ratio is the actual story.

For engineering leads and builders right now:
- The velocity advantage is real: prototype faster, but build governance around what gets merged
- Expect a wave of quality-gating tooling in 2026; position early, before your PR queue becomes unmanageable

Is the GitHub explosion a sign of AI democratizing software creation, or a ticking maintenance bomb for open source? What are you seeing in your own repos: more signal, or more noise? 👇

#GitHub #OpenSource #AICoding #DeveloperTools #AgenticAI #LLM #AIBenchmarks #SoftwareEngineering
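The headline rates check out against the raw totals; a quick arithmetic sanity check of "more than 230 repos per minute" and "a new developer every second":

```python
# Sanity-check the post's rates from its own totals.
MINUTES_PER_YEAR = 365 * 24 * 60          # 525,600

repos_per_minute = 121_000_000 / MINUTES_PER_YEAR   # ~230.2
devs_per_second = 36_000_000 / (MINUTES_PER_YEAR * 60)  # ~1.14

print(round(repos_per_minute))      # ~230 new repos per minute
print(round(devs_per_second, 2))    # slightly more than one developer per second
```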
Most CI pipelines still do something expensive… even when they don’t need to. They clone your entire repo just to analyze a pull request. That always felt wrong to me. So I built something different:

RepoPulse AI, a zero-clone PR analysis engine.

Instead of downloading your codebase, it reads your repository directly through GitHub’s API layer. Here’s what happens under the hood:
⚡ It triggers instantly on PR events (GitHub webhooks + Probot)
⚡ It analyzes repo structure via GitHub GraphQL (no git clone at all)
⚡ It runs parallel intelligence checks:
  - PR health signals
  - Dependency risk across ecosystems
  - Code ownership (“bus factor”)
  - Merge behavior patterns
⚡ It falls back gracefully to REST + cached intelligence when needed
⚡ It outputs a 0-100 PR Health Score directly inside the pull request

No waiting. No cloning. No pipeline slowdown. Just instant architectural feedback before merge.

I’m curious: would you trust an AI to review your PR before it hits CI?

🔗 GitHub: https://lnkd.in/gyEh3tDi

#GitHub #DevOps #SoftwareEngineering #AI #CodeQuality #BuildInPublic
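A 0-100 score over several parallel checks is typically just a weighted sum of normalized signals. Here is a minimal sketch of that idea; the signal names match the post's checks, but the weights and normalization are hypothetical, not RepoPulse's actual model.

```python
# Weighted-sum health score sketch. Weights are hypothetical.
WEIGHTS = {
    "size": 0.30,             # smaller, focused PRs score higher
    "dependency_risk": 0.25,  # fewer risky dependency changes score higher
    "bus_factor": 0.25,       # code touched by more than one owner scores higher
    "merge_history": 0.20,    # files with calm merge histories score higher
}

def health_score(signals: dict[str, float]) -> int:
    """Combine signals (each pre-normalized to [0, 1], higher = healthier)
    into a single 0-100 score."""
    return round(sum(WEIGHTS[name] * signals[name] for name in WEIGHTS) * 100)
```

A design note: because each check only needs metadata (diff stats, manifests, blame summaries), every input here is obtainable from API responses, which is what makes the zero-clone approach viable.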
I started this project thinking it would take a weekend. It took longer. But what I built is something I’m genuinely proud of. 🚀

A few weeks ago I asked myself a simple question: “What if any AI assistant could use DevOps tools without writing custom integrations for every single model?” That question turned into DevOps MCP Hub, a production-inspired MCP server that connects GitHub, Kubernetes, Prometheus, and Groq AI into a single, standardized interface for any AI assistant.

Here’s what it does 👇
🔧 7 plug-and-play tools: fetch GitHub issues, create draft PRs, inspect pod logs, query metrics, and run LLM-powered incident analysis
🤖 Groq AI integration: sends real pod logs and metrics to Llama 3 and gets back a structured root-cause analysis with severity, affected components, and immediate actions
🗄️ SQLite audit log: every single tool call is recorded with arguments, result, duration, and timestamp
🔄 Exponential-backoff retries: handles GitHub rate limits and Groq API timeouts gracefully
⚙️ Pydantic config validation: fails fast at startup with clear errors, not mid-request

The moment everything clicked was when I ran the test client and watched it:
1. Detect elevated 5xx error rates in Prometheus
2. Pull Kubernetes pod logs showing a CrashLoopBackOff
3. Fetch the related GitHub issue with a real API call
4. Generate an AI incident report, all in one flow 🔥

No Docker. No Kubernetes cluster. Just Python running on my laptop.

The best part? Connect it to Claude Desktop, Cursor, or VS Code and your AI assistant gets all 7 tools automatically: no model-specific code, no re-integration, just MCP.

GitHub repo → https://lnkd.in/gZPE3JJp ⬇️

#MCP #DevOps #Python #AI #GitHub #LLM #Groq #BuildInPublic #SoftwareEngineering #AIEngineering
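The exponential-backoff retry mentioned above is a standard pattern: retry a failing call, doubling the wait between attempts, and re-raise once attempts are exhausted. A minimal sketch of the pattern (illustrative; not the project's actual code, and real usage would catch specific rate-limit/timeout exceptions rather than bare `Exception`):

```python
# Exponential-backoff retry sketch. Delays and exception handling are
# illustrative; catch specific API exceptions in real code.
import time

def with_retries(fn, attempts: int = 4, base_delay: float = 0.5, sleep=time.sleep):
    """Call fn(); on failure, wait base_delay * 2**attempt and retry.
    Re-raises the last error when attempts are exhausted."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * 2 ** attempt)  # 0.5s, 1s, 2s, ...
```

Injecting `sleep` as a parameter keeps the helper testable (a test can record delays instead of actually waiting), and production code often adds random jitter so many clients don't retry in lockstep.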
GitHub just turned agent skills into npm: one `gh skill install` now runs on 6 different AI coding agents. Shipped April 16 in CLI v2.90.0. Copilot, Claude Code, Cursor, Codex, Gemini CLI, Antigravity. Same command. Same skill. Zero vendor lock-in.

I spent the weekend rewiring my workflow around it. The skill I built for Claude Code last month installed into Cursor in 12 seconds. No rewrite. No adapter. Just a commit SHA and a `gh skill add`.

VoltAgent's awesome-agent-skills repo already curates 1,000+ skills. K-Dense-AI dropped a full science/finance/research pack the same week it launched. The ecosystem moved faster than the announcement.

Here's what most engineers are missing: the agent isn't the moat anymore. The skill library is. Whoever owns the best skills wins, not whoever owns the best model. Copilot, Claude, and Cursor all become interchangeable shells the moment skills go portable.

But read this twice before you install anything: GitHub does zero verification. No signatures. No review. No sandbox by default. Your AI is now executing arbitrary instructions from random GitHub repos. A skill can contain prompt injections, hidden system prompts, or shell commands. `gh skill preview` is your only line of defense, and almost nobody will run it. We just recreated the npm supply-chain problem for AI agents, except this time the malicious payload tells your model what to think.

Pin to commit SHAs. Preview before install. Treat every skill like untrusted code, because it is.

The agent wars just ended. The skill wars just started.

#AI #GitHub #DevTools #AIAgents #SupplyChainSecurity