GitHub reportedly crossed 630 million total repositories in 2025, adding 121 million new ones in a single year: more than 230 every minute. According to GitHub's Octoverse 2025 report, developers pushed nearly 1 billion commits (+25% YoY) and merged 43.2 million pull requests per month on average. A new developer joined the platform every second, roughly 36 million in 2025 alone, pushing the total past 180 million. AI repositories now top 4.3 million, with LLM-focused projects up 178% year over year.

This isn't organic growth. It's AI collapsing the cost of shipping code:
- Copilot's free tier dropped in late 2024; 80% of new devs now use it in their first week
- AI-assisted code accounts for an estimated 29–42% of all commits in 2025
- TypeScript surged to the #1 language, partly because strong typing reduces LLM hallucinations
- India alone added 5.2 million developers; AI lowered the entry barrier everywhere

The nuance often lost in viral screenshots: most of these 121 million repos aren't meaningful projects. Many are short-lived experiments, clones, or AI-generated boilerplate. Open source maintainers are now describing a new burden: reviewing AI "slop" PRs that take longer to reject than human contributions ever did. The flood is real. The signal-to-noise ratio is the actual story.

For engineering leads and builders right now:
- The velocity advantage is real. Prototype faster, but build governance around what gets merged.
- Expect a wave of quality-gating tooling in 2026; position early, before your PR queue becomes unmanageable.

Is the GitHub explosion a sign of AI democratizing software creation, or a ticking maintenance bomb for open source? What are you seeing in your own repos: more signal, or more noise? 👇

#GitHub #OpenSource #AICoding #DeveloperTools #AgenticAI #LLM #AIBenchmarks #SoftwareEngineering
GitHub Adds 121M Repos in 2025, AI Code Accounts for 29-42% of Commits
More Relevant Posts
The era of "all you can consume" AI for developers is officially ending. Woke up to the news yesterday that GitHub Copilot is moving to usage-based billing starting June 1, 2026, and Claude Code, Cursor, and other tools have followed suit. It's a fundamental shift in how we build with agents.

I posted about this last year: the subsidization of LLM costs was not going to last long. Here we are. The compute demands have become unsustainable; a single agentic loop can burn more tokens than a developer used in an entire month under the old flat-rate model.

For Copilot, this is what it will look like from June:
- "Unlimited" is replaced by credits: your $10/mo plan now gives you exactly $10 in "GitHub AI Credits." (Personal observation: I easily consume $10 in 6–8 hours of use with Sonnet on Copilot.)
- Token-based billing: you're paying for every input, output, and cached token you consume.
- Code reviews will draw from that budget and will also consume GitHub runner minutes. Double whammy there.

Why does this matter? Because it forces a move toward what I call "Efficient Agency." Under the old model, a good agent was one that eventually found the answer, regardless of how many tokens it burned. The new eval benchmark will be solving the problem with the absolute minimum number of tokens.

However, I don't think this is a bad thing. This shift will finally flush out the "wasteful" agents that just loop until they hit a context limit. It's going to reward engineering craftsmanship over "vibe coding" loops.

P.S. At Optimal AI, we've been obsessing over this for a while. We use smart model routing and multi-model techniques to keep quality high while keeping costs drastically lower. This is how we can continue to provide unlimited-style value in a usage-based world.

#GitHubCopilot #AIEfficiency #EngineeringLeadership #LLMOps #OptimalAI
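To make the credit math above concrete, here is a minimal sketch of how per-token billing eats a flat credit budget. The per-token rates and the session sizes are invented placeholders for illustration, not GitHub's actual prices.

```python
# Hypothetical illustration of credit burn under per-token billing.
# All rates below are made-up placeholders, not real Copilot pricing.

def session_cost(input_tokens, output_tokens, cached_tokens,
                 rate_in=3e-6, rate_out=15e-6, rate_cached=0.3e-6):
    """Dollar cost of one agent session at the (assumed) rates."""
    return (input_tokens * rate_in
            + output_tokens * rate_out
            + cached_tokens * rate_cached)

def sessions_until_exhausted(budget, cost_per_session):
    """How many such sessions a flat credit budget covers."""
    return int(budget // cost_per_session)

# One heavy agentic loop: 400k input, 60k output, 1M cached tokens.
cost = session_cost(400_000, 60_000, 1_000_000)
print(f"per-session cost: ${cost:.2f}")
print(f"sessions on a $10 budget: {sessions_until_exhausted(10, cost)}")
```

At these assumed rates a single heavy loop costs a couple of dollars, which is why a $10 monthly credit can disappear in a day of agentic work.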
GitHub just turned agent skills into npm: one `gh skill install` now runs on 6 different AI coding agents. Shipped April 16 in CLI v2.90.0. Copilot, Claude Code, Cursor, Codex, Gemini CLI, Antigravity. Same command. Same skill. Zero vendor lock.

Spent the weekend rewiring my workflow around it. The skill I built for Claude Code last month installed into Cursor in 12 seconds. No rewrite. No adapter. Just a commit SHA and a `gh skill add`.

VoltAgent's awesome-agent-skills repo already curates 1,000+ skills. K-Dense-AI dropped a full science/finance/research pack the same week it launched. The ecosystem moved faster than the announcement.

Here's what most engineers are missing: the agent isn't the moat anymore. The skill library is. Whoever owns the best skills wins, not whoever owns the best model. Copilot, Claude, and Cursor all become interchangeable shells the moment skills go portable.

But read this twice before you install anything: GitHub does zero verification. No signatures. No review. No sandbox by default. Your AI is now executing arbitrary instructions from random GitHub repos. A skill can contain prompt injections, hidden system prompts, or shell commands. `gh skill preview` is your only line of defense, and almost nobody will run it.

We just recreated the npm supply chain problem for AI agents, except this time the malicious payload tells your model what to think.

Pin to commit SHAs. Preview before install. Treat every skill like untrusted code, because it is.

The agent wars just ended. The skill wars just started.

#AI #GitHub #DevTools #AIAgents #SupplyChainSecurity
GitHub just reported that 51% of all code committed to its platform in early 2026 was generated or substantially assisted by AI. Think about that for a moment. A majority of commits to the world's largest code host now have an AI somewhere in the loop. And we're only 3 years out from GitHub Copilot's general availability.

The supporting data tells the same story:
→ McKinsey: AI coding tools cut routine coding time by 46% (4,500+ developers, 150 enterprises)
→ Stack Overflow: 84% of developers are using or planning to adopt AI coding tools
→ 20M+ GitHub Copilot users, with agent mode now standard

But here's what the headline misses: the developers seeing the biggest gains aren't the ones who replaced their workflow with AI. They're the ones who redesigned their workflow around AI.

The 2026 developer stack isn't one AI tool. It's a combination:
• Claude Code or Cursor for complex reasoning and multi-file edits
• Copilot for line-level autocomplete
• Local models (Ollama, Tabby) for sensitive or proprietary code

The developers who treat AI as a single tool will plateau. The ones treating it as a new layer in their stack are the ones compounding. 51% of code is already there. The other 49% won't wait long.

#DeveloperProductivity #AITools #SoftwareEngineering #GitHub #FutureOfWork
Recently explored an interesting open-source project: git-lrc 🚀

AI agents can write code quickly, but they can also silently introduce bugs, remove logic, or change behavior, often discovered later in production. git-lrc tackles this by running AI-powered code reviews on every commit, helping developers catch issues early and improve code quality. What stood out to me is how seamlessly it integrates with the Git workflow, making it simple and practical for real-world use.

Great work by @Shrijith Venkatramana and the team 👏 Looking forward to exploring it more.

https://lnkd.in/gpqBD-FF

#AI #OpenSource #GitHub #CodeReview #Developers
vibexplain just got a big update based on early feedback from those who tried it (big thanks!). (Link here: https://lnkd.in/gMDutaVU)

The original version only worked by intercepting standard output (stdout), which didn't work effectively with tools like Kiro or Cursor that run commands internally or perform tool use. Here's what changed.

🔍 File watcher: vibexplain now watches your project directory for real changes. New files, dependency installs, Terraform resources, git commits: it detects all of it regardless of which agent you use. Kiro, Cursor, Windsurf, they all just work now.

⚡ Claude Code and Codex integration: vibexplain auto-detects and tails the JSON session transcripts that Claude Code or Codex already write. Every Bash command shows up on the dashboard. Zero config.

🏗️ Smarter project scanning: the scanner now extracts real resource names from Terraform, Serverless, and CloudFormation files. We also have session history to track where you were and how you progressed.

Works for new projects (dashboard builds from zero) and existing ones as well. Still zero config. Still one dependency.

#codex #claudecode #github #vibecoding #ai
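For readers curious what "watching a project directory for real changes" involves, here is a minimal polling-based sketch of the idea using only the standard library. This is an illustration of the general approach, not vibexplain's actual implementation, and the function names are my own.

```python
# Minimal sketch of a polling file watcher: snapshot a directory tree,
# then diff successive snapshots to spot created/modified/deleted files.
# Illustrative only; real watchers typically use OS-level change events.
import os
import time

def snapshot(root):
    """Map each file path under root to its last-modified time."""
    state = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                state[path] = os.stat(path).st_mtime
            except OSError:
                pass  # file vanished between listing and stat
    return state

def diff_snapshots(before, after):
    """Classify changes between two snapshots."""
    created = [p for p in after if p not in before]
    modified = [p for p in after if p in before and after[p] != before[p]]
    deleted = [p for p in before if p not in after]
    return created, modified, deleted

def watch(root, interval=1.0):
    """Poll forever, printing each change as it appears."""
    before = snapshot(root)
    while True:
        time.sleep(interval)
        after = snapshot(root)
        for label, paths in zip(("created", "modified", "deleted"),
                                diff_snapshots(before, after)):
            for path in paths:
                print(f"{label}: {path}")
        before = after
```

The agent-agnostic property the post describes falls out naturally: a watcher sees filesystem effects, so it doesn't matter which tool ran the command.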
When you vibe code within your favorite IDE (Kiro in my case), be it VS Code, Claude Code, Cursor, etc., the chat agent runs dozens of CLI commands on your behalf. But if you are not a seasoned developer, can you really keep track of what it's building?

"vibexplain" is your curious buddy that sits between the agent and what it builds for you:

🧠 A live mind map of every command, grouped by category and context aware
🏗️ A real-time architecture diagram that draws itself as features and services are built, like a draw.io diagram
📊 A knowledge graph that reveals the relationships between all entities: files, packages, services, configs, and endpoints
📖 Most importantly, plain-English explanations of what each command does and why

GitHub: https://lnkd.in/gMDutaVU. Would love your feedback, and please star it if you liked it.

Works with any AI coding tool. Zero config. One dependency. Just wrap your agent and watch:

vibexplain -- kiro-cli chat
vibexplain -- claude "build me an API"
vibexplain -- cursor-cli "add auth"

Try it: npx vibexplain --demo

Developed using Kiro CLI!

#VibeCoding #AI #DevTools #OpenSource #AWS #BuildInPublic #kiro #claudecode #github #kirocli #knowledgegraph #claude
GitHub Copilot is shifting its billing model from a flat monthly fee per user to a usage-based, token-centric system, a move that could significantly alter operational costs for developers and organizations. This transition signals a broader trend within the AI-as-a-service landscape toward granular billing, where users pay directly for the computational resources and AI processing power consumed. It necessitates a new level of cost management and predictability analysis for teams leveraging such advanced coding tools.

For heavy users, this could lead to higher costs than the previous flat-rate subscription, while lighter users might see reduced expenses. The shift puts the onus on developers to monitor their AI assistance usage closely, making usage analytics a crucial part of optimizing expenditure on these essential tools. Understanding token consumption patterns will be key.

This move prompts organizations to reassess how they integrate AI code generation into their workflows, potentially driving demand for better analytics and forecasting tools for AI service usage. It also sets a precedent that other AI development tools and platforms might follow, indicating a maturation of the market where providers seek to align billing more closely with the resources actually consumed.

#NCNNews2026 #NameCoinNews #AICoding #DeveloperTools #GitHubCopilot #UsageBasedBilling #SoftwareDevelopment #TechFinance #AIStrategy
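The forecasting the post calls for can start very simply: extrapolate month-end spend from the daily burn observed so far. A minimal sketch, with made-up numbers and no real provider log format assumed:

```python
# Naive linear forecast of month-end AI spend from daily costs so far.
# The daily figures below are invented for illustration.

def projected_monthly_cost(daily_costs, days_in_month=30):
    """Average observed daily burn, scaled to the month length."""
    if not daily_costs:
        return 0.0
    return sum(daily_costs) / len(daily_costs) * days_in_month

daily = [0.80, 1.10, 0.95, 1.25, 0.90]  # dollars per day so far
print(f"projected month-end spend: ${projected_monthly_cost(daily):.2f}")
```

Even this crude projection is enough to flag a team that is on pace to blow past its flat-rate baseline before the invoice arrives; real tooling would segment by user, model, and token type.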
GitHub Copilot went from "we can't take new users" to "pay per use."

That's not a pricing update. That's a signal.

When a product is capacity-constrained, it means demand outran infrastructure. That's a good problem. But it also means the old pricing model (flat subscription, unlimited usage) stopped making sense. Because some users were using a little. And some were using everything.

Usage-based billing fixes that. The heavy users pay more. The light users pay less. The economics align with the value actually being delivered.

But here's the more interesting implication. When AI coding tools move to usage-based pricing, the conversation inside every engineering org shifts. It's no longer "do we have Copilot?" It's "how much are we actually using it, and is the output worth what we're paying?"

That's a harder question. And a healthier one.

The teams that use it constantly and ship faster will justify the cost easily. The teams that had it running in the background, barely touched, on a flat subscription? Now they have to reckon with whether AI actually changed how they work, or just felt like it did.

Usage-based pricing doesn't just change what you pay. It forces honesty about what you got.

#GitHub #Copilot #AI #Engineering #FutureOfWork
I just built an AI-powered code review bot from scratch, and it works autonomously on real GitHub pull requests. 🤖

Here's what it does: every time a PR is opened, the bot automatically triggers, reads the code changes, and posts a structured review as a comment, covering bugs, security vulnerabilities, code quality issues, and performance improvements. No human needed. Zero manual work.

Here's how it works under the hood:
- GitHub Actions listens for every new PR
- The diff is fetched using the GitHub API
- The diff is sent to LLaMA 3.3 70B (via the Groq API) for review
- The AI's feedback is posted back as a PR comment automatically

The whole pipeline runs on GitHub's servers, fully autonomous.

What I learned building this:
→ How to integrate LLMs into real developer workflows, not just chatbots
→ GitHub Actions and CI/CD pipeline setup from scratch
→ Working with REST APIs (GitHub + Groq) end to end
→ How to handle auth, secrets, and permissions in production environments
→ Debugging live systems; nothing teaches you faster than a 403 error at midnight 😅

This is the kind of tool that saves hours every week for dev teams, and I built it in Python over a weekend. If you're a startup struggling with slow code reviews or want to see this in action, let's talk.

🔗 GitHub: https://lnkd.in/e9T84Xn3
🌐 Portfolio: aaradhya1807.github.io

#Python #AI #GitHub #Automation #OpenToWork #MachineLearning #DevTools #LLM
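One step in a pipeline like this, turning the fetched diff into a structured review request for the model, can be sketched as a pure function. The function name, prompt wording, and truncation limit here are my own illustration, not the author's actual code.

```python
# Illustrative sketch: wrap a unified diff in review instructions,
# truncating oversized diffs so the request stays within model limits.
# Prompt text and limits are assumptions, not the bot's real prompt.

def build_review_prompt(diff, max_chars=12_000):
    """Return an LLM prompt asking for a structured review of the diff."""
    truncated = diff[:max_chars]
    note = "\n[diff truncated]" if len(diff) > max_chars else ""
    fence = "`" * 3  # fenced code block markers for the diff
    return (
        "Review the following pull request diff. Report bugs, security "
        "vulnerabilities, code-quality issues, and performance problems "
        "as a bulleted list.\n\n"
        + fence + "diff\n" + truncated + note + "\n" + fence
    )

sample = "--- a/app.py\n+++ b/app.py\n+print(password)\n"
prompt = build_review_prompt(sample)
```

Keeping this step a pure function makes it easy to unit-test the prompt logic without touching the GitHub or Groq APIs; the CI workflow only needs to feed in the diff and post back the model's reply.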
Agentic workflows and parallelised reasoning sessions are demanding so much processing power that GitHub is restricting Copilot Individual plans. New sign-ups are paused, and strict token-based usage caps are being enforced directly inside VS Code and the CLI. Will your engineering team need to adjust its CI/CD pipelines and daily coding habits? https://lnkd.in/eCUQiAeY #github #copilot #agenticai #developers #ai #softwaredevelopment #technology
When I said we lived in an AI bubble, nobody believed me. The democratic manifesto was saying: "everybody can code, everybody should code."

Yesterday, GitHub paused new sign-ups for GitHub Copilot Pro, Pro+, and Student plans, and announced plans to increase fees on AI consumption. We've been consuming resources out of a marketing program, and an ideological one: subsidized marketing with cheap tokens to hook users, now giving way to sustainable infrastructure and real-world pricing.

The only way to survive is using local models as a "bridge" or a "pre-processor" for giants like Gemini and Claude; it's the only logical move to avoid going broke while staying productive.

For instance, tools like Context7, an open-source Model Context Protocol (MCP) server developed by Upstash, provide AI coding assistants with real-time, up-to-date documentation for programming libraries and frameworks. It addresses a critical problem: AI models often have outdated knowledge about software libraries or hallucinate deprecated APIs, leading to incorrect code suggestions.

See how to wire it into GitHub Copilot CLI, with the container booting autonomously every time the terminal opens: https://lnkd.in/dEGvAauq

#AICoding #MCP #GitHubCopilot #Upstash #Context7 #LocalAI #DevTools #OpenSource #SoftwareArchitecture #CodingBubble #LLM #TechStrategy