Got this email from GitHub yesterday: Copilot is moving to usage-based billing. Annual plans are being retired. Starting June 1, you pay based on token consumption — input, output, and cached tokens — with multipliers per model.

This is the moment every AI-powered product will eventually face. And it has implications far beyond Copilot. If your product is a thin wrapper around a third-party AI API, your margins are not yours to control. The API provider can change pricing at any time, and your business model shifts overnight. This is exactly what's happening here: GitHub is passing through the real cost of LLM inference to users.

A few things this should make every engineering leader think about:

- Tokens cost money. Someone is writing the check. If your team adopted AI tools in the excitement phase without modeling the cost, the budget conversation is coming, and it won't be comfortable.
- Usage-based billing means unpredictable costs. A developer who uses Copilot heavily could cost 3-5x more per month than one who doesn't. Finance teams aren't ready for this variance.
- If you're a startup building on top of OpenAI, Anthropic, or any LLM API, your cost structure is a function of someone else's pricing decisions. That's not a moat. That's a dependency.
- The "AI saves developer time" ROI story now has a denominator: time saved vs. tokens consumed. Someone will have to prove the math works.

This isn't a Copilot problem. It's an industry inflection point. The free lunch is ending. The question is: who in your organization is tracking this, and what's the plan when the bill arrives?

#ai #github #copilot #softwareengineering #startup #enterprisearchitecture #costoptimization
GitHub Copilot Shifts to Usage-Based Billing
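To make the billing shift concrete, here is a minimal sketch of the cost model the post describes: per-token rates for input, output, and cached tokens, scaled by a per-model multiplier. All rates and multipliers below are invented for illustration; they are not GitHub's actual prices.

```python
# Hypothetical token-billing model. All rates and multipliers are made up
# for illustration; they are NOT GitHub's actual prices.
RATES_PER_1K = {"input": 0.003, "output": 0.015, "cached": 0.0003}  # $/1K tokens at 1x
MODEL_MULTIPLIER = {"standard": 1.0, "premium": 3.0}                # per-model multiplier

def request_cost(tokens: dict, model: str = "standard") -> float:
    """Dollar cost of one request given its token counts by kind."""
    mult = MODEL_MULTIPLIER[model]
    return sum(RATES_PER_1K[kind] * tokens.get(kind, 0) / 1000 * mult
               for kind in RATES_PER_1K)

# A heavy month: 500 requests averaging 8K input / 1K output / 20K cached tokens,
# all on a 3x "premium" model.
monthly = 500 * request_cost({"input": 8_000, "output": 1_000, "cached": 20_000},
                             model="premium")
print(f"${monthly:,.2f}")  # $67.50 under these made-up rates
```

Even with invented numbers, the structure shows why per-user costs become a wide distribution: the multiplier and the request volume compound, so a heavy user on a premium model can cost an order of magnitude more than a light user on the base tier.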
From Experimenting with Agents to Watching the Business Model Break 🧪💸

A few days ago, I was posting about the massive potential of autonomous agentic workflows. This week, GitHub confirmed what I'd started to suspect from the inside.

As of April 20, 2026, GitHub has paused new sign-ups for Copilot Pro, Pro+, and Student plans. But the reason isn't what most people assume. It's not a capacity crunch; it's a math problem. GitHub's VP of Product said it directly: agentic workflows are now routinely consuming more compute than users pay for in a month. A handful of requests can cost more than the entire plan price.

What I've seen from my own testing: moving from "asking a question" to "running an agent" isn't a linear jump in compute. It's a chain reaction, and tokens multiply fast. The flat-rate subscription was always a bet that most users wouldn't push hard. Agentic workflows broke that assumption.

The real signal here isn't the pause; it's what comes next. GitHub is reportedly moving toward token-based billing. The "all-you-can-eat" model isn't being tweaked; it's being retired. And when that happens, the developers who've thought about their agent's cost profile, not just its output, will have a serious advantage over those who haven't.

This, to me, is the start of the metered AI era. Compute efficiency is about to matter as much as capability.

Have you noticed your premium requests disappearing faster than expected? Are you ready to start optimizing for cost, not just speed?

#AI #AgenticAI #GitHubCopilot #LLMs #SoftwareEngineering #TechTrends2026 #ComputeEfficiency
GitHub just paused new signups for Copilot Pro and Pro+ — and the reason is more interesting than it sounds. 🤖

Agentic AI broke the pricing model. When GitHub first designed Copilot's usage limits, the assumption was that developers would use it for autocomplete and chat. Occasional, bounded interactions. That world doesn't exist anymore.

Today, agents run long-horizon tasks — spinning up parallel sessions, executing multi-step workflows, reviewing entire codebases. Some single requests now cost more than the entire monthly plan price.

So GitHub hit the brakes. New sign-ups are paused. Usage caps have tightened. Session limits now sit alongside weekly token limits. And Opus models have been quietly removed from Pro plans — they're still in Pro+, which now offers more than 5x the limits of Pro.

This isn't a failure — it's a reckoning the whole industry is heading toward. The economics of agentic AI are fundamentally different from conversational AI. When a tool stops waiting for human prompts and starts doing real work autonomously, consumption patterns change completely. Pricing models built for chat don't survive contact with agents.

If you're building AI strategy for your organisation — or advising on AI tooling — this is worth watching closely. Vendors are still figuring out how to price agentic workloads. That means pricing changes are coming across the board, not just at GitHub.

The question isn't whether your team's AI usage will grow. It's whether your budget and vendor agreements are ready for what that growth actually looks like.

How are you thinking about AI cost governance as agentic tools become standard in your teams?

Read the full announcement here: https://lnkd.in/g7TfhXHx

#AI #AgenticAI #GitHubCopilot #AIStrategy #EnterpriseAI #Leadership #TechLeadership #AIGovernance #DataPlatform #CostManagement
The AI honeymoon is OVER! GitHub Copilot and the rapid acceleration of AI "enshittification".

We all knew the golden era of highly subsidized, flat-fee AI access couldn't last. The compute costs driving today's LLMs are simply too immense. But the transition to a rigid, pay-for-what-you-use reality is accelerating at breakneck speed. Case in point: GitHub Copilot is gutting the value of its subscription plans. Here is the reality check for the software engineering world:

The Death of the "Premium Request": Historically, for $10/month, users got 300 "premium requests" regardless of token weight. Starting June 1st, that's gone. GitHub is shifting entirely to a token-based credit system. The flat-fee safety net is vanishing.

The Illusion of the Subscription Plan: Retaining a Copilot plan now just acts as prepaid credits. The math is staggering:
- Massive jumps: Top-tier models like Claude Opus are jumping from a 3x multiplier to an astonishing 27x. That is a ninefold increase in cost per prompt!
- Zero prepaid benefit: Copilot's unit price for tokens perfectly mirrors direct API costs.
- Paywalls: Previously free models are removed, and features like code review using GitHub Actions are now metered.

The Hard Reality for AI and Engineering: Stepping into this space as an AI specialist, it is clear we are moving from an era of subsidized experimentation into a phase of rigorous ROI justification. Teams building internal tooling using direct API access will have a massive cost advantage over those relying on SaaS wrappers.

The bottom line? Cut out the middleman. Take control of your own token usage and build workflows directly. The era of cheap AI is closing. Start budgeting your tokens accordingly.

GitHub Microsoft

#AI #SoftwareEngineering #GitHubCopilot #LLMs #TechTrends #DeveloperTools #TechNews #Coding #Technology #ArtificialIntelligence
I've been managing the cost of AI coding agents for my engineering teams for a while now, and I don't think the usage-based model is sustainable long term; it may be the beginning of the end. I understand that compute costs are exploding and a move like this one from GitHub is expected, but software engineers will start feeling the pressure from #Claude and #GithubCopilot like never before.

The shift I see coming:
🔹 Local is the new king: engineers are going to prioritize local setups (Ollama, etc.) to handle the work without the meter running.
🔹 Org-managed solutions: companies will look for self-hosted or private cloud solutions that can compete on performance without the unpredictable "token usage tax."
🔹 Efficiency as a skill: we need to go back to basics and optimize for prompt efficiency and token awareness.

The goal was always to stay "hands-on" with the tech, but now we have to stay just as hands-on with the infrastructure costs.

Read the GitHub Copilot announcement in the attached link.

#GenerativeAI #SoftwareEngineering #GitHubCopilot #EngineeringLeadership #AIInfrastructure

MLOps community AI Accelerator Institute AI Realized AI Makerspace
Is this the beginning of the end for the AI honeymoon phase?

GitHub just announced that Copilot is moving to usage-based billing. They are removing subscriptions entirely. This means no more flat monthly fees and no more "all you can eat" code generation.

As a Senior Engineer, I see this as a massive reality check for our industry. For the last two years, we have been operating in a bit of a bubble. The cost of compute was largely subsidized by VC burn and Big Tech market-share wars. That bubble just popped.

When tools move to usage-based models, two things happen immediately. First, CFOs start asking for ROI audits on every single seat. Second, developers start hesitating before they hit "Tab" to generate boilerplate. If you have to pay per token, hallucinations aren't just a nuisance. They are a literal line item on the budget.

Is AI still a productivity multiplier if the bill scales as fast as the output? The era of "AI at any cost" is over. Now we finally get to find out what this technology is actually worth when it has to stand on its own financial feet.

#SoftwareEngineering #GitHubCopilot #AI #TechTrends #CloudComputing
GitHub just announced changes in pricing for their Copilot model: users will be moved from a monthly allocation to per-usage pricing, and token costs for 'standard' models will explode (1x to 9x for Sonnet 4.6, 7.5x to 27x for Opus 4.7). You'll get $39 of tokens per month, which a medium-to-heavy user can probably burn through in a single day; after that, you pay. No more 0x models as fallback, either.

I think that's a sign of things to come: AI (LLM) usage has been enormously cheap while Anthropic and OpenAI lock in their customers. Your cheap AI chat has been subsidised by billions in investor money. It looks like they want to see some return on their investment now: expect AI usage to become far costlier.

Good news for the people concerned about the future of programming as a career: your salary costs will soon be lower than the LLM costs!

https://lnkd.in/gMi6pkGZ
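The "burn through $39 in a day" claim is easy to sanity-check with back-of-envelope arithmetic. The base per-token rate below is an assumption for illustration, not a published price:

```python
# Back-of-envelope burn rate for a $39 monthly credit. The base $/token rate
# is an assumption for illustration, not a published GitHub price.
BASE_COST_PER_MTOK = 3.00   # assumed blended $ per million tokens at a 1x multiplier
MONTHLY_CREDIT = 39.00      # the credit figure quoted in the post

def days_until_empty(mtok_per_day: float, multiplier: float) -> float:
    """How many days of usage the monthly credit covers."""
    daily_spend = mtok_per_day * BASE_COST_PER_MTOK * multiplier
    return MONTHLY_CREDIT / daily_spend

# A heavy agentic user pushing 1.5M tokens/day through a 9x model:
print(f"{days_until_empty(1.5, 9):.1f} days")  # roughly a single day, as the post estimates
```

The point is not the specific numbers but the shape: once the multiplier rises from 1x to 9x, a credit that would have lasted the month is gone in days.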
Starting April 24, 2026, GitHub is making a significant shift in how GitHub Copilot handles user data — and honestly, this is something every developer, startup, and CTO should pay attention to.

🔍 What's Changing?
👉 Default opt-in for AI training: your prompts, chat interactions, and even code snippets will now be used to train AI models — by default.
👉 Opt-out (but you must act): you can disable this in settings — but if you don't, your data becomes part of the training pipeline.
👉 Enterprise users are protected: Copilot Business & Enterprise users are excluded due to stricter data agreements.
👉 Private repo ≠ fully private: your code at rest is safe, but the moment you interact with Copilot, that context can be used for training.

💡 Why This Matters (Beyond Just GitHub)
This is not just a policy update — it's a signal of where AI is heading:
1️⃣ The new default is "data contribution." AI companies are shifting from "we train models for you" to "we train models with your data." If you're not actively managing settings, you're contributing.
2️⃣ Developers are now data providers. Every prompt you write becomes training data, a behavioral signal, and product-improvement input. This blurs the line between user and contributor.
3️⃣ Privacy is now a product decision. For startups and enterprises, this raises critical questions: Should teams use AI tools on sensitive code? Do you enforce opt-out policies org-wide? Are you choosing tools based on data governance, not just features?

🧠 My Take (AI-First Perspective)
At Sunfocus Solutions, we see this as a turning point. AI adoption is no longer just about productivity and automation. It's about data ownership, model influence, and trust boundaries. Companies that win in the next five years will not just use AI — they will control how their data shapes AI.

✅ What You Should Do Today
- Review your Copilot settings
- Define internal AI usage policies
- Educate your team about prompt sensitivity

#AI #GitHub #Copilot #DataPrivacy #AIFirst #CTO #Startups #SoftwareDevelopment
GitHub Copilot moves to usage-based billing June 1. This is not a Copilot story. It's a category story. Anthropic started it; Copilot is just a bigger shoe dropping.

My first essay comparing the AI bubble to being a founder in the DotCom bubble went up last week. I was finishing the second essay in Kuala Lumpur, and I posted it to LinkedIn from Chiang Mai last night (my time). Then this email landed. From the essay:

"This may not be the worst AI we will ever have. But there is a good chance it is the cheapest AI many people will see for years once subsidy of free capital departs and then scarcity and real operating costs settle into view."

The shift is already underway elsewhere. Anthropic has been pushing on this for months: the Max plan plus required additional usage purchases for different types of work that feel like they should be included. OpenAI is moving more workloads from ChatGPT into Codex, where pricing tracks actual consumption.

GitHub is the loudest move because Copilot's flat $10 plan was the worst aligned with actual usage of any major AI product. Heavy users were pulling hundreds of dollars of inference for that $10. Myself included, although I did pay $39/month and shifted certain workloads to it precisely because they were so cheap there. Even Microsoft, with the deepest pockets in the industry, isn't going to keep eating that gap inside a public company's earnings reports.

If usage-based billing is sustainable here, expect it everywhere within twelve months. If it isn't, we'll find out which AI products were running on subsidy economics rather than real ones.

The operator question doesn't change: which of your current AI workflows would still pencil out at API rates? That's the audit worth doing this quarter, not in 2027.

Full essay: https://lnkd.in/gewh_xcD

#AIStrategy #FractionalCTO #DotCom
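The "would it still pencil at API rates" audit the author suggests boils down to one comparison: hours saved times a loaded hourly rate versus tokens consumed times the API rate. A minimal sketch, with every figure a placeholder to be replaced by your own measurements:

```python
# Sketch of the "would it pencil at API rates" audit. Every figure below is a
# placeholder; substitute your own measured usage and rates.
def pencils_out(hours_saved: float, hourly_rate: float,
                mtok_per_month: float, api_rate_per_mtok: float):
    """Return (worth_it, monthly_margin) for one AI workflow."""
    value = hours_saved * hourly_rate          # what the time saved is worth
    cost = mtok_per_month * api_rate_per_mtok  # the raw inference bill
    return value > cost, value - cost

# Example: 10 hours saved at $90/hr vs. 50M tokens at an assumed $15/MTok.
ok, margin = pencils_out(hours_saved=10, hourly_rate=90,
                         mtok_per_month=50, api_rate_per_mtok=15)
print(ok, f"${margin:,.0f}")  # True $150: this workflow clears the bar, barely
```

Running this per workflow, rather than per seat, is what surfaces the loops worth killing: a workflow can save real time and still lose money once the subsidy disappears.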
GitHub Copilot went from "We can't take new users" to "Pay per use."

That's not a pricing update. That's a signal.

When a product is capacity-constrained, it means demand outran infrastructure. That's a good problem. But it also means the old pricing model — flat subscription, unlimited usage — stopped making sense. Because some users were using a little. And some were using everything.

Usage-based billing fixes that. The heavy users pay more. The light users pay less. The economics align with the value actually being delivered.

But here's the more interesting implication. When AI coding tools move to usage-based pricing, the conversation inside every engineering org shifts. It's no longer "do we have Copilot?" It's "how much are we actually using it — and is the output worth what we're paying?" That's a harder question. And a healthier one.

The teams that use it constantly and ship faster will justify the cost easily. The teams that had it running in the background, barely touched, on a flat subscription? Now they have to reckon with whether AI actually changed how they work, or just felt like it did.

Usage-based pricing doesn't just change what you pay. It forces honesty about what you got.

#GitHub #Copilot #AI #Engineering #FutureOfWork
The era of "all you can consume" AI for developers is officially ending. Woke up to the news yesterday that GitHub Copilot starting June 1, 2026... is moving to usage-based billing. While Claude Code, Cursor and other tools have also followed. It's a fundamental shift in how we build with agents. I posed about this last year that the subsidization of LLM costs was not going to last too long. Here we are now, the compute demands have become unsustainable. A single agentic loop can burn more tokens than a developer used in an entire month under the old flat-rate model. For copilot this is what it will look like from June: - "Unlimited" is replaced by credits: Your $10/mo plan now gives you exactly $10 in "GitHub AI Credits." (Personal observation, I consume $10 easily in a 6-8 hours of use with Sonnet on Copilot) - Token-based billing: You’re paying for every input, output, and cached token you consume. - Code reviews will take from that budget and will also consume github runner minutes. Double whammy there. Why does this matter? Because it forces a move toward what I call "Efficient Agency." The old model, a good agent was one that eventually found the answer, regardless of how many tokens it burned. The new eval benchmark for the future will be solving the problem with the absolute minimum number of tokens. However I dont think this is a bad thing. This shift will finally flush out the "wasteful" agents that just loop until they hit a context limit. It's going to reward engineering craftsmanship over "vibe coding" loops. P.S. At Optimal AI, we’ve been obsessing over this for a while. We use smart model routing and multi-model techniques to keep quality high while keeping costs drastically lower. This is how we can continue to provide unlimited-style value in a usage-based world. #GitHubCopilot #AIEfficiency #EngineeringLeadership #LLMOps #OptimalAI
Totally agree. Developers must not shed the developer skin and skills acquired through long effort. A realization period is coming up quickly, with increasing opex costs and ROI scrutiny.