You've been training GitHub's AI for free. Your code. Your prompts. Your late nights. All of it.

On April 24, 2026, GitHub's new Copilot policy goes live. Every developer on Free, Pro, and Pro+ gets opted in automatically. No warning. No consent. No payment.

Here's exactly what GitHub is collecting:
→ Every prompt you type into Copilot
→ Every suggestion you accept or modify
→ Your file names and folder structure
→ Code context around your cursor
→ Your comments and documentation
→ How you navigate between files
→ Every Copilot chat conversation you've had

The worst part? It's opt-out, not opt-in. They're betting you won't notice until it's too late.

How to stop it before April 24:
→ Go to GitHub → Settings → Copilot
→ Find "Allow GitHub to use my data for AI model training"
→ Set it to Disabled
→ Do this for every GitHub account you own

Copilot Business and Enterprise users: you're protected. Free, Pro, Pro+ users: you are the product.

Tag a developer who needs to see this. #github #developers #webdevelopment
GitHub's Copilot Policy: Opt-out Before April 24
More Relevant Posts
-
GitHub Copilot just announced usage limits. I wrote about the solution two months ago.

This week GitHub paused new sign-ups and tightened usage limits for Copilot, citing agentic workflows consuming far more resources than the original pricing model could handle.

Here's the thing: this is exactly the problem I was trying to solve when I wrote my blog post back in March. I noticed I was defaulting to Opus models for everything (file listings, simple queries, complex architecture decisions) with all of it treated the same. So I ran an experiment: I had all 17 GitHub Copilot CLI models evaluate each other anonymously using the LLM Council technique, to figure out which model is actually right for which task.

The conclusion was straightforward:
- Opus = deep reasoning, architecture, the expensive senior engineer
- Sonnet = solid default for most coding tasks
- Haiku / Mini = fast execution for simple, well-defined work
- Codex = precision coding and terminal workflows

Match model tier to task complexity. Use fast/cheap models for 80% of tasks, escalate for the 20% that actually need it. This can be fully automated with agent instructions.

GitHub's new limits are essentially forcing that discipline. But you don't have to wait to get hit by a limit; you can set it up intentionally.

GitHub's announcement: https://lnkd.in/dk3RZQQb
My full writeup with the experiment, results, and model selection instructions: https://lnkd.in/dysdTKSJ

Have you thought about which model you're defaulting to, and whether it's actually the right one for the job?

#GitHubCopilot #AI #DeveloperTools #Microsoft
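The tier-matching advice above can be sketched as a tiny router. This is a hypothetical illustration only: the keyword heuristic and the bare model names are assumptions for the sketch, not part of Copilot's CLI or API.

```python
# Hypothetical sketch: route a task description to a model tier by
# complexity, following the tiering above. Keywords and model names
# are illustrative assumptions.

TIERS = {
    "simple": "haiku",    # fast execution for well-defined work
    "default": "sonnet",  # solid default for most coding tasks
    "deep": "opus",       # deep reasoning and architecture
}

def pick_model(task: str) -> str:
    """Crude keyword heuristic: escalate only when the task needs it."""
    text = task.lower()
    if any(k in text for k in ("architecture", "design review", "migration plan")):
        return TIERS["deep"]
    if any(k in text for k in ("list files", "rename", "format")):
        return TIERS["simple"]
    return TIERS["default"]

print(pick_model("list files in src/"))             # -> haiku
print(pick_model("plan the service architecture"))  # -> opus
```

In practice the same routing can live in agent instructions rather than code, but the 80/20 split falls out the same way: cheap tier by default, explicit escalation for the rest.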
-
GitHub Copilot is getting greedy when we needed it the most. Are we seeing the end of flat-rate AI? 🛑

If you have been trying to sign up for GitHub Copilot Pro or Pro+ this week, you probably noticed the "Unavailable" badge. It's not a glitch. GitHub has officially suspended all new individual subscriptions. And the reasons why expose a massive crack in the AI infrastructure world.

Here is what's happening behind the scenes and why it matters to every developer:

GitHub admits that "agentic workflows" have completely broken the economics of their service. Developers aren't just asking for simple auto-completions anymore. We are spinning up parallel, long-running agents that churn through massive contexts. According to GitHub's VP of Product, a handful of these requests can now incur infrastructure costs that exceed a user's entire monthly subscription fee.

To stop the bleeding, GitHub is enforcing aggressive new bottlenecks on existing users. They have introduced tight session and weekly token limits. This is separate from your "premium requests" allowance. You could have hundreds of requests left, but if your token usage hits the weekly cap (which can happen rapidly when pasting large logs or using agent modes), you will be locked out and receive a user_weekly_rate_limited error.

The Shift to Token Billing 💰

The days of unlimited AI assistance for $10/month are ending. Leaked internal documents suggest Microsoft is preparing to move all Copilot subscribers to strict "token-based billing" as early as June 2026. Instead of flat rates, you'll likely pay a base subscription ($19 or $39) and receive a pooled allotment of tokens to spend.

They are also removing access to the most powerful (and expensive) models. Anthropic's Claude Opus 4.5 and 4.6 are reportedly being stripped from Pro+ subscriptions entirely.
When we need these tools the most—to handle complex, agent-driven development—the providers are slamming the brakes because they underestimated the compute costs. #GitHubCopilot #SoftwareEngineering #TechNews #ArtificialIntelligence #Coding #DeveloperTools #Microsoft
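Because a weekly cap locks you out entirely rather than throttling you, the defensive move is to meter token usage locally and stop before the limit. A minimal sketch: the user_weekly_rate_limited error name comes from the post above, but the cap value and the assumption that you can count tokens per request are invented for illustration.

```python
# Hypothetical sketch: track local token usage against a weekly cap so a
# pipeline can refuse a request *before* it would trigger a
# user_weekly_rate_limited lockout. Cap and counts are illustrative.

class WeeklyTokenBudget:
    def __init__(self, cap: int):
        self.cap = cap
        self.used = 0

    def spend(self, tokens: int) -> bool:
        """Record usage; return False if the request would exceed the cap."""
        if self.used + tokens > self.cap:
            return False
        self.used += tokens
        return True

budget = WeeklyTokenBudget(cap=300_000)
print(budget.spend(250_000))   # normal usage fits -> True
print(budget.spend(100_000))   # pasting a huge log would blow the cap -> False
```

The point is the pattern, not the numbers: a request that fails the local check can fall back to a cheaper model or a trimmed context instead of burning out the week's allowance.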
-
🚨 GitHub Copilot Pricing Is Changing

GitHub recently announced that starting June 1, 2026, all Copilot plans will move to a usage-based billing model, a shift that reflects how much Copilot has grown beyond a simple coding assistant.

Why they are making this change:
Copilot today is very different from what it was a year ago. It has evolved into a more agent-like AI, capable of running long, multi-step coding sessions and working across entire repositories. The issue with the current model is that a quick question and a multi-hour coding session could cost the same, which isn't sustainable given the increasing compute demands. GitHub has been absorbing much of this cost, but with growing usage, that approach no longer scales. Moving to usage-based billing helps align pricing with actual usage and ensures long-term reliability.

What's changing:
The biggest shift is moving away from Premium Request Units (PRUs) to GitHub AI Credits. These credits are calculated based on token usage, including:
- Input tokens
- Output tokens
- Cached tokens

At the same time, some important things remain unchanged:
- Copilot Pro – $10/month
- Copilot Pro+ – $39/month
- Copilot Business – $19/user/month
- Copilot Enterprise – $39/user/month

And importantly, code completions and Next Edit suggestions will still be included without consuming credits. To help users prepare, GitHub is also rolling out a preview billing dashboard in early May, so individuals and teams can estimate costs before the transition.
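Since credits are derived from input, output, and cached token counts, a rough cost estimate is just a weighted sum over those three buckets. The per-1k-token rates and the cached-token discount below are invented placeholders to show the shape of the calculation; GitHub's preview billing dashboard is the authoritative source for real numbers.

```python
# Hypothetical sketch: estimate credit consumption from token counts.
# Rates are assumed placeholders (credits per 1k tokens), not GitHub's.

RATES = {"input": 1.0, "output": 4.0, "cached": 0.25}

def estimate_credits(input_toks: int, output_toks: int, cached_toks: int) -> float:
    """Weighted sum of the three token buckets, in credits."""
    return (input_toks * RATES["input"]
            + output_toks * RATES["output"]
            + cached_toks * RATES["cached"]) / 1000

print(estimate_credits(10_000, 2_000, 50_000))  # -> 30.5
```

Whatever the real rates turn out to be, the structure suggests an obvious lever: cached tokens are cheap, so prompts that reuse context should cost far less than prompts that resend it.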
⚠️ Key differences

A few changes will directly impact how developers use Copilot:
- The fallback system will be removed, meaning usage depends strictly on available credits or admin budgets
- Copilot code review will now consume both AI Credits and GitHub Actions minutes
- Usage tracking becomes more detailed and tied directly to how intensively you use AI features

Overall, this feels like a move toward more transparent and scalable AI pricing, but it also means developers and teams will need to be more conscious about usage and cost management.

#GitHub #Copilot #AI #SoftwareDevelopment #Developers #TechNews #ArtificialIntelligence #Programming
-
Developers: you may want to check your GitHub settings before April 24.

GitHub is updating its policy so interactions with personal repositories may be used for AI model training. If you're using personal repos and don't want that data included, you'll need to opt out manually. Copilot Business and Enterprise users are not affected.

Official announcement: https://lnkd.in/eMXCDsuF
To opt out: Profile → Settings → Copilot → Features → Privacy

Are you opting out, or are you fine with your repos being used for training? On one hand, they are public; any unscrupulous actor could already be using them. If you're already using GitHub Coding Agent, it may improve your experience. An option to differentiate training on public vs private repos might make the decision easier.

The blog announcement linked above includes this statement under what will not be used for training, irrespective of your choice:

*Content from your issues, discussions, or private repositories at rest. We use the phrase "at rest" deliberately because Copilot does process code from private repositories when you are actively using Copilot. This interaction data is required to run the service and could be used for model training unless you opt out.*

Interaction data is defined as "specifically inputs, outputs, code snippets, and associated context." That sounds like it includes commits.

#github #ai #developer #dataprivacy #softwaredevelopment
-
GitHub Copilot just changed how it charges, and I think it's worth paying attention to.

Starting June 1, every Copilot plan moves to usage-based billing through GitHub AI Credits. The plan prices stay the same on paper, but your actual bill won't. Token usage (input, output, cached) all counts now. Heavy users are going to notice this quickly, and teams will need to start thinking about spend controls in a way they never had to before.

What's quietly buried in GitHub's blog post is the honest admission: Copilot got expensive to run, and GitHub was absorbing that cost. That arrangement is done. The fallback models are gone. The premium request buffer is gone. Even code review now pulls from both AI Credits and GitHub Actions minutes.

This isn't a minor update. The economics of AI development tools are genuinely shifting. For a long time, AI tooling felt almost free because vendors were subsidizing access to win market share. That made sense as a growth strategy. It doesn't scale forever, and we're watching it unwind in real time. Copilot is just the most visible example right now.

The question for developers and engineering teams isn't really whether to use AI anymore. It's whether you're using it in a way that's sustainable when the bill actually reflects the cost. Worth thinking about before June.
-
This is a great callout. The economics of AI tools are clearly shifting, and it’s going to force more discipline in how teams use them. It also feels like a return to an old reality: where you could only optimize for two of three - speed, quality, or cost. When AI felt “free,” teams could stretch across all three. That’s likely changing. This is why quality should be the focus. If AI is just speeding up coding, costs will add up quickly and it may be better to operate without it since many of the bottlenecks elsewhere in the process will still be there. If it’s improving quality, reducing rework, and enabling engineers to operate more end-to-end, you still come out ahead - and avoid paying for the same work twice. Teams will have to be more intentional about how they use AI. That’s where it starts to separate teams that are actually getting leverage from those that aren’t.
-
GitHub Copilot just made a quiet but significant move, and it reveals a structural problem worth understanding.

This week, GitHub removed Claude Opus from its Pro plan entirely. Opus 4.7 is now Pro+ only, at a 7.5x premium request multiplier ("promotional" until April 30). For context: the standard Opus multiplier was 3x. That's a 150% price jump on their most capable model, with limited advance notice. New Pro and Pro+ sign-ups are also temporarily paused as of April 20.

This isn't a pricing complaint. It's a trust and architecture problem.

Copilot has evolved into a multi-model marketplace, with GPT, Claude, Gemini, and Grok all available. That's genuinely useful. But it also means Copilot doesn't control the economics of the models it resells. When Anthropic prices Opus at frontier rates, Copilot absorbs that cost and passes it on: unpredictably, mid-cycle, with multipliers that change without warning. Developers in GitHub's own community forums documented costs jumping from 1x to 3x overnight on Opus 4.5 with no prior communication. One team wrote: "seeing costs jump from 1x to 3x without any prior communication creates a difficult situation."

This is the core tension: Copilot's strength is distribution (IDE integration, GitHub context, enterprise rollout). Its weakness is that it doesn't own the intelligence it's built on. For boilerplate and autocomplete, this is fine. But for agentic workflows, deep debugging, and architecture decisions, the tasks where model capability actually changes outcomes, unpredictable access to frontier models is a reliability problem, not just a pricing complaint.

The teams I've seen migrate to Claude Code or direct API access aren't doing it for raw capability. They're doing it for predictability and control. As AI becomes load-bearing in engineering workflows, those two things will matter just as much as benchmark scores.
-
GitHub has paused new sign-ups for Copilot Pro, Pro+, and Student, tightened usage limits on individual plans, and pulled Opus access from Pro. The reason is blunt and familiar to anyone who has run shared platforms at scale: agentic workflows are chewing through far more compute than the original plan model was built for.

This matters because it is another reminder that AI pricing, quotas, and service design are still catching up to real-world usage. Once users start running long sessions, parallel tasks, and subagents, the nice tidy commercial model meets production reality. Usually at 2 a.m. and with a status page open.

The practical takeaway is simple. If your teams are leaning on AI coding tools, treat them like constrained infrastructure, not magic. Put guardrails around model choice, parallel execution, and cost visibility. Build fallback workflows for when quotas hit. And do not base delivery promises on "the assistant will handle it" unless you know exactly where the limits are.

The interesting part is not that GitHub changed a plan. It is that we are now seeing the operational bill for agentic AI arrive in public.

#GitHub #Copilot #AICodeAssistants #DevOps #PlatformEngineering #CloudCost #SecurityOperations
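The guardrail advice above can be made concrete as a small policy check in a team's tooling. Every field and threshold below is an invented example to illustrate the idea, not any GitHub or Copilot feature.

```python
# Hypothetical sketch: a pre-flight policy check for agent runs, covering
# the three guardrails named above: model choice, parallel execution, and
# cost visibility. All values are illustrative assumptions.

POLICY = {
    "allowed_models": {"sonnet", "haiku"},  # frontier models need approval
    "max_parallel_agents": 2,
    "monthly_credit_budget": 500,
}

def check_request(model: str, parallel_agents: int, credits_spent: float) -> list[str]:
    """Return the policy violations for a proposed agent run, if any."""
    violations = []
    if model not in POLICY["allowed_models"]:
        violations.append(f"model {model!r} requires approval")
    if parallel_agents > POLICY["max_parallel_agents"]:
        violations.append("too many parallel agents")
    if credits_spent >= POLICY["monthly_credit_budget"]:
        violations.append("credit budget exhausted; use fallback workflow")
    return violations

print(check_request("opus", 4, 520))  # every guardrail trips
```

An empty list means the run proceeds; a non-empty one routes to a fallback workflow or a human approval, which is exactly the "know where the limits are" discipline the post argues for.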
-
Found a great resource for those on a GitHub Copilot journey. No matter what UI you put on top, gravity always seems to pull back to the CLI, the "Command Line": the UI that's stood the test of time :)

- Quick Start
- First Steps
- Context and Conversations
- Development Workflows
- Create Specialized AI Assistants
- Automate Repetitive Tasks
- Connect to GitHub, Databases & APIs
- Putting It All Together

https://lnkd.in/ec4hj5Bg

#GitHub
Source link: https://github.blog/news-insights/company-news/updates-to-github-copilot-interaction-data-usage-policy/