Your code is training GitHub's AI models right now. Unless you opted out.

Today I checked my GitHub settings and found this:

"Allow GitHub to collect and use my Inputs, Outputs, and associated context to train and improve AI models."

Enabled. By default.

This means every prompt you write in Copilot, every code suggestion you accept, every conversation — feeds into model training.

Here's what most people miss:

→ This is separate from Copilot functioning. Disabling it doesn't break anything.
→ It only controls whether your data trains future models.
→ On Enterprise/Business plans, it's off by default. On personal plans, it's on.

How to check:
Settings → Copilot → "Allow GitHub to use my data for AI model training" → Disabled

Takes 10 seconds.

If you work with proprietary code, client projects, or anything sensitive, you probably want this off.

Have you checked yours?

#GitHub #Copilot #Privacy #AI #SoftwareEngineering
Sergei Stepanenko’s Post
More Relevant Posts
🚨 GitHub Copilot data shift just happened – and it's bigger than you think 🤯

And most people will ignore it.

Because this isn't just about AI training. This is about who owns your work. Not just helping. Training on you. Developers writing code… now becoming the dataset.

The update? Copilot interaction data may be used to train models unless you opt out.

Let me say that again: Your prompts. Your patterns. Your thinking. → Feedback loop for better AI.

Sounds great… until you ask: Where's the line between user and training data?

We're entering a new phase:
● Tools that learn from you in real time
● Productivity vs. privacy trade-offs
● Silent defaults shaping the future

I've found myself thinking: Convenience is winning. But at what cost?

Because what used to be "You use the tool" is now "The tool uses you too."

And this trend isn't slowing down. Every AI product is racing toward: more data → better models → more adoption. Cycle repeats. Faster.

So the real question is: Are we building tools… or feeding them?

Too early or too late? 👀

#ai #github #copilot #technology #developers #futureofwork #machinelearning
I did the thing. You know the one.

I got a new laptop, set up my environment, and decided to let GitHub Copilot (in agent mode) loose on my local /dev directory to sync and push my recent "AI experiments" to a new repo.

It was fast. It was efficient. It also pushed a hard-coded OpenAI API key straight to a public repo. 🤦♂️

The key is revoked, the damage is zero, but the "Why?" is what's interesting. As I was cleaning up the mess, I realized that who we blame for this says a lot about how we view the future of engineering.

Camp A: The "AI Skeptics" 🚩
The Take: "This is exactly why AI can't be trusted." They'll argue that a tool capable of scanning a whole directory should have a "security-first" alignment. If it's smart enough to write the code, it should be smart enough to recognize an sk- prefix and stop the push. To them, this isn't a user error; it's a fundamental failure of AI safety.

Camp B: The "AI Optimists" 🚀
The Take: "Skill issue. The human is the pilot." They'll say it's 100% my fault. I put the key there. I gave the command. AI is an accelerator, not a babysitter. If you give a power tool to someone and they cut their finger off, you don't blame the saw — you blame the operator for not wearing gloves.

The Real Question: As we move from "AI as a Chatbot" to "AI as an Agent" that takes actions on our behalf, where does the buck stop?

Is the AI a Collaborator (which implies shared responsibility for "noticing" mistakes)? Or is it just a High-Speed Terminal (where the user is responsible for every single bit and byte)?

I'm curious — if this happened on a team project, who are you looking at? The dev who left the key, or the "Agent" that didn't have the "common sense" to redact it? 🎤

#GenerativeAI #GitHubCopilot #AppSec #SoftwareEngineering #AIWorkflows #DevLife
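Whichever camp you're in, a cheap guardrail would have caught my mistake. Here's a minimal sketch of a pre-push secret check — the patterns and the git invocation are my own illustrative assumptions, not a complete scanner or anything the agent actually ran:

```python
import re
import subprocess
import sys

# Patterns for common credential formats. Illustrative, not exhaustive;
# the sk- prefix is exactly what bit me.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9_-]{20,}"),   # OpenAI-style API keys
    re.compile(r"ghp_[A-Za-z0-9]{36}"),     # GitHub personal access tokens
]

def find_secrets(text):
    """Return all substrings that look like hard-coded credentials."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

def main():
    # Diff everything about to be pushed against the tracked upstream branch.
    diff = subprocess.run(
        ["git", "diff", "@{upstream}...HEAD"],
        capture_output=True, text=True,
    ).stdout
    hits = find_secrets(diff)
    if hits:
        print("Refusing to push: possible secrets found:")
        for hit in hits:
            print("  " + hit[:12] + "…")  # never echo the full key
        sys.exit(1)
```

Wired up as a `.git/hooks/pre-push` hook calling `main()`, this would block the push regardless of who (human or agent) staged the key.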
Most AI tools that "analyze" GitHub repos sound impressive… until you realize they hallucinate confidently.

They:
– invent features
– give generic summaries
– never show where the answer came from

So I built something stricter.

👉 Give it any repo
👉 Ask a question
👉 It answers ONLY with real file references

No guessing. No fluff.

Example:
Q: where is the entry point
A: [src/index.tsx] This file contains the application entry point.

If the repo doesn't contain the answer:
👉 "I could not find this in the repository."

No file reference → no answer.

What surprised me while building this:
– Prompt design mattered more than the model
– Retrieval quality > model size
– Without grounding, AI is just guessing fast

What's different here isn't just RAG — it's enforcement:
• Every answer must cite files
• No inference ("implied", "likely", etc.)
• No mixed or partial answers
• Hallucinations are blocked, not reduced

Built with:
– Node.js
– LangChain
– Ollama (local LLM + embeddings)
– GitHub API

This was a small build, but it changed how I think about AI systems:
👉 Accuracy > Intelligence

Repo: https://lnkd.in/d_rR2Eei

Curious how others are handling hallucination in dev tools 👇

#AI #RAG #MachineLearning #OpenSource #Developers #GitHub #SoftwareEngineering
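The "no file reference → no answer" rule is the interesting part, and it can be applied as a post-generation validator independent of the model. A minimal sketch of that idea — the function name, citation format, and hedge-word list are my assumptions for illustration, not the repo's actual code:

```python
import re

CITATION = re.compile(r"\[([^\]]+\.[A-Za-z0-9]+)\]")  # e.g. [src/index.tsx]
HEDGES = ("implied", "likely", "probably", "presumably")
REFUSAL = "I could not find this in the repository."

def enforce_grounding(answer, repo_files):
    """Reject any answer that is not grounded in real repository files."""
    cited = CITATION.findall(answer)
    # Rule 1: every answer must cite at least one file.
    if not cited:
        return REFUSAL
    # Rule 2: every cited file must actually exist in the repo.
    if any(path not in repo_files for path in cited):
        return REFUSAL
    # Rule 3: no inference language allowed.
    lowered = answer.lower()
    if any(word in lowered for word in HEDGES):
        return REFUSAL
    return answer
```

Because the check runs on the model's output rather than inside the prompt, a hallucinated citation to a nonexistent file is converted to a refusal instead of reaching the user.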
With GitHub Copilot now enforcing strict limits on all of their plans, Anthropic switching all of their Enterprise customers from flat to token-based pricing, and OpenAI doing the same, now you know why learning how to code is an essential skill that works without the blessing of AI companies.

Don't be stupid. Learn to code!
🧠 Tired of forgetting Kubernetes commands? What if you could just speak to your cluster in plain English?

I built a Kubernetes AI Agent that lets you CRUD any K8s resource — Pods, Deployments, Services, Secrets, RBAC, HPA, Ingress, and more — just by describing what you want.

⚙️ How it works:
• Describe your intent in plain English
• The agent builds the kubectl command and YAML for you
• Reviews it with you BEFORE running anything
• You choose: approve ✅, refine ✏️, skip ⏭️, or quit ❌
• Not happy with the command? Give feedback and it regenerates just that step

🔒 Important: No kubeconfig is shared, no remote clusters are touched. Built for local clusters only — Minikube and Kind — making it a safe and practical learning tool for anyone getting started with Kubernetes.

🔗 GitHub: https://lnkd.in/gCNXcGji

#Kubernetes #DevOps #AI #SRE #CloudNative #Minikube #Kind #kubectl
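The review-before-run loop is the safety-critical piece of any agent like this. A stripped-down sketch of how such a loop can be structured — the generation step is stubbed out and the prompt strings are my assumptions, not the linked project's code:

```python
def review_loop(intent, generate, execute, ask=input):
    """Generate a kubectl command for `intent`, but only run it after
    explicit approval; 'r' regenerates with feedback, 's' skips, 'q' quits."""
    feedback = None
    while True:
        # `generate` would be an LLM call in the real agent.
        command = generate(intent, feedback)
        choice = ask(f"Run `{command}`? [a]pprove / [r]efine / [s]kip / [q]uit: ")
        if choice == "a":
            return execute(command)      # only path that touches the cluster
        if choice == "r":
            feedback = ask("What should change? ")  # regenerate just this step
        elif choice == "s":
            return None
        elif choice == "q":
            raise SystemExit(0)
```

The design point: `execute` is reachable only through an explicit "a", so even a badly hallucinated command never runs without a human in the loop.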
Microsoft changed GitHub Copilot terms to use your chat history for AI training. Opt-out only. No way to say "use this but not that." 🔓 Data unions flip this: you control what's used, researchers get anonymized access. Join us in building it. https://lnkd.in/ecnY-eQH
GitHub Copilot is moving to usage-based billing starting June 1, 2026.

As Copilot moves from simple autocomplete to complex "AI agents," the way you pay is changing to reflect how much power the AI actually uses.

The Key Changes:
• Credits: Instead of "requests," usage will be measured in GitHub AI Credits.
• Tokens: Costs will be calculated based on tokens (like words or characters) processed by the AI.
• Same Base Price: Subscription prices for Pro, Business, and Enterprise plans are not changing.
• Monthly Allotment: You still get a set amount of credits included each month. If you need more, you can buy extra.

Next Steps: In early May, you can check your "Billing Overview" on GitHub to see a preview of how your current usage fits into the new system.

Read the full announcement on the GitHub Blog: https://lnkd.in/gH2t_GCz

#GitHub #Copilot #AI #TechNews
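To make the token→credit arithmetic concrete, here is a back-of-the-envelope model. The rates and multipliers below are invented placeholders — GitHub has not published these numbers in the announcement summarized here — but the shape of the calculation is what usage-based billing implies:

```python
def estimate_credits(input_tokens, output_tokens,
                     credits_per_1k_tokens=0.25,   # HYPOTHETICAL rate
                     model_multiplier=1.0):        # pricier models cost more
    """Rough token-to-credit arithmetic under a usage-based scheme."""
    total_tokens = input_tokens + output_tokens
    return total_tokens / 1000 * credits_per_1k_tokens * model_multiplier

# A long agent session (200k tokens in, 50k out) on a 2x-priced model
# consumes hundreds of times the credits of one small autocomplete request.
```

The takeaway is independent of the exact rates: agent sessions that chew through hundreds of thousands of tokens will dominate a monthly credit allotment in a way individual autocomplete requests never did.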
You probably think GitHub Copilot is just fancy autocomplete...

But here's what most people miss: AI Skills aren't simple automation. They're fundamentally different.

While batch files and traditional automation follow rigid, pre-programmed rules, AI Skills analyze your *entire codebase*. They detect custom base classes, identify architectural patterns, understand your minimal APIs, and recognize your unique conventions. Then they trigger intelligent actions based on natural language — not scripts.

The practical implication? You're not just saving keystrokes. You're getting a coding partner that understands *your* code, not generic code. It adapts to your team's patterns, your project's architecture, your specific way of building things.

This changes everything for developers and technical leaders. It's the difference between a tool that helps you write code faster and a tool that actually understands what you're trying to build.

So here's my question: Are you leveraging AI Skills to work *with* your codebase's unique patterns, or are you still treating them like advanced autocomplete?

#AI #GitHub #Development #CodingTools
👨💻 𝐁𝐚𝐜𝐤 𝐭𝐨 𝐭𝐡𝐞 𝐜𝐨𝐫𝐞 𝐨𝐟 𝐰𝐡𝐚𝐭 𝐈 𝐥𝐢𝐤𝐞 𝐦𝐨𝐬𝐭! 👨💻

In the AI era, you no longer write as many lines of code yourself. But understanding what the agents are writing for you is still important. That's the difference between writing code for production and vibe coding.

So, YES, it still makes sense to renew certifications related to dev topics!

#microsoft #microsoftazure #dev #azure #AI #neverstoplearning #microsoftswitzerland #irlyma
I didn't expect to write about this so early, but the AI buffet seems to be coming to an end.

The moment that made it click for me was an email from GitHub about Copilot. Annual plans are being retired, premium request units are being replaced by AI credits, and future usage will be tied more directly to tokens, model cost, and in some cases GitHub Actions minutes. A few days earlier, GitHub had already paused new individual sign-ups, tightened usage limits, and reduced model availability.

The explanation is fairly straightforward. GitHub Copilot is no longer just autocomplete in an editor. It is becoming an agentic platform that can run longer sessions, work across repositories, and consume a lot more compute than the original subscription model was designed to absorb. GitHub even noted that a handful of requests can now cost more than the plan price.

This is not isolated to GitHub. OpenAI has also signaled that unlimited ChatGPT plans may not survive in their current form. Anthropic has added weekly limits to some Claude plans. The pattern is becoming clearer: early AI subscriptions helped drive adoption, but they also trained users to expect abundant usage at a predictable price.

This has me thinking more seriously about local AI models, but also about the companies that have already built habits, workflows, and productivity assumptions around affordable AI access. If those costs rise materially, the question becomes less about whether AI is useful and more about how organizations control dependency, usage, and ROI.

For teams already relying heavily on these tools, what is the play when the pricing model changes?