🚨 GitHub Copilot data shift just happened – and it's bigger than you think 🤯

And most people will ignore it. Because this isn't just about AI training. This is about who owns your work.

Not just helping. Training on you. Developers writing code… now becoming the dataset.

The update? Copilot interaction data may be used to train models unless you opt out.

Let me say that again:
Your prompts. Your patterns. Your thinking.
→ Feedback loop for better AI.

Sounds great… until you ask: where's the line between user and training data?

We're entering a new phase:
● Tools that learn from you in real time
● Productivity vs privacy trade-offs
● Silent defaults shaping the future

I've found myself thinking: convenience is winning. But at what cost?

Because what used to be "you use the tool" is now "the tool uses you too."

And this trend isn't slowing down. Every AI product is racing toward:
More data → better models → more adoption. Cycle repeats. Faster.

So the real question is: are we building tools… or feeding them?

Too early or too late? 👀

#ai #github #copilot #technology #developers #futureofwork #machinelearning
Prashant Rao's Post
More Relevant Posts
The era of unlimited AI coding tools is quietly coming to an end. 🚨

Both Claude Code and GitHub Copilot hit major turbulence this week, and the reasons tell us a lot about where AI is headed.

𝗪𝗵𝗮𝘁 𝗵𝗮𝗽𝗽𝗲𝗻𝗲𝗱:
• GitHub Copilot froze new signups for Pro, Pro+, and Student plans
• Anthropic briefly pulled Claude Code from its $20/month Pro tier
• Usage limits tightened, and premium models were quietly removed from lower plans

𝗧𝗵𝗲 𝗿𝗲𝗮𝗹 𝗽𝗿𝗼𝗯𝗹𝗲𝗺? Agentic AI. Developers aren't just asking for code snippets anymore; they're running autonomous agents that execute long, complex workflows for hours. A handful of user sessions can now cost more than an entire monthly subscription. Flat-rate pricing was built for a world that no longer exists.

𝗪𝗵𝗮𝘁'𝘀 𝗰𝗼𝗺𝗶𝗻𝗴 𝗻𝗲𝘅𝘁:
• Token-based billing (Microsoft has already planned this for June)
• Tiered access to powerful models based on what you pay
• Potential removal of agentic features from entry-level plans
• Pricing models that reflect actual compute costs

The uncomfortable truth: the tools developers have come to rely on daily are about to get more expensive, or more restricted. The companies that adapt their workflows now will be far better positioned than those caught off guard when the pricing hammer drops.

Are you rethinking your AI tooling strategy? 👇

#AI #DeveloperTools #ClaudeCode #GitHubCopilot #AgenticAI #SoftwareDevelopment
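To see why flat-rate pricing breaks under agentic workloads, it helps to run the arithmetic. A back-of-the-envelope sketch, where every price and token count is an assumption chosen for illustration, not a real vendor rate:

```python
# Illustrative comparison of flat-rate vs token-based pricing for
# agentic coding sessions. All numbers below are assumptions for the
# sake of the sketch, not actual GitHub/Anthropic/Microsoft prices.

FLAT_RATE_MONTHLY = 20.00      # a hypothetical $20/month Pro-style plan
PRICE_PER_1K_TOKENS = 0.015    # a hypothetical blended token price

def session_cost(tokens_used: int) -> float:
    """Cost of one agent session under token-based billing."""
    return tokens_used / 1000 * PRICE_PER_1K_TOKENS

# A long autonomous agent run can burn millions of tokens.
heavy_run = session_cost(5_000_000)
print(f"One heavy agent session: ${heavy_run:.2f}")
print(f"Flat monthly plan:       ${FLAT_RATE_MONTHLY:.2f}")

# How many 1M-token sessions until the flat plan loses money?
breakeven = FLAT_RATE_MONTHLY / session_cost(1_000_000)
print(f"Break-even sessions per month: {breakeven:.1f}")
```

Under these made-up rates, a single long agent run already costs several times the monthly subscription, which is exactly the mismatch the post describes.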
I did the thing. You know the one.

I got a new laptop, set up my environment, and decided to let GitHub Copilot (in agent mode) loose on my local /dev directory to sync and push my recent "AI experiments" to a new repo.

It was fast. It was efficient. It also pushed a hard-coded OpenAI API key straight to a public repo. 🤦♂️

The key is revoked, the damage is zero, but the "Why?" is what's interesting. As I was cleaning up the mess, I realized that who we blame for this says a lot about how we view the future of engineering.

Camp A: The "AI Skeptics" 🚩
The Take: "This is exactly why AI can't be trusted." They'll argue that a tool capable of scanning a whole directory should have a security-first alignment. If it's smart enough to write the code, it should be smart enough to recognize an sk- prefix and stop the push. To them, this isn't a user error; it's a fundamental failure of AI safety.

Camp B: The "AI Optimists" 🚀
The Take: "Skill issue. The human is the pilot." They'll say it's 100% my fault. I put the key there. I gave the command. AI is an accelerator, not a babysitter. If you hand someone a power tool and they cut their finger off, you don't blame the saw; you blame the operator for not wearing gloves.

The Real Question: as we move from "AI as a chatbot" to "AI as an agent" that takes actions on our behalf, where does the buck stop? Is the AI a collaborator (which implies shared responsibility for noticing mistakes)? Or is it just a high-speed terminal (where the user is responsible for every single bit and byte)?

I'm curious: if this happened on a team project, who are you looking at? The dev who left the key, or the "Agent" that didn't have the "common sense" to redact it? 🎤

#GenerativeAI #GitHubCopilot #AppSec #SoftwareEngineering #AIWorkflows #DevLife
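Whichever camp you're in, a cheap guardrail already exists: scan for key-shaped strings before anything leaves the machine. A minimal sketch of the idea (the regex here only catches sk--style keys and is far narrower than real tools such as gitleaks or GitHub secret scanning):

```python
import re
from pathlib import Path

# Minimal secret scanner of the kind a git pre-push hook (or the agent
# itself) could run. Pattern: "sk-" followed by 20+ key-like characters.
KEY_PATTERN = re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b")

def scan_text(text: str) -> list[str]:
    """Return any strings that look like hard-coded API keys."""
    return KEY_PATTERN.findall(text)

def scan_tree(root: str) -> dict[str, list[str]]:
    """Scan every file under root; map path -> suspected keys."""
    hits: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            try:
                found = scan_text(path.read_text(errors="ignore"))
            except OSError:
                continue  # unreadable file; skip it
            if found:
                hits[str(path)] = found
    return hits

# Wired into .git/hooks/pre-push, a non-empty result would block the push.
print(scan_text('client = OpenAI(api_key="sk-abc123def456ghi789jkl012")'))
```

This sidesteps the blame question entirely: neither the human nor the agent has to "notice" the key if the push path refuses to ship it.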
GitHub Adds "Rubber Duck" Review Agent to Copilot CLI

GitHub has launched an experimental "Rubber Duck" mode in Copilot CLI, bringing a second AI model into the loop to review, challenge, and validate the primary agent's work before execution.

What's interesting isn't just the feature; it's the pattern.

🔹 Second Opinion by Design: A separate model from a different AI family evaluates plans before they run.
🔹 Focused Review Layer: It flags missed assumptions, edge cases, and hidden risks.
🔹 Better Outcomes on Complex Tasks: Especially effective on multi-file, high-step problems where errors compound.
🔹 Agent + Reviewer Pattern: Introduces a structured "builder + critic" dynamic inside AI workflows.

As agents become more autonomous, the risk isn't that they can't execute; it's that they execute flawed plans too confidently. Rubber Duck introduces friction in the right place: before things break.

At GlenFlow, we see this as a natural next step in agentic development: not just more powerful agents, but systems of agents that challenge each other. Because in an AI-native workflow, quality won't come from a single smarter model; it'll come from orchestrated disagreement.

Read more: https://lnkd.in/dUwd5dms

#AI #GitHubCopilot #AICoding #AgenticAI #DevTools #SoftwareEngineering #FutureOfWork #GlenFlow
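The builder + critic dynamic is easy to picture as a minimal sketch. The two stub functions below stand in for real model calls; nothing here reflects the actual Rubber Duck implementation, which isn't public in this detail:

```python
# A toy "builder + critic" loop: one agent proposes a plan, a second
# agent reviews it, and execution is blocked if the critic objects.
# Both agents are stubbed with trivial heuristics for illustration.

def builder_propose(task: str) -> list[str]:
    """Primary agent: turn a task into an ordered plan (stubbed)."""
    return [f"step: analyse '{task}'", "step: edit files", "step: run tests"]

def critic_review(plan: list[str]) -> list[str]:
    """Reviewer agent: flag risky or missing steps (stubbed heuristic)."""
    issues = []
    if not any("test" in step for step in plan):
        issues.append("plan never runs the test suite")
    if any("delete" in step or "force" in step for step in plan):
        issues.append("plan contains a destructive step")
    return issues

def run_with_review(task: str) -> str:
    plan = builder_propose(task)
    issues = critic_review(plan)
    if issues:
        # Friction lands *before* execution, not after things break.
        return "blocked: " + "; ".join(issues)
    return "executed: " + " -> ".join(plan)

print(run_with_review("rename config module"))
```

The design point is the control flow, not the heuristics: the critic sits between plan and execution, so a flawed plan fails cheaply instead of confidently.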
You probably think GitHub Copilot is just fancy autocomplete... But here's what most people miss: AI Skills aren't simple automation. They're fundamentally different.

While batch files and traditional automation follow rigid, pre-programmed rules, AI Skills analyze your *entire codebase*. They detect custom base classes, identify architectural patterns, understand your minimal APIs, and recognize your unique conventions. Then they trigger intelligent actions based on natural language, not scripts.

The practical implication? You're not just saving keystrokes. You're getting a coding partner that understands *your* code, not generic code. It adapts to your team's patterns, your project's architecture, your specific way of building things.

This changes everything for developers and technical leaders. It's the difference between a tool that helps you write code faster and a tool that actually understands what you're trying to build.

So here's my question: are you leveraging AI Skills to work *with* your codebase's unique patterns, or are you still treating them like advanced autocomplete?

#AI #GitHub #Development #CodingTools
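The kind of analysis described above, such as detecting which custom base classes a project relies on, can be sketched with nothing but the standard library. A toy illustration (the sample source and the counting heuristic are invented; this is not how Copilot actually works internally):

```python
import ast

# Walk Python source and count how often each base class is inherited
# from, revealing a project's own conventions (e.g. a custom repository
# base class). The sample source below is invented for illustration.
SAMPLE = """
class BaseRepository: ...
class UserRepository(BaseRepository): ...
class OrderRepository(BaseRepository): ...
"""

def base_class_usage(source: str) -> dict[str, int]:
    """Count inheritance uses of each base class in the source."""
    counts: dict[str, int] = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef):
            for base in node.bases:
                if isinstance(base, ast.Name):
                    counts[base.id] = counts.get(base.id, 0) + 1
    return counts

print(base_class_usage(SAMPLE))  # → {'BaseRepository': 2}
```

A tool with this information can suggest `class PaymentRepository(BaseRepository)` instead of a generic class, which is the "your code, not generic code" difference in miniature.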
Your code is training GitHub's AI models right now. Unless you opted out.

Today I checked my GitHub settings and found this: "Allow GitHub to collect and use my Inputs, Outputs, and associated context to train and improve AI models."

Enabled. By default.

This means every prompt you write in Copilot, every code suggestion you accept, every conversation feeds into model training.

Here's what most people miss:
→ This is separate from Copilot functioning. Disabling it doesn't break anything.
→ It only controls whether your data trains future models.
→ On Enterprise/Business plans, it's off by default. On personal plans, it's on.

How to check: Settings → Copilot → "Allow GitHub to use my data for AI model training" → Disabled. Takes 10 seconds.

If you work with proprietary code, client projects, or anything sensitive, you probably want this off.

Have you checked yours?

#GitHub #Copilot #Privacy #AI #SoftwareEngineering
Andrej Karpathy's "LLM wiki" vision just became a real app for macOS 🤯 And it might quietly redefine how humans work with AI agents.

It's called Tolaria. Free. Open source. Local-first. No cloud. No subscriptions. No proprietary lock-in. Just markdown files and AI agents working together.

That simplicity is the breakthrough. Because this isn't just another note-taking app. This is infrastructure for AI-native thinking.

Every vault is a Git repository. Meaning: built-in version history, transparent changes, human-readable knowledge. And everything stays plain markdown with YAML frontmatter. No black boxes.

That matters more than people realize. Most AI tools today trap your data inside proprietary systems. Tolaria does the opposite. It treats knowledge like code.

And here's what chills me most 👇 The creator reportedly runs a vault of 10,000 notes through AI agents every single day. Agents create notes, edit them, connect ideas, and follow instructions through AGENTS files. Not just retrieval. Autonomous knowledge work.

📊 The engineering stats are insane:
2,000 commits shipped
100K+ lines of code
3,000+ tests
85% coverage
9.9/10 code health
70+ architecture decision records

This isn't a prototype. This is a blueprint.

And the deeper signal is impossible to ignore: we're moving from "AI tools" to "AI operating environments." Spaces where humans and agents co-build memory, workflows, and decisions side by side. Not replacing thought. Extending it.

What GitHub did for software collaboration, apps like Tolaria may do for human intelligence itself.

⚡ Imagine this in 3–5 years: your second brain, your research assistant, your memory system, your autonomous agents, all living in one evolving knowledge graph. Local. Versioned. AI-native.

So the real question is: will future operating systems be designed for humans, or for humans + AI agents together?
#ArtificialIntelligence #AI #AndrejKarpathy #OpenSource #KnowledgeManagement #AIAgents #MachineLearning #FutureOfWork #Markdown #MacOS
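The plain-markdown-with-YAML-frontmatter format described above is simple enough to hand-parse. A minimal stdlib-only sketch (the note contents are invented, and a real vault would use a proper YAML parser rather than this line splitter):

```python
# One note in the "markdown + YAML frontmatter" style: metadata between
# "---" fences, ordinary markdown below. Contents are invented.
NOTE = """---
title: Agent memory design
tags: [ai, markdown]
---
Everything below the frontmatter is ordinary markdown.
"""

def split_frontmatter(text: str) -> tuple[dict[str, str], str]:
    """Return (frontmatter fields, markdown body) for one note."""
    if not text.startswith("---"):
        return {}, text                      # no frontmatter block
    header, _, body = text[3:].partition("\n---\n")
    fields = {}
    for line in header.strip().splitlines():
        key, _, value = line.partition(":")  # naive "key: value" split
        fields[key.strip()] = value.strip()
    return fields, body.lstrip("\n")

meta, body = split_frontmatter(NOTE)
print(meta["title"])  # → Agent memory design
```

This is the "no black boxes" property in practice: both humans and agents can read and rewrite the same file with twenty lines of code, and Git versions every change.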
✨🛠️ 𝗖𝗼𝗱𝗲𝘅 𝗶𝘀 𝗡𝗼𝘄 𝗕𝗲𝘁𝘁𝗲𝗿 𝗧𝗵𝗮𝗻 𝗘𝘃𝗲𝗿.

𝗪𝗵𝗮𝘁 𝗱𝗶𝗱 𝗢𝗽𝗲𝗻𝗔𝗜 𝗷𝘂𝘀𝘁 𝘂𝗽𝗱𝗮𝘁𝗲? They made Codex much more powerful. Instead of just helping you write code, it can now act more like a digital worker. Here's what that means in simple terms:

🤖 𝟭. 𝗪𝗼𝗿𝗸𝘀 𝗶𝗻 𝘁𝗵𝗲 𝗯𝗮𝗰𝗸𝗴𝗿𝗼𝘂𝗻𝗱
• It can run while you're doing other things on your computer
• Doesn't interrupt your work

🧑💻 𝟮. 𝗖𝗮𝗻 𝗰𝗼𝗻𝘁𝗿𝗼𝗹 𝘆𝗼𝘂𝗿 𝗰𝗼𝗺𝗽𝘂𝘁𝗲𝗿 (𝗸𝗶𝗻𝗱 𝗼𝗳)
• Open apps
• Click buttons
• Type for you
👉 Like a helper using your computer

⚡ 𝟯. 𝗠𝘂𝗹𝘁𝗶𝗽𝗹𝗲 𝗔𝗜 𝗮𝗴𝗲𝗻𝘁𝘀 𝗮𝘁 𝗼𝗻𝗰𝗲
• It can run several tasks in parallel
👉 𝗘𝘅𝗮𝗺𝗽𝗹𝗲: writing code + testing + debugging at the same time

🌐 𝟰. 𝗕𝘂𝗶𝗹𝘁-𝗶𝗻 𝗯𝗿𝗼𝘄𝘀𝗲𝗿
• Helps build websites or games directly

🧠 𝟱. 𝗠𝗲𝗺𝗼𝗿𝘆
• Remembers what you worked on earlier
👉 So you don't need to repeat everything

🎨 𝟲. 𝗜𝗺𝗮𝗴𝗲 𝗴𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝗼𝗻
• Can create mockups or design ideas
👉 Useful for apps or products

🔌 𝟳. 𝗖𝗼𝗻𝗻𝗲𝗰𝘁𝘀 𝘄𝗶𝘁𝗵 𝗼𝘁𝗵𝗲𝗿 𝘁𝗼𝗼𝗹𝘀
• Works with apps like Slack, Google Calendar, GitLab, etc.
👉 So it can actually do tasks across your workflow

🧩 𝗪𝗵𝗮𝘁'𝘀 𝘁𝗵𝗲 𝗯𝗶𝗴 𝘀𝗵𝗶𝗳𝘁?
𝗕𝗲𝗳𝗼𝗿𝗲: 👉 AI = "help me write code"
𝗡𝗼𝘄: 👉 AI = "𝗱𝗼 𝘁𝗵𝗲 𝘄𝗼𝗿𝗸 𝗳𝗼𝗿 𝗺𝗲"

𝗘𝘅𝗮𝗺𝗽𝗹𝗲:
• Read your Slack messages
• Check your calendar
• Create a to-do list
• Then actually execute tasks

That's why it's called a "𝘄𝗼𝗿𝗸𝗳𝗹𝗼𝘄 𝗮𝗴𝗲𝗻𝘁" and not just a coding assistant.

Ashu Tosh Bhardwaj Amit Singal HARSHENDU SHRIDHAR Sachit Wadhawan

#AI #MachineLearning #ArtificialIntelligence #GenAI #LLMs #AIArchitecture #SoftwareArchitecture #SystemDesign #MLOps #LLMOps #AIInfrastructure #AgenticAI #AIOrchestration #NeuralNetworks #DeepLearning #ModelArchitecture #EnterpriseAI #OpenAI #TechInnovation #DigitalTransformation #CloudArchitecture #AIGateway #WorkflowAutomation #FutureOfTech
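The "multiple agents at once" idea reduces to a familiar mechanism: independent tasks running in parallel and reporting back. A minimal sketch using threads, where the three trivial functions stand in for write/test/debug agents (the real Codex orchestration is, of course, far more involved):

```python
from concurrent.futures import ThreadPoolExecutor

# Three stand-in "agents"; in a real system each would be a long-running
# model-driven task rather than an instant function call.
def write_code() -> str:
    return "code written"

def run_tests() -> str:
    return "tests passed"

def debug() -> str:
    return "no bugs found"

# Submit all three tasks at once and collect results in submission order.
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(task) for task in (write_code, run_tests, debug)]
    results = [f.result() for f in futures]

print(results)  # → ['code written', 'tests passed', 'no bugs found']
```

The payoff is the same as in the post's example: writing, testing, and debugging overlap in time instead of waiting on each other.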
If you'd told me two years ago that I'd be telling non-technical executives to learn markdown and Git, I would have laughed. These are developer tools. Or they were.

Here's what changed: every AI agent I use (Claude Code, Copilot, custom workflows) stores its knowledge, memory, and instructions in plain text files. Markdown and version control. That's it.

The skills, the context, the accumulated knowledge that makes your AI agent actually useful? It lives in files you can read, edit, and share. Not locked inside a model. Not hidden behind an API.

This means:
→ Your AI agent's "brain" is portable across models
→ Anyone who can write structured text can teach an agent
→ The technical barrier to building AI workflows is lower than most people think

I have a favorite markdown editor now (Typora, if you're curious). I version-control my AI workflows through GitHub. Two years ago these were niche developer tools. Now they're how I get work done.

The skills that matter in the AI era aren't always the ones you'd expect.

#AI #Productivity #FutureOfWork #Skills #AgenticAI
I had a spare 10 minutes at work with Copilot yesterday and 5 minutes with Gemini NotebookLM today. Some waking thoughts 🙂

🚀 How do you empower your team to experiment with AI and automation without risking sensitive data or breaking the budget?

Copilot, Gemini and I have been exploring a highly effective framework: an Azure LLM R&D Sandbox. This is a low-cost, tightly controlled internal environment designed to safely enable experimentation with local LLMs, automation, and collaborative Python learning. Instead of top-down, expensive AI rollouts, this is a people-first capability investment designed to encourage bottom-up improvement ideas from the staff closest to operational challenges.

Here is why this model is a win for IT, Security, Finance, and HR alike:

🛡️ Zero Risk to Production: The sandbox operates with strict guardrails: no customer data, no public exposure, and no external AI services. It relies solely on CPU-only local models (like Gemma small) and synthetic or anonymized examples.

💰 Highly Cost-Effective: The entire platform runs on a single shared VM and costs under £1,000 per year for 20 staff members: roughly £47 per user annually.

⏱️ Time-Bound Experimentation: Staff are given a maximum of 2 hours per week to use a shared JupyterHub environment to learn Python, test automation concepts, and problem-solve together. This strict timeboxing ensures it never distracts from core operational responsibilities.

📈 Future-Ready Upskilling: The primary focus is on generating learning artifacts, not just tools. Even if a proof-of-concept is discarded, the foundational skills gained in computational thinking and digital literacy remain.

Innovation doesn't always require massive investments or taking on huge risks. By providing protected time and safe tools, you can evidence value and upskill your workforce before ever needing to scale.

How is your organization safely testing out local LLMs and automation? Let's discuss in the comments!
👇 #AI #Azure #TechLeadership #Innovation #Upskilling #LocalLLM #PythonLearning #FutureOfWork #Inspire
AI is writing nearly 50% of all code globally right now. And demand for developers is still rising.

Nobody talks about why. Here's the actual reason.

———

More AI-generated code = more software being built. More software being built = more demand for people who can review, architect, and ship it.

The constraint isn't code output anymore. It's judgment.

———

The numbers behind this:
→ GitHub: 43M pull requests merged per month, up 23% year-over-year
→ Developer commits: jumped 25% to 1 billion annually
→ Google + Microsoft: 25–30% of internal code is already AI-assisted

The volume of software is exploding, not contracting.

———

But here's the part that doesn't get talked about: 45% of AI-generated code contains security vulnerabilities. Teams using AI heavily are seeing higher code churn. AI generates fast. Developers who can't review critically are shipping problems they can't see.

———

The new developer split happening in 2026:

Group A: Uses AI to generate more code, faster. Ships more bugs. Gets outpaced eventually.

Group B: Uses AI to generate code, reviews critically, catches what AI misses. Ships better products faster.

Same tools. Completely different outcomes.

———

Which group are you in? And which one are you actively building toward? Be honest. 👇

#SoftwareDevelopment #AIEngineering #Developers #GitHub #CodeReview #AIProductivity #TechCareers