When an organisation responsible for delivering nationwide government services moves fast with AI using GitHub Copilot, doing it securely and transparently is not optional: it is essential (and you can do it the same way, read below).

This is why I find the cplt project from Norway particularly interesting. It is an open source project built by NAV (Norwegian Labour and Welfare Administration), which operates at a scale where trust, security, and reliability are absolutely critical. For them, adopting GitHub Copilot is not about experimentation; it is about enabling developers to move faster without compromising national-level responsibilities.

What cplt does is refreshingly pragmatic. It acts as a drop-in sandbox wrapper for the GitHub Copilot CLI on macOS, using Apple's kernel-level sandbox to ensure the AI agent can work on your codebase while access to secrets, credentials, and sensitive system resources is strictly controlled. No magic, no hand-waving: just auditable, well-documented security decisions you can actually read and reason about.

I really appreciate the philosophy behind this project. It shows that "move fast" and "be secure" are not opposites, especially in the public sector. With the right engineering choices, strong defaults, and openness about trade-offs, AI developer tools can be adopted responsibly even in environments where the stakes are very high.

Ready to start? Here is the repo: https://lnkd.in/epj5B6V7

This is a great example of how open source, public sector engineering, and modern AI tooling can come together to raise the bar for everyone. 👏 Hats off to Hans Kristian Flaatten 🕊️🍉 and the NAV team for building and sharing this to set a strong reference point for secure GitHub Copilot adoption.

Morten Stange Bye, Haakon Hasli, Christian Tryti, Else Tefre, Francesco Manni, Jaime De Mora, Pankaj Agrawal, Muhammad Daniyal (Dani), Ömür Sert, Adil I., Cornelia Bjørke-Hill

#GitHubCopilot #AINativeDevInfra #AINativeDevSecurity #DevSecOps
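To get a feel for the mechanism, here is a minimal, purely illustrative sketch of the wrapper idea. This is not cplt's actual code or profile: the denied paths are assumptions, and a production profile would sensibly deny by default rather than allow by default. On macOS, the kernel-level sandbox is reachable from user space through the sandbox-exec shim and its profile language.

```python
import subprocess

# Illustrative only, NOT cplt's actual profile. A serious profile denies
# by default and also covers keychains, browser data, env files, etc.
SANDBOX_PROFILE = r"""
(version 1)
(allow default)
(deny file-read* (subpath "/Users/me/.ssh"))
(deny file-read* (subpath "/Users/me/.aws"))
(deny file-read* (literal "/Users/me/.netrc"))
"""

def run_sandboxed(cmd: list[str]) -> int:
    """Run cmd under Apple's sandbox via the macOS sandbox-exec shim."""
    return subprocess.run(["sandbox-exec", "-p", SANDBOX_PROFILE, *cmd]).returncode

if __name__ == "__main__":
    # Drop-in idea: run_sandboxed(["copilot"]) instead of plain `copilot`.
    run_sandboxed(["cat", "/Users/me/.ssh/id_ed25519"])  # read is denied
```

The appeal of this approach is exactly what the post highlights: the policy is a short, readable text file, so the security decisions are auditable rather than buried in a binary.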
Maxim Salnikov’s Post
More Relevant Posts
-
🔥 80% of Devs Feed AI Their Whole Codebase… Then Cry When It Leaks IP on GitHub

Last Tuesday, GitHub quietly updated its Copilot terms: any repo marked "public" is now fair game for model training unless you hit a buried opt-out. Same day, a startup in Miami woke up to find its proprietary payment gateway cloned line for line in an AI tutorial that ranks page one on Google.

Why it matters:
1. Public no longer means "ignore me." If your repo is public, even "just for backup," it's training data.
2. Private repos on free tiers still get scanned for "security insights" unless you toggle two more switches.
3. Once your code trains the model, DMCA takedowns won't erase the knowledge; it's baked in.

My take after 9 years: I started backing up every side project to a paid, zero-knowledge Git host. Five bucks a month beats the cost of rewriting IP. For client work, we now run a "golden repo" rule: a repo goes public only after it has been scrubbed of keys, logos, and anything that could end up in a competitor's prompt. (A minimal pre-publish scan is sketched below.)

Here's what this means for you as a business owner: treat GitHub like a public forum, not a flash drive. Mirror, don't originate.

What do you think? Overhyped, or the end of open source as we know it? Check whether your repos are still on "vintage" 2023 settings before your next commit.

#TechNews #WebDevelopment #AI #WordPress #DigitalMarketing #Technology #GitHub #Copilot #CodeSecurity #FreelanceLife #DevTips #InfoSec #StartupLife #PrivacyMatters #TechTrends
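As one concrete way to enforce that "scrub before public" rule, here is a minimal sketch of a pre-publish secret scan. The patterns are a tiny illustrative subset; dedicated scanners such as gitleaks or trufflehog ship hundreds of tuned rules and should be preferred in practice.

```python
import re
from pathlib import Path

# Tiny illustrative subset of secret patterns; real scanners cover far more.
PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key block": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "Generic API key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}"),
}

def scan(repo_root: str) -> list[tuple[str, str]]:
    """Return (file, rule) hits for every file under repo_root."""
    hits = []
    for path in Path(repo_root).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue  # skip directories and Git internals
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than crash
        for name, pattern in PATTERNS.items():
            if pattern.search(text):
                hits.append((str(path), name))
    return hits

if __name__ == "__main__":
    for file, rule in scan("."):
        print(f"BLOCK PUBLISH: {rule} found in {file}")
```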
-
🚀 How OpenClaw Became the Most-Starred GitHub Project in History 🚀

In just a few months, OpenClaw, an open-source autonomous AI agent, has rewritten the history of developer engagement on GitHub. 🌍✨ What makes this milestone astonishing is not just the stars, but the pace and community enthusiasm behind them.

📌 Key Highlights:

🔥 Record-breaking Growth
OpenClaw rose from its public launch in late 2025 to become the most-starred software project on GitHub by early 2026, surpassing tech giants like React and even Linux in less than four months. React took over a decade to accumulate its star count; OpenClaw did it in weeks.

💡 Why Developers Flocked to It
OpenClaw isn't just another library; it's an agent framework that lets you build autonomous workflows powered by large language models. It runs locally and integrates with tools like messaging apps, task managers, calendars, and more, essentially automating digital life in ways many hadn't imagined.

🌐 A Community-Powered Phenomenon
The open-source community didn't just star the repo: they forked it, extended it with plugins, built ecosystems around it, and debated its implications for automation, privacy, and AI governance. That energy is what transformed a hobby project into a cultural moment.

🔁 Beyond the Numbers
While stars are a vanity metric, the speed at which OpenClaw captured developer interest says something deeper:
➡️ People are excited about agentic AI that acts on their behalf
➡️ Control, extensibility, and self-hosting matter
➡️ Open source still drives innovation at scale

📣 Big Congrats to the OpenClaw community and creator Peter Steinberger! This is more than a GitHub milestone; it's a sign of where the future of software and productivity may be headed.

#OpenSource #GitHub #AI #DeveloperCommunity #Innovation #OpenClaw
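For anyone who hasn't built with agent frameworks, here is a minimal, purely illustrative sketch of the loop they are built around. This is not OpenClaw's actual API: llm() is a stub standing in for a local model call, and the tool names are made up.

```python
# Illustrative agent loop: plan with an LLM, act with a tool, observe, repeat.
# NOT OpenClaw's API; llm() and the tool registry are hypothetical stand-ins.

def llm(prompt: str) -> str:
    """Stub planner. A real agent calls a local model here and gets back
    either 'tool_name:args' or 'DONE:final answer'."""
    return "DONE: (stubbed) wire a real model call in here"

TOOLS = {
    "calendar.add": lambda args: f"added event {args!r}",
    "message.send": lambda args: f"sent message {args!r}",
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        decision = llm("\n".join(history))      # plan the next action
        if decision.startswith("DONE:"):
            return decision[5:].strip()          # agent decided it's finished
        name, _, args = decision.partition(":")
        tool = TOOLS.get(name, lambda a: f"unknown tool: {name}")
        history.append(f"ACTION: {decision}\nRESULT: {tool(args)}")
    return "step budget exhausted"

print(run_agent("Schedule a team retro for Friday"))
```

The self-hosting appeal the post mentions falls out of this shape: when the loop, the model, and the tools all run on your machine, you control what the agent can see and do.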
-
I rarely post like this, but this needs to be said.

I've been a paying GitHub Copilot user for a while now, and the recent rate limits completely broke its value for serious engineering work. I'm not "vibe coding." I'm working on complex, production-grade systems and currently building proprietary tooling in the offensive security space. In that environment, lightweight models and "auto mode" are not just insufficient - they're irrelevant. I need consistent access to the most capable models to solve real problems.

Instead, what we got is this: you pay full price, but hit rate limits after just a handful of meaningful requests to the only models that actually help (Opus, Sonnet, Codex). It genuinely feels like buying a full pizza… and being told you can eat one slice per week.

This isn't about wanting "more for free." It's about predictability, transparency, and alignment with how professionals actually work.

If you're in a similar situation, here are a few practical ways forward: LEAVE GITHUB COPILOT, save money, and run your own local LLM. You thought I would suggest something else? It's the only way out. They will always rate-limit you, even if you pay a lot of money. They have the technology and hold the power, while we're not in control.

I still believe in AI-assisted development. It's already changed how we build software. But pricing and rate-limiting strategies that ignore real-world usage patterns will push serious users away - not because we don't want to pay, but because we can't work like this.

Curious how others are adapting - especially those working on large, complex systems.
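For anyone weighing the local-LLM route, here is a minimal sketch of talking to a locally served model. It assumes an Ollama server on its default port with a model already pulled (the model name is just an example; any local serving stack with an HTTP API works the same way).

```python
import json
import urllib.request

# Assumes a local Ollama server at its default address with a model
# already pulled, e.g. `ollama pull qwen2.5-coder`. Adjust to taste.
def ask_local_llm(prompt: str, model: str = "qwen2.5-coder") -> str:
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({"model": model, "prompt": prompt,
                         "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_local_llm("Write a Python function that reverses a linked list."))
```

No per-request billing, no weekly caps: the only limit is your own hardware, which is exactly the trade the post is arguing for.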
-
I finally did it. I achieved the dream of the Agent Domini (AD) era: multiple Claude Code agents building stories in parallel. I am the "Lord of the Agents." I am…

…completely blocked by GitHub Actions. 🛑

It turns out that while my AI agents can plan, code, and branch-hop at superhuman speeds, my CI/CD pipeline is still running on a human-scale budget. My "Included usage" of 2,000 minutes just evaporated in a couple of weeks. In the BC (Before Claude) world, the bottleneck was my brain. In the AD world, the bottleneck is my credit limit: "My One Ring."

I have officially surrendered and switched to the GitHub Pro plan.

If you're moving toward a spec-driven, multi-agent workflow, here is a word of advice: move long-running GitHub Actions to the workflow_dispatch trigger (see the sketch below) and run as many tests locally as possible. It's easy to melt "Your Precious" budget when your agents are working while you sleep. The fires of Mt. GPU run hot, after all!

My journey has been a whirlwind:
- Three months ago: living comfortably in my own little Shire, dismissing the "AI hype."
- Three weeks ago: figuring out how to assemble my first Fellowship of Agents.
- Three days ago: I didn't even know the GitHub Actions limit existed.

I wonder what I'll learn tomorrow! Anyone else finding that their agents are outrunning limits beyond just tokens? What other bottlenecks should I look out for?
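For reference, here is a sketch of what that trigger change looks like in a workflow file: replacing an on-every-push trigger with a manual workflow_dispatch trigger, so agents pushing branches all night don't burn Actions minutes. The workflow name, input, and script path are hypothetical.

```yaml
# Illustrative workflow: runs only when manually dispatched (via the UI,
# the API, or `gh workflow run`), not on every agent-generated push.
name: heavy-integration-tests
on:
  workflow_dispatch:          # manual trigger instead of `on: push`
    inputs:
      ref:
        description: "Branch or SHA to test"
        required: false
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/run-integration-tests.sh   # hypothetical test script
```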
-
You've been training GitHub's AI for free. Your code. Your prompts. Your late nights. All of it.

On April 24, 2026, GitHub's new Copilot policy goes live. Every developer on Free, Pro, and Pro+ gets opted in automatically. No warning. No consent. No payment.

Here's exactly what GitHub is collecting:
→ Every prompt you type into Copilot
→ Every suggestion you accept or modify
→ Your file names and folder structure
→ Code context around your cursor
→ Your comments and documentation
→ How you navigate between files
→ Every Copilot chat conversation you've had

The worst part? It's opt-out, not opt-in. They're betting you won't notice until it's too late.

How to stop it before April 24:
1. Go to GitHub → Settings → Copilot
2. Find "Allow GitHub to use my data for AI model training"
3. Set it to Disabled
4. Do this for every GitHub account you own

Copilot Business and Enterprise users: you're protected. Free, Pro, Pro+ users: you are the product.

Tag a developer who needs to see this.

#github #developers #webdevelopment
-
GitHub Copilot is getting greedy when we needed it the most. Are we seeing the end of flat-rate AI? 🛑

If you have been trying to sign up for GitHub Copilot Pro or Pro+ this week, you probably noticed the "Unavailable" badge. It's not a glitch. GitHub has officially suspended all new individual subscriptions. And the reasons why are exposing a massive crack in the AI infrastructure world.

Here is what's happening behind the scenes and why it matters to every developer:

GitHub admits that "agentic workflows" have completely broken the economics of their service. Developers aren't just asking for simple auto-completions anymore. We are spinning up parallel, long-running agents that churn through massive contexts. According to GitHub's VP of Product, a handful of these requests can now incur infrastructure costs that exceed a user's entire monthly subscription fee.

To stop the bleeding, GitHub is enforcing aggressive new bottlenecks on existing users. They have introduced tight session and weekly token limits, separate from your "premium requests" allowance. You could have hundreds of requests left, but if your token usage hits the weekly cap (which can happen rapidly when pasting large logs or using agent modes), you will be locked out and receive a user_weekly_rate_limited error.

The Shift to Token Billing 💰

The days of unlimited AI assistance for $10/month are ending. Leaked internal documents suggest Microsoft is preparing to move all Copilot subscribers to strict "token-based billing" as early as June 2026. Instead of flat rates, you'll likely pay a base subscription ($19 or $39) and receive a pooled allotment of tokens to spend. They are also removing access to the most powerful (and most expensive) models: Anthropic's Claude Opus 4.5 and 4.6 are reportedly being stripped from Pro+ subscriptions entirely.

When we need these tools the most, to handle complex, agent-driven development, the providers are slamming the brakes, because they underestimated the compute costs.

#GitHubCopilot #SoftwareEngineering #TechNews #ArtificialIntelligence #Coding #DeveloperTools #Microsoft
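To see why flat rates break under agentic usage, here is a toy calculator for the pooled-token model the leak describes. Every number in it is invented for illustration; no real Copilot prices or pool sizes have been confirmed.

```python
# Toy model of pooled token billing. All constants are made up for
# illustration; no real Copilot pricing is implied.
BASE_FEE = 19.00            # hypothetical monthly base subscription ($)
POOLED_TOKENS = 5_000_000   # hypothetical tokens included in the pool
OVERAGE_PER_M = 8.00        # hypothetical $ per extra million tokens

def monthly_cost(tokens_used: int) -> float:
    overage = max(0, tokens_used - POOLED_TOKENS)
    return BASE_FEE + (overage / 1_000_000) * OVERAGE_PER_M

# A long-context agent run can consume millions of tokens, so a few
# unattended overnight sessions dominate the bill under per-token pricing.
for used in (2_000_000, 5_000_000, 40_000_000):
    print(f"{used:>11,} tokens -> ${monthly_cost(used):,.2f}")
```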
-
Exactly this. Agent scale is a whole new thing. This is like the early days of the web, when we started crushing back ends with self-service transactions. Agent business is the new e-business, and our back ends are going to need to be rebuilt and rearchitected to accommodate that change. The next 100x is coming.
GitHub going down is not the story. The reason it went down is.

My read: this is a demand signal dressed up as an infrastructure failure. AI agents are reviewing pull requests, writing code, merging changes, and running workflows at a rate no developer platform was designed for. Nobody got a six-month warning that this traffic was coming. It just arrived. This is what the AI transition looks like at the infrastructure layer. A step function.

We're seeing the same pattern at Render. This morning, I was in a Slack channel with a company that sells AI agents to other businesses. They keep hitting our API rate limits. Their agents are doing a genuinely unprecedented volume of work. We raise the limits. Then we raise them again. And the message keeps coming back in very direct terms: give us more compute.

Infrastructure built for human-paced usage is colliding with AI-paced usage. The next bottleneck is agent growth. Every company running critical systems will face some version of this. The ones that adapt will be fine. The rest will spend the next two years explaining their status page.
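On the client side of that collision, the baseline protocol courtesy for agents is exponential backoff with jitter on HTTP 429 responses, honoring Retry-After when the server sends it. Here is a minimal sketch (the URL is a placeholder):

```python
import random
import time
import urllib.error
import urllib.request

# Minimal retry-with-backoff sketch for agents hitting a rate-limited API.
def fetch_with_backoff(url: str, max_retries: int = 6) -> bytes:
    delay = 1.0
    for _ in range(max_retries):
        try:
            with urllib.request.urlopen(url) as resp:
                return resp.read()
        except urllib.error.HTTPError as err:
            if err.code != 429:
                raise                              # only retry rate limits
            retry_after = err.headers.get("Retry-After")
            wait = float(retry_after) if retry_after else delay
            time.sleep(wait + random.uniform(0, wait / 2))  # add jitter
            delay = min(delay * 2, 60.0)           # exponential, capped
    raise RuntimeError(f"gave up after {max_retries} rate-limited attempts")

# Usage (placeholder endpoint): fetch_with_backoff("https://api.example.com/v1/jobs")
```

Backoff spreads the load but doesn't create capacity, which is the post's point: the fix ultimately has to come from re-architecting the back end, not from politer clients.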
-
Mapping end-to-end processes and thinking about what your new pressure points will be is what we need to get good at, and quick! Tools like value stream mapping can be super helpful here.
-
The backend load explosion with agents reminds me of what happened to banks with the shift to mobile banking. A technology unlocked a behavioral shift (dopamine-powered), which increased demand on services by a 10-40x factor.

My talk track back then was something like this: "Once upon a time, you got into your horse and buggy and brought your paycheck to the local bank to deposit, maybe twice a month. At some point you got direct deposit and the bank built a website, so you checked it there instead. Then you got a phone, and for some reason banks are seeing people check their accounts on payday 40 times to see if the check has cleared."

True story. And as James said, it required a massive, multi-year rebuild/re-architecture to accommodate. That was just dopamine-fueled curiosity and a device that let us do something on the train or standing in line because we got bored after 7 seconds. This is going to be so much bigger.