GitHub is sitting in a strange spot right now: critical infra for almost every dev team, but struggling with reliability just as AI agents are flooding it with new load. Availability dropping to "one nine" and a steady stream of outages point to infra that was built for humans, not thousands of bots spinning up repos and hammering APIs in the background. At the same time, a tiny startup like Pierre Computer claims to handle repo creation at a scale that looks tailor-made for agents, not people.

If GitHub wants to stay the top git platform for AI-native development, it has to treat agent traffic as first-class. That means an AI-native git layer, better scaling of stateful systems like databases and Redis, and a clear North Star around being the backbone for agentic code lifecycles. The current mix of Copilot branding, internal politics, and no CEO naturally pulls attention away from the boring but essential work of hardening the platform.

But it is also worth being cautious with the clean narrative. GitHub runs a very different workload from a greenfield product in closed beta, with years of baggage, enterprise constraints, and a massive ecosystem to keep stable. Self-reported numbers from a startup and a rough month of incidents are not enough on their own to declare the incumbent broken or the new model proven. Shutting down Copilot or slicing away half the product surface sounds decisive, yet could throw away real value while the market is still figuring out how devs and agents should work together.

The useful takeaway is not that GitHub is doomed or that an AI-only platform will automatically win, but that infrastructure and product strategy now have to be designed around agents and humans coexisting at scale. Getting that tradeoff right - reliability for everyone, while building new, agent-native primitives with a clear focus - will matter a lot more than any single outage or launch over the next few years.

https://lnkd.in/dECY42Vt
Gary Orendi’s Post
-
Back in November I looked at a problem and thought "that's going to be fun to solve." GitHub Copilot CLI running inside a Docker sandbox needs Docker access. Testcontainers, integration tests, build pipelines: they all need a working Docker socket. The obvious answer? Mount /var/run/docker.sock into the container. The obvious answer is also terrifying. That socket is root access to your host machine. Any image, privileged containers, host filesystem mounts. For a human dev, you trust yourself. For Copilot running autonomously... not so much.

Last year I built an Airlock feature that hardens network traffic, routing everything through an allowlist-enforcing proxy. That was step one. The Docker socket broker was the piece I kept putting off because the problem was harder.

The broker sits between the container and the real Docker daemon. Every API call goes through it. 65 endpoints explicitly allowed, everything else blocked. When Copilot tries to create a container, the broker inspects the body: checks the image against an allowlist (empty by default, you name what you trust), blocks privileged mode, blocks host namespace sharing, blocks mounts to /etc, /root, /var, and the socket itself. Combine it with the Airlock I built last year and sibling containers spawned by Copilot get auto-joined to the isolated network too. Network-level and API-level lockdown at the same time.

It wasn't one of those "throw a single prompt at it and it's solved" problems. In standard mode, everything works: Testcontainers, docker builds, multi-service setups. Through Airlock, some scenarios like Testcontainers port connectivity still need work. The feature I built first is ironically the part holding up the last 10%.

copilot_here is growing in ways I didn't expect for a tool I built because I was too paranoid to give GitHub Copilot full shell access. 6 external contributors. 81 stars on GitHub. 24.9k container image downloads in the last 30 days (according to GitHub Packages stats).
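The create-time policy described above can be sketched roughly like this. This is a minimal illustration under assumptions, not the actual copilot_here code: the function name `check_create_request`, the blocked-path list, and the return shape are all mine; the real broker additionally allowlists ~65 endpoints and proxies the request on to the daemon.

```python
# Hypothetical sketch of a Docker socket broker's policy check for a
# POST /containers/create request body, based on the rules described
# in the post: image allowlist, no privileged mode, no host namespace
# sharing, no mounts into sensitive host paths or the socket itself.

BLOCKED_MOUNT_PREFIXES = ("/etc", "/root", "/var")  # /var covers the socket too

def check_create_request(body: dict, image_allowlist: set) -> tuple:
    """Return (allowed, reason) for a container-create request body."""
    image = body.get("Image", "")
    if image not in image_allowlist:            # allowlist is empty by default
        return False, "image not allowlisted: %s" % image

    host = body.get("HostConfig", {})
    if host.get("Privileged"):
        return False, "privileged mode blocked"
    if host.get("PidMode") == "host" or host.get("NetworkMode") == "host":
        return False, "host namespace sharing blocked"

    for bind in host.get("Binds", []):          # entries look like "src:dst[:opts]"
        src = bind.split(":", 1)[0]
        if any(src == p or src.startswith(p + "/") for p in BLOCKED_MOUNT_PREFIXES):
            return False, "mount blocked: %s" % src
    return True, "ok"
```

Default-deny is the important design choice here: with an empty allowlist every create is rejected, so trust is something you add explicitly rather than something you forget to remove.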
If you're running GitHub Copilot CLI and want Docker access without the "hope nothing goes wrong" approach, the deep dive on how the broker works is linked in the comments. And if you find it useful, a star on GitHub helps more than you'd think. #Docker #DevOps #OpenSource #GitHubCopilot #Security
-
> GitHub stopped updating its own status page due to terrible availability ... 90.1% uptime - This means ... issues/degradations for 2.5 hours daily ...
> GitHub struggles to keep up with the increase in load from AI agents generating more code and pull requests ... Claude Code bot contributions growth in the past 3 months has been enormous ... Stream of outages ...

https://lnkd.in/eYHzasTh
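The arithmetic behind the quoted figure is easy to check; 90.1% uptime works out to roughly 2.38 hours of issues per day, which the quote rounds up to 2.5:

```python
# Daily downtime implied by a given uptime percentage.
def daily_downtime_hours(uptime_pct: float) -> float:
    return (1 - uptime_pct / 100) * 24

# 90.1% uptime ("one nine") leaves about 2.38 hours/day of degradation.
print(round(daily_downtime_hours(90.1), 2))  # → 2.38
```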
-
GitHub just admitted their 10X capacity plan was not enough. They now need 30X. The CTO published an update today that reads like a war report. Two incidents in the last week: a merge queue bug that corrupted branch state across 658 repositories, and a search outage that killed UI functionality for hours. Both trace back to the same root cause. Agentic coding.

Since late December 2025, autonomous AI coding agents have been hammering GitHub's infrastructure at a rate nobody planned for. Repository creation, pull requests, API calls, automation, large-repo workloads - all growing exponentially. And here is the part that makes it interesting: a single pull request can touch Git storage, mergeability checks, branch protection, GitHub Actions, search, notifications, permissions, webhooks, APIs, background jobs, caches, and databases. At scale, small inefficiencies compound. Queues deepen. Cache misses become database load. Retries amplify traffic.

GitHub Actions is getting hit especially hard. Agentic workflows spawn long-running, parallel CI sessions that dwarf what human developers generate. Copilot code review now consumes GitHub Actions minutes on top of AI credits. The automation layer was not designed for agents running multi-hour autonomous sessions at this volume.

The free ride is ending. Starting June 1, Copilot moves to usage-based billing measured in AI credits tied to token consumption. GitHub has already paused new sign-ups for several Copilot tiers. A quick chat question and a multi-hour autonomous coding session used to cost the same amount. That math does not work anymore.

Which raises the real question: is the agentic era sustainable on infrastructure built for humans? GitHub is rearchitecting critical systems, isolating services, migrating off legacy frameworks, and pursuing multi-cloud. But the honest read is that the platform is playing catch-up to a usage pattern that showed up faster than anyone modeled.
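"Retries amplify traffic" is worth making concrete. In the worst case, if every layer of an N-deep call chain independently makes R attempts per failing call, the deepest service sees R^N requests for a single user action. A hedged sketch of that arithmetic (illustrative model only, not GitHub's actual topology):

```python
# Worst-case retry amplification in a layered call chain: each layer
# makes `attempts_per_layer` attempts (1 original + retries) for every
# request it receives, so load multiplies at each hop.
def worst_case_amplification(layers: int, attempts_per_layer: int) -> int:
    return attempts_per_layer ** layers

# e.g. a hypothetical 4-layer path (API -> service -> storage -> DB)
# with 3 attempts per layer: 3**4 = 81 requests hit the bottom layer.
print(worst_case_amplification(4, 3))  # → 81
```

This is why retry budgets and jittered backoff matter at platform scale: the amplification is exponential in the depth of the stack, not linear in the number of clients.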
When your 10X plan lasts four months before you need 30X, the planning horizon itself is broken. The agentic era is not a future problem. It is a right-now infrastructure problem. And someone has to pay for it. #GitHub #AgenticAI #DevOps
-
GitHub Copilot Pulls Drawstring On Tighter Developer Usage Limits

GitHub Copilot is popular. The AI-powered code completion tool (originally developed by GitHub and OpenAI) gives software application developers a so-called “AI pair programmer” that offers suggested code snippets and, when called upon, entire functions, directly within an engineer’s Integrated Development Environment (IDE) of choice. All of which means that GitHub Copilot isn’t just popular in terms of total usage; GitHub is also reporting a rise in high-concurrency patterns (many developers requesting the same types of operations at the same time) and intense usage among power users....
-
GitHub losing Ghostty is not just open source drama. It is a warning shot for developer infrastructure.

Mitchell Hashimoto says Ghostty is leaving GitHub after months of reliability issues, including frequent outages affecting PR review, GitHub Actions, and day-to-day maintainer work. He also makes the key point: the problem is not Git itself. It is everything around Git that modern teams depend on now: issues, pull requests, CI, releases, identity, and community workflow.

That's the real signal. For years, GitHub has been treated like a default utility.
-> Not a vendor.
-> Not a dependency.
-> Not a platform risk.
Just “where the code lives.”

But modern software teams do not merely store code on GitHub anymore. They operate through it. When GitHub is down, the repo might still exist, but the factory floor stops:
-> PRs wait.
-> CI stalls.
-> Reviews block.
-> Releases slip.
-> Maintainers lose momentum.

The contrast is interesting. We spent years preaching distributed systems, multi-cloud, backups, failover, and durability. Then we centralized the entire software collaboration layer into one platform and called it convenience.

This is not a “leave GitHub” post. GitHub is still one of the most important developer platforms ever built. But Ghostty leaving should make engineering leaders ask a serious question:
-> What parts of our software delivery process are only “distributed” in theory?
-
Agree with this. Ghostty leaving GitHub isn’t just about outages - it’s a signal about how fragile the entire delivery layer has become. We’ve quietly turned GitHub into more than Git hosting:
* PRs
* CI/CD
* reviews
* releases
* collaboration itself

So when it’s unstable, it’s not “inconvenient” - it’s stoppage. And Mitchell’s point in the announcement is key: it’s not Git that’s the issue, it’s everything built around it now. This isn’t a “leave GitHub” take. It’s a reminder that we’ve centralized a lot more than we admit - and that deserves a second look.
-
You're babysitting pull requests. CI fails, you fix. Reviewer comments, you fix. Rinse, repeat, never ship.

Claude Code quietly shipped a feature that collapses that loop. It's called Auto-fix. Once it's on, Claude subscribes to GitHub webhooks for your PR and responds to every CI failure and review comment without you in the room.

The problem: A reviewer leaves four comments. You context-switch into each one, push a fix, wait on CI, repeat.
The fix: Claude reads each comment, makes the clear edits, asks about the ambiguous ones, and replies to the thread under your GitHub account (labeled as the agent, so reviewers aren't confused).

The problem: A flaky test fails for the fourteenth time this sprint on something unrelated to your change.
The fix: Auto-fix sees the check failure, investigates, pushes a fix. If the answer isn't obvious, it pauses and pings you instead of guessing.

The problem: The reviewer comes online when you're offline. The PR stalls for a day.
The fix: The cloud session stays alive while you sleep. Every event fires in real time. You wake up to a merged PR, not a twenty-item TODO.

One thing to audit before you flip it on. If your repo uses comment-triggered automation - Atlantis, Terraform Cloud, custom GitHub Actions on issue_comment - Claude's replies can trigger them. Fine for staging. Dangerous for production infra.

The GitHub App is required. /web-setup alone won't cut it - install the App on every repo where you want Auto-fix active. Install the App. Open your next PR. Click Auto-fix in the CI bar. The loop gets shorter from there.

#ClaudeCode #Anthropic #DevOps #AIEngineering #GitHub #AgenticAI #AIAgents
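The audit advice above can be sketched as a guard in whatever handler sits behind your comment-triggered automation. This is a hypothetical sketch: the marker strings and the exact payload fields your agent sets are assumptions; check what your agent actually writes before relying on any filter like this.

```python
# Hypothetical guard for issue_comment-triggered automation (Atlantis,
# Terraform Cloud, custom Actions): skip comments that came from an
# agent so its PR replies cannot fire infrastructure commands.
AGENT_MARKERS = ("[claude-code]", "Generated with Claude Code")  # assumed labels

def should_trigger_automation(payload: dict) -> bool:
    """Return True only for comments that look human-authored."""
    comment = payload.get("comment", {})
    author = comment.get("user", {})
    body = comment.get("body", "")
    if author.get("type") == "Bot":          # GitHub App / bot accounts
        return False
    if any(marker in body for marker in AGENT_MARKERS):
        return False                         # agent reply under a human account
    return True
```

Note the second check: because Auto-fix replies under your own account (with an agent label), filtering on the author alone is not enough; you also need to recognize the label in the comment body.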