GitHub Outage, Hashimoto Leaves, CVE-2026-3854

GitHub had a rough week. Three separate events, each significant on its own. Read together, they are harder to dismiss.

1. The outage
On April 27, GitHub was down for roughly 4.5 hours. Search was degraded and Actions jobs were delayed on Larger Runners, traced back to an internal Elasticsearch problem. The downtime itself was not the painful part. The ripple effect was. CI/CD pipelines failed to trigger. PR reviews stalled. Issue comments were lost. npm installs that pulled from github.com timed out. Production deploys via Actions queued up. A lot of teams realized how many of their workflows were anchored to a single platform.

2. Mitchell Hashimoto pulled Ghostty off GitHub
On April 28, Hashimoto (HashiCorp founder, creator of Ghostty) published a post titled "Ghostty Is Leaving GitHub." He is GitHub user 1299. Joined February 2008. Used the platform every day for 18 years. For the past month he had been keeping a journal, marking every day a GitHub outage blocked his work. Almost every day had an X. In his own words: "I want to ship software and it doesn't want me to ship software." The migration plan had been in the works for months; the April 27 outage was coincidental timing, not the trigger.

3. CVE-2026-3854
A critical RCE affecting GitHub.com and GitHub Enterprise Server, CVSS 8.7. The bug itself looked simple: during git push, push option values were not sanitized before being inserted into internal service headers. The result was command injection. A single push to a single repository let an authenticated attacker execute arbitrary commands on GitHub's backend. Given the multi-tenant architecture, code execution on one node could expose millions of repositories sitting on shared storage. Discovered by Wiz Research on March 4. GitHub.com was patched the same day. GHES required an upgrade to 3.19.3 or later. At the time of public disclosure, 88% of GHES instances were still unpatched.

Three different stories. One thing in common: a lot of teams have wired their entire delivery pipeline through a single platform that, for the past few weeks, has been less reliable than the people who depend on it would like.

Migration is not always realistic. But the question is worth asking out loud: if GitHub goes down for 4 hours next week, can your team still ship? Even a plain mirror remote goes a long way (see the sketch below).

#GitHub #SoftwareEngineering #DevOps #OpenSource #Cybersecurity
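A cheap partial answer to that question: keep a second remote and mirror to it. A minimal sketch, assuming a secondary host of your choice; the remote name and Codeberg URL below are placeholders, not a recommendation of any particular host.

```bash
# Add a second remote and mirror every branch, tag, and ref to it.
# Point the URL at whatever secondary host you trust (GitLab, Codeberg, a self-hosted box).
git remote add backup git@codeberg.org:yourorg/yourrepo.git

# One-off full mirror:
git push --mirror backup

# Or keep it fresh from CI after every merge:
git push backup main --tags
```

It does not replace issues, PRs, or Actions, but it keeps clones, hotfixes, and installs that pull straight from Git alive while the primary is down.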
-
Mitchell Hashimoto — co-founder of HashiCorp, creator of Vagrant, Terraform, and more recently Ghostty — is moving his terminal emulator project off GitHub. He's been GitHub user number 1299 since February 2008. That's 18 years of daily use. Over half his life, by his own accounting.

His post about it is worth reading in full, because it isn't a rant. It's something sadder than that — it reads like a breakup letter to a place he genuinely loved. He kept a journal for the past month, marking an X on every day that a GitHub outage negatively affected his ability to work. Almost every day has an X. On the day he wrote the post, he'd already lost two hours to a GitHub Actions outage and couldn't do any PR review. Not a one-off. Not an edge case. Just Tuesday.

The technical issue isn't Git itself — distributed version control doesn't care about GitHub's uptime. The problem is everything built on top of it: issues, pull requests, Actions, the collaboration infrastructure that modern open source actually runs on. When that goes down, work stops. And for Ghostty, it's been going down constantly.

What makes this worth paying attention to isn't just that one developer is switching platforms. It's who the developer is, and what it took to get him there. This is someone who described GitHub as the place that made him happiest. Who doom-scrolled GitHub issues on vacation — not as a complaint, but because he enjoyed it. Who started his first major open source project in part hoping it would get him a job there. GitHub was his dream, and he built an 18-year relationship with it.

And now he's done. Not because the product changed philosophically, not because of a policy dispute, but because it simply stopped working reliably enough to do serious work on.

That's the enshittification story in miniature. You don't have to make a product evil to ruin it. You just have to stop maintaining it well enough to keep the people who love it most. The outages accumulate, the trust erodes, and eventually someone who opened GitHub every single day for 18 years wakes up and does the math.

Ghostty will keep a read-only mirror at the current GitHub URL. Mitchell's personal projects stay for now. But the project — the active, living thing — is leaving.

There's a version of this where GitHub notices and fixes it. He says he'd come back, but only on results, not promises. That's a reasonable bar. We'll see if anyone clears it.
-
GitHub just had one of the worst weeks in its history. And as engineers, we need to talk about it. Here's what happened 👇

🔴 Incident #1 - The Silent Code Killer
On April 23, GitHub's merge queue silently reverted previously merged code across 658 repos and 2,092 PRs - during a 4-hour window. The scariest part? Their automated monitoring caught nothing. They found out via support tickets, 3.5 hours later. The root cause? A change to an unreleased feature that was supposed to be behind a feature flag - but wasn't. The broken code shipped to everyone.

🔴 Incident #2 - The Botnet Blackout
On April 27, a suspected botnet overwhelmed GitHub's Elasticsearch cluster. PR lists, issue lists, project views - all blank. For 4+ hours. Data was fine. You just couldn't see any of it.

🔴 Incident #3 - The Uptime Nobody Talks About
A developer built an unofficial GitHub status tracker that actually counts degraded performance as downtime (wild concept, right?). Current uptime: 85.51%. Industry standard: 99.9%. GitHub's official page classifies broken search, PRs not loading, and slowdowns as "Degraded Performance" - technically up, practically unusable.

The CTO has now issued a public apology. The reason? Agentic AI workflows pushed GitHub way past its designed limits. They planned for 10x capacity growth. By February, they realized they needed 30x.

Three lessons every engineering team should take from this:
1️⃣ Feature flags only work if they're actually enforced - at the infrastructure level, not just in code review.
2️⃣ Monitor for correctness, not just availability. A system can be "up" and completely broken.
3️⃣ How you report incidents is a trust signal. GitHub is now rolling out a 3-tier status system (Degraded / Partial / Major outage) with per-service uptime. That's the right move - just years late.

AI-driven workloads are scaling faster than anyone predicted. If it caught GitHub off guard, ask yourself: is your infrastructure ready?

♻️ Repost if your team uses GitHub. They need to see this.

#GitHub #SoftwareEngineering #DevOps #Engineering #IncidentResponse #FeatureFlags #WebDevelopment
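To put the post's two uptime figures side by side, here is the rough downtime math over a 30-day month. The 30-day window is my assumption; the percentages are the post's.

```bash
# 30 days = 720 hours
# 99.9%  uptime -> 720 * 0.001  ≈ 0.72 hours  (about 43 minutes of downtime)
# 85.51% uptime -> 720 * 0.1449 ≈ 104 hours   (more than four full days)
echo "720 * (1 - 0.8551)" | bc -l   # prints ~104.3
```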
-
GitHub silently deleted your merged code. And you'd never know.

No error. No conflict. No warning. Just a clean merge that quietly rewrote your main branch.

Here's what happened on April 23rd: a bug in GitHub's merge queue caused PRs to build on the wrong base commit.

You reviewed: +29 lines added, -34 removed.
What landed on main: +245 added, -1,137 removed.

Thousands of lines of shipped code. Gone. CI passed. Branch protection ran. The PR showed "Merged." Everything looked fine.

2,092 PRs. 658 repos. 4.5 hours. No public outage banner. Ever.

The recovery? Manual. Comb through commit graphs. Reconstruct history by hand. Re-merge closed PRs. Some teams had dozens of corrupted commits before anyone noticed.

This wasn't an outage. It was an integrity failure. And it exposes something bigger 👇

We've delegated trust to automation without verifying the contract it's keeping. A merge queue has one job: the commit CI tested = the commit that lands. When that breaks silently, everything downstream is suspect. Builds. Deployments. Compliance audits. All of it. (A quick way to spot-check your own main branch is sketched below.)

GitHub is also dealing with a capacity crisis: they planned for 10x growth, realized they need 30x, and have had no CEO since mid-2025. The cracks are showing.

Trust in tooling is built over years. It can crack in an afternoon.

#GitHub #SoftwareEngineering #DevOps #EngineeringLeadership
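A minimal sketch of how a team could spot-check that contract on its own repos, using nothing but git: list recent merge commits on main and print how big each one actually was. A merge that was reviewed as a 30-line PR but shows a four-digit diff deserves a second look. The branch name and the 20-merge window are example values.

```bash
# Show the real size of the last 20 merge commits that landed on main.
git fetch origin main
for sha in $(git log --merges --first-parent -n 20 --format=%H origin/main); do
  printf '%s  ' "$(git log -1 --format=%h "$sha")"
  git diff --shortstat "$sha^1" "$sha"   # e.g. "12 files changed, 245 insertions(+), 1137 deletions(-)"
done
```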
-
I got a message on Friday night at 8:55 PM. "Bharath, I need you to revoke this key immediately."

My stomach dropped. I had pushed a test script to GitHub with a PAT hardcoded in it. No expiry date. Sitting there in a public commit for anyone to find.

The worst part? I was in hospital that day. Tried fixing it from my phone and kept getting a 404 error because my access had already been revoked by then. I messaged my manager and colleague in a panic, apologising repeatedly. You know that feeling when you've made a mistake and you just keep saying sorry because you don't know what else to do 😅

My manager Vallal Peruman sorted it out while I was in the hospital. Revoked the key, fixed the file, cleaned it up. When I finally spoke to my colleague Kevin B. he said something that stuck with me. "No need to apologise. Just be careful next time." That was it. No lecture. No drama. Just fix it and move on. 🙂

Here's what I changed after this.

No more PATs with no expiry. Every token I create now has a 90 day maximum. If it gets exposed and I miss it, at least it has a death date.

Installed git-secrets locally. It scans staged files before every commit and blocks the commit if it finds anything that looks like a credential. 5 minutes to set up, saves you from exactly this situation (a fuller setup is sketched just below):

git secrets --install
git secrets --register-aws

Also went through all my active GitHub tokens after this. Had one sitting there with no expiry that I hadn't used in months. It's deleted.

The mistake wasn't forgetting to remove the key. The mistake was creating a token with no expiry in the first place. One bad habit that turns a small accident into a serious risk.

Grateful for a manager and teammate who handled it the right way. Made it easier to own the mistake and actually learn from it instead of just feeling bad about it.

If you haven't checked your active PATs recently, do it today. github.com > Settings > Developer settings > Personal access tokens. Anything with no expiry or unused for 30 days is a risk sitting there.

Have you made a similar mistake? What did you change after? 👇

#GitHub #DevOps #CloudSecurity #PlatformEngineering #LearningInPublic
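For reference, a slightly fuller version of that setup. The ghp_ regex matches the classic GitHub PAT format; treat it as an example pattern to adapt, not a complete rule set.

```bash
# Install the hooks in the current repo and register the built-in AWS patterns.
git secrets --install
git secrets --register-aws

# Add a custom pattern for classic GitHub personal access tokens (ghp_ + 36 chars).
git secrets --add 'ghp_[A-Za-z0-9]{36}'

# One-off scan of everything already committed, not just staged files.
git secrets --scan-history
```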
-
How to #Migrate Your Open-Source Project Away from #GitHub: A practical guide to migrating open-source projects away from GitHub — covering git history, issues, CI pipelines, and how to keep contributors along the way. https://lnkd.in/dP_fpYZf
-
Back in November I looked at a problem and thought "that's going to be fun to solve."

GitHub Copilot CLI running inside a Docker sandbox needs Docker access. Testcontainers, integration tests, build pipelines. They all need a working Docker socket. The obvious answer? Mount /var/run/docker.sock into the container. The obvious answer is also terrifying. That socket is root access to your host machine. Any image, privileged containers, host filesystem mounts. For a human dev, you trust yourself. For Copilot running autonomously... not so much.

Last year I built an Airlock feature that hardens network traffic, routing everything through an allowlist-enforcing proxy. That was step one. The Docker socket broker was the piece I kept putting off because the problem was harder.

The broker sits between the container and the real Docker daemon. Every API call goes through it. 65 endpoints explicitly allowed, everything else blocked. When Copilot tries to create a container, the broker inspects the body: checks the image against an allowlist (empty by default, you name what you trust), blocks privileged mode, blocks host namespace sharing, blocks mounts to /etc, /root, /var, and the socket itself. Combine it with the Airlock I built last year and sibling containers spawned by Copilot get auto-joined to the isolated network too. Network-level and API-level lockdown at the same time.

It wasn't one of those "throw a single prompt at it and it's solved" problems. In standard mode, everything works: Testcontainers, docker builds, multi-service setups. Through Airlock, some scenarios like Testcontainers port connectivity still need work. The feature I built first is ironically the part holding up the last 10%.

copilot_here is growing in ways I didn't expect for a tool I built because I was too paranoid to give GitHub Copilot full shell access. 6 external contributors. 81 stars on GitHub. 24.9k container image downloads in the last 30 days (according to GitHub Packages stats).

If you're running GitHub Copilot CLI and want Docker access without the "hope nothing goes wrong" approach, the deep dive on how the broker works is linked in the comments. And if you find it useful, a star on GitHub helps more than you'd think.

#Docker #DevOps #OpenSource #GitHubCopilot #Security
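To make the broker idea concrete, here is an illustrative sketch of the kind of checks such a broker can run on a "create container" request body before forwarding it to the real daemon. The field names (Image, HostConfig.Privileged, HostConfig.Binds, PidMode, NetworkMode) are standard Docker Engine API fields; the allowlist and rules are invented for the example and are not copilot_here's actual implementation.

```bash
#!/usr/bin/env bash
# Illustrative policy checks for a Docker "create container" request body. Requires jq.

ALLOWED_IMAGES=("alpine:3.20" "postgres:16")   # example allowlist; empty by default in the real tool

check_create_request() {
  local body="$1" image

  image=$(jq -r '.Image // empty' <<<"$body")
  if ! printf '%s\n' "${ALLOWED_IMAGES[@]}" | grep -qxF -- "$image"; then
    echo "DENY: image '$image' not in allowlist"; return 1
  fi

  if [[ $(jq -r '.HostConfig.Privileged // false' <<<"$body") == "true" ]]; then
    echo "DENY: privileged mode"; return 1
  fi

  if jq -r '.HostConfig.PidMode // "", .HostConfig.NetworkMode // ""' <<<"$body" | grep -qx "host"; then
    echo "DENY: host namespace sharing"; return 1
  fi

  # Reject bind mounts into sensitive host paths (this also covers the Docker socket under /var).
  if jq -r '.HostConfig.Binds[]?' <<<"$body" | grep -Eq '^(/etc|/root|/var)(/|:)'; then
    echo "DENY: bind mount into a sensitive host path"; return 1
  fi

  echo "ALLOW"
}

# Example: a request that tries to mount the Docker socket is rejected.
check_create_request '{"Image":"alpine:3.20","HostConfig":{"Binds":["/var/run/docker.sock:/var/run/docker.sock"]}}'
```

The real broker fronts every other endpoint the same way (65 of them, per the post); the point is that policy is enforced on the API body before the daemon ever sees the request.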
-
My API key was in the GitHub commit. I had 20 minutes before going on stage.

Two years ago, I was at PyData. Sitting in the lobby. Laptop on my knees, making last-minute changes to my code. My heart was already racing. Around 80 people in the room next door. It had been 8 to 10 years since I'd last spoken on a stage like that.

I made one final commit. Pushed it. Closed my laptop. Then opened it again because something didn't feel right. My API key was sitting in the commit, in plain text, on a public repo.

I opened ChatGPT, because I had no time to experiment. I needed someone to tell me exactly what to do, right then. So here's what I learnt, because I wish someone had told me first.

1) Rotate the key. Not later, not after you clean the repo. Right now. Bots scrape GitHub for leaked credentials within minutes, so the second something hits a public repo you treat it as compromised.

2) Regenerate the key, kill the old one.

3) Rewrite the git history. Deleting the branch does almost nothing — the commit still lives in the history, anyone who cloned the repo already has it, and GitHub keeps cached versions too. Use git filter-repo or BFG Repo-Cleaner to remove the file from every commit. Force push. Tell anyone with a clone to re-clone. (A minimal command sequence is sketched after this post.)

4) Check what got used. CloudTrail for AWS keys. The Stripe dashboard for Stripe keys. Access logs if you have them. Don't just clean and move on.

But there's a bigger thing I want to say. "Sensitive" is a much bigger word than people use it for. We talk about it like it only means API keys. But it's also the CSV with customer details. The config file with a database connection string. The internal doc with confidential info. The model file you're not licensed to share. The Slack export you saved as JSON. All of it lives in the same place once you push it.

For data, the fix is harder, because you can't rotate someone's email address. Your job is figuring out what was exposed, who it belongs to, and who needs to be told.

Private repos aren't safe either. Anyone in the org can see what's in it, one compromised account hands an attacker everything, and sometimes private repos get flipped to public by accident. Uber lost the data of 57 million users from a private GitHub repo with AWS keys in it. Private isn't a safety net. It's a slightly smaller blast radius.

I think about that PyData moment a lot, especially now that everyone is shipping faster with AI. We're pushing more code, more often, with less double-checking, because the model wrote it and it looked fine. So I built a Claude Code skill that does that double-checking for me. Before any push, it scans the diff for anything sensitive — API keys, tokens, credentials, CSVs, config files. If it finds something, it stops me and asks. Every time. Non-negotiable.

I rotated the keys from the lobby that day, two years ago. Walked on stage and pretended everything was fine. The talk went well. Nobody knew. But I did.

#Day4 #30DaysChallenge
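For step 3, a minimal command sequence, assuming git-filter-repo is installed and the leaked file is scripts/demo.py (a placeholder path):

```bash
# Rewrite history so the file no longer exists in any commit.
git filter-repo --path scripts/demo.py --invert-paths

# filter-repo removes the origin remote as a safety measure; add it back, then force push.
git remote add origin git@github.com:yourorg/yourrepo.git
git push origin --force --all
git push origin --force --tags

# Anyone with an existing clone still has the secret locally; they need to re-clone.
```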
-
GitHub user #1299 just walked away. After 18 years.

Mitchell Hashimoto — the man behind Terraform, Vagrant, and Ghostty — opened GitHub every single day for over half his life. During painful breakups. At 4 AM in college. Even on his honeymoon, while his wife slept beside him.

Last week, he wrote a goodbye letter. "I want to code. And I can't code with GitHub anymore."

For one month, he kept a journal. An "X" for every day a GitHub outage broke his work. Almost every day had an X.

So what happened to the world's most reliable code platform? AI agents happened. The numbers are staggering:

→ AI-generated pull requests jumped from 4 million to 17 million in just 6 months
→ Claude Code alone went from 100K weekly commits to 2.6M — a 25x leap
→ GitHub Actions usage: 500M minutes/week in 2023 → 2.1 BILLION today
→ The platform now processes 275 million commits every single week

GitHub's CTO publicly admitted it last week: they planned a 10x capacity expansion in October 2025. By February 2026, they realized they needed 30x.

Tools like Claude Code, Codex, Cursor, and Copilot Agent don't sleep. They don't review their own work. And they don't pay per usage like humans do. One engineer estimated only 1 in 10 AI-generated PRs is actually legitimate. The other 9? Pure noise — flooding maintainers, draining infrastructure, breaking the system.

The result?

→ April 23: a merge queue bug silently reverted commits across 658 repositories
→ April 27: an 18-hour global outage took GitHub Search down worldwide
→ Real uptime: 90.21% — far below GitHub's promised 99.9% SLA

Zig already left. BookStack just finished migrating. Now Ghostty is going.

Maybe this is the moment we stop asking "how fast can AI write code?" and start asking "who pays the cost when it does?"

Is this a temporary scaling problem — or the start of a great migration?

Don't take my word for it. Read the sources:
→ Mitchell's letter: https://lnkd.in/d8hcU6eu
→ GitHub's official apology: github.blog ("An update on GitHub availability")
→ Live incidents: githubstatus.com

#AI #GitHub #SoftwareEngineering #DeveloperTools #OpenSource #AIagents
-
GitHub breaks almost every day. Or so it seems. Mitchell Hashimoto, the author of Ghostty and one of the most respected open source developers out there, kept a personal journal for a month, marking every day GitHub blocked his work. The result: more or less 90% actual uptime, against the 99.9% stated in the SLA. He then wrote a post that went viral, to the point where GitHub's COO publicly apologized. And it doesn't look like an isolated case: Zig has migrated to Codeberg, GitHub's own company blog admits they're not meeting their SLAs, and GitHub Actions is increasingly unstable. The root cause seems structural: Microsoft is shifting resources and attention toward AI (CoreAI division), and GitHub is paying the price of that prioritization. Maybe it's time to seriously evaluate alternatives? https://lnkd.in/e7JNC8yc #GitHub #OpenSource #DevOps #Reliability #SoftwareEngineering
-
GitHub used to be a decent signal to check if someone was really passionate about code. Not perfect. But decent. Today, because GitHub activity is visible and often judged, it is easier to optimize for appearance instead of substance.

Then I saw a tweet using GitHub activity as a hiring metric. And honestly, I thought it was absurd.

We are always looking for signals that help us find candidates who genuinely care about the craft. The one metric that will magically put the spotlight on the right people.

But let's assume we only look at commit count. That metric can be faked. You can run a script that creates commits with different dates, then push everything (the sketch below shows how little it takes). Congratulations. You can now "commit" on January 3rd even if you did not write a single line of code that day.

That does not mean GitHub is useless. It just means GitHub activity alone is a weak signal. It becomes more useful when you use it as context. Look at the public repositories. What was built? How was it built? Is there real thinking behind the code? Can the candidate explain the tradeoffs?

Even then, it is still not enough. Public repos can be copied. Commit graphs can be inflated. Projects can look more impressive than they really are.

This is why metrics alone are weak. Context makes them useful. A conversation around real work will tell you more than a contribution graph ever will. Ask questions. Go deeper. Challenge their reasoning. Look at their work, then ask them to explain it. That is where you see if someone actually cares about the craft, or if they just know how to look good on paper.

The best way to find someone passionate about code is not by staring at a green graph. It is by talking to them and understanding how they think.
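To make the "can be faked" point concrete: git lets you set both the author and committer dates on a commit, so a green square proves very little on its own. Illustrative only:

```bash
# A commit that lands on the contribution graph for January 3rd, whenever it is actually run.
GIT_AUTHOR_DATE="2026-01-03T12:00:00" \
GIT_COMMITTER_DATE="2026-01-03T12:00:00" \
git commit --allow-empty -m "daily grind"
# Loop this over a date range and push, and the graph fills itself in.
```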