GitHub broke customers' code bases with a change it a) didn't spot itself and b) was never meant to hit live systems. Its COO is playing down the scale of the impact, saying testing failed to catch the "edge case" that hit 2,000+ customers. 👉 https://lnkd.in/e36VFXpV #github #qualityassurance #qa #featureflag #whoops
GitHub Code Breaks 2,000+ Customer Code Bases
More Relevant Posts
-
GitHub silently deleted your merged code. And you'd never know.

No error. No conflict. No warning. Just a clean merge that quietly rewrote your main branch.

Here's what happened on April 23rd: a bug in GitHub's merge queue caused PRs to build on the wrong base commit.

What you reviewed: +29 lines added, -34 removed.
What landed on main: +245 added, -1,137 removed.

Thousands of lines of shipped code. Gone. CI passed. Branch protection ran. The PR showed "Merged." Everything looked fine.

2,092 PRs. 658 repos. 4.5 hours. No public outage banner. Ever.

The recovery? Manual. Comb through commit graphs. Reconstruct history by hand. Re-merge closed PRs. Some teams had dozens of corrupted commits before anyone noticed.

This wasn't an outage. It was an integrity failure. And it exposes something bigger 👇

We've delegated trust to automation without verifying the contract it's keeping. A merge queue has one job: the commit CI tested is the commit that lands. When that breaks silently, everything downstream is suspect. Builds. Deployments. Compliance audits. All of it.

GitHub is also dealing with a capacity crisis: they planned for 10x growth, realized they need 30x, and have had no CEO since mid-2025. The cracks are showing.

Trust in tooling is built over years. It can crack in an afternoon.

#GitHub #SoftwareEngineering #DevOps #EngineeringLeadership
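That contract is cheap to spot-check yourself. A minimal sketch, assuming your CI job records the exact SHA it tested (TESTED_SHA below is a placeholder, not anything GitHub provides):

# compare the commit CI tested with what actually landed on main
git fetch origin main
git diff --stat "$TESTED_SHA" origin/main
# immediately after your merge lands, a non-empty diff means the queue
# shipped something other than what was tested

In a healthy merge queue the tested speculative merge commit is exactly the commit pushed to main, so this diff should always be empty right after your PR lands.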
-
🚨 Issues with #GitHub today? We're seeing instability across the platform:
❌ Push & pull delays
❌ Pull requests not loading
❌ Actions (CI/CD) failing or stuck
❌ Overall slow performance

This is not a local issue — it's affecting multiple environments.

💡 What I did (and what I recommend): I moved to running my own Git server on open-source Gitea — and honestly, this is something more teams should consider. https://git.xdeye.com/

👉 Here's the practical advice:
✔️ Keep a self-hosted Git backup (Gitea / GitLab / bare repo).
✔️ Push your code to multiple remotes (GitHub + your own server) — quick sketch below.
✔️ Don't depend fully on GitHub Actions — have manual or server-based deployment ready.
✔️ Keep production deployment independent from third-party outages.
✔️ Automate locally or on your own server where possible.

Now my workflow is: Local → self-hosted Git → live servers. GitHub is secondary, not critical.

⚠️ With the growing use of AI tools and third-party automation inside CI/CD pipelines, complexity and risk are increasing. When one piece fails, everything can break. Better to stay in control.

How are you handling redundancy in your Git workflow?

#GitHub #DevOps #SelfHosted #Gitea #CI #CD #Security #ITInfrastructure
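The multi-remote tip is a one-time setup. A minimal sketch, with both URLs as placeholders:

# make a single "git push" publish to GitHub and your own server
git remote set-url --add --push origin git@github.com:you/repo.git
git remote set-url --add --push origin git@git.example.com:you/repo.git
git push   # updates both remotes in one command

One wrinkle worth knowing: once you add the first explicit push URL, the fetch URL is no longer used for pushes, which is why both lines are needed even for the original GitHub remote.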
-
Last week, someone wiped our entire codebase. The whole Bitbucket repository — replaced with a single commit: "Repository cleared." Every branch. Every commit. Every line of history. Gone.

For about 30 minutes, I just stared at the screen. Then I got to work.

Step 1: I found an old commit hash that was still cached on Bitbucket's servers.
Step 2: git fetch origin [that hash] — 2,059 objects came back.
Step 3: Force-pushed the recovered code to a new repo.

Full codebase? Recovered. We went from "everything is gone" to "everything is back" in the same day.

But here's the lesson that actually matters. After the recovery, I sent the team a list of 5 changes we need to make:

1. Branch protection rules — no one pushes directly to main.
2. Pull request reviews before any merge.
3. A minimum of 2 admins on every platform.
4. Regular backups — not "we should do this someday," but scheduled.
5. Access review across all platforms.

Recovery is great. But prevention is the actual job. The scariest moment wasn't discovering the code was gone. It was realizing we had no safeguards to stop it from happening in the first place.

#DevOps #Git #Bitbucket #IncidentResponse #CodeRecovery #SecurityByDesign #IntegrationEngineering #TechLeadership
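Roughly what those three steps look like in practice. The hash and the rescue remote here are placeholders, and step 2 only works while the commit objects are still cached server-side:

# Step 2: fetch the cached commit and everything reachable from it
git fetch origin <old-commit-hash>
git checkout -b recovered FETCH_HEAD

# Step 3: push the recovered history to a fresh repo
git remote add rescue git@bitbucket.org:team/new-repo.git
git push --force rescue recovered:main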
-
GitHub's merge queue silently rewrote main branch history on April 23rd.

The pattern: a PR shows a +29 / -34 diff. Reviewed, approved, queued. What lands is +245 / -1,137 — thousands of lines of already-shipped code quietly removed. Every merge after that stacks on the broken history. The UI shows nothing wrong.

GitHub says 2,800 PRs out of 4 million. One company reported 200+ on its own. Pick a number.

The part nobody's saying out loud: for history to get overwritten like this, something is force-pushing to main behind the scenes. Branch protection apparently doesn't apply to GitHub itself. Worth thinking about what else moves through that path silently.

The deeper issue isn't the bug. Bugs happen. The issue is that "distributed version control" became a single vendor's merge button for most of the industry, and the merge button lied for a day. Git itself was fine the whole time. It always is.

I run my own Gitea. Recommend it.

#GitHub #Git #DevOps #Gitea #SelfHosted #SoftwareEngineering
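Self-hosted or not, a rewritten main is detectable with a few lines of shell. A sketch, assuming origin/main is the branch you care about:

# detect non-fast-forward movement of origin/main
OLD=$(git rev-parse origin/main)
git fetch origin
NEW=$(git rev-parse origin/main)
if ! git merge-base --is-ancestor "$OLD" "$NEW"; then
  echo "origin/main was rewritten: $OLD -> $NEW"
fi

merge-base --is-ancestor exits non-zero whenever the old tip is no longer part of the new history, which is exactly what a force-push looks like from the outside.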
-
GitHub's 2026 Security Roadmap for Actions sets a new standard for secure CI/CD practices. With its focus on secure defaults and enhanced observability, the roadmap builds security directly into CI/CD pipelines rather than treating it as an afterthought, which is crucial for the integrity of the software delivery process: workflows that are secure by design, not merely efficient.

For those managing CI/CD pipelines, this roadmap is a call to action: evaluate the upcoming features now and plan how to integrate them. Staying ahead in security means adapting to these changes early and keeping your development processes aligned with the practices GitHub outlines.

For more details, see the full announcement on GitHub's official blog.

#GitHubActions #Security #CICD #DevOps #SoftwareDevelopment
-
I deleted the wrong file before my coffee kicked in. Again. ☕

Happy Git Friday! I can't even count the number of times this has happened. Early morning. Terminal open. Brain not fully online yet. I'm cleaning up a working directory, trimming files, moving things around. And then I realize I just deleted something I needed.

Not git rm. Just rm. Gone from the filesystem. No recycle bin. No undo. Just a blank stare at the terminal and the slow realization that the file I need for today's work is gone.

The first time it happened, I panicked. I started thinking about backup systems, recovery tools, whether I could rewrite it from memory. Then someone said: "It's in a git repo. Just check it out."

git checkout -- filename

The file came back. Exactly as it was the last time I staged it. Git had a copy in the index the entire time. I just didn't know how to ask for it. That one command has saved me more times than I'm willing to admit. All of them before the first cup of coffee.

The Danger Zone (When Deletions Feel Permanent):
🔹 rm deletes from the filesystem. Git doesn't know or care. But if the file was tracked and staged, the last staged version is still in the index. You can recover it.
🔹 git add on a deleted file stages the deletion. If you accidentally delete a file and then run git add ., you've told git you meant to delete it. Now recovery requires checking out from a commit, not just the index.
🔹 git rm is intentional. It removes the file AND stages the removal. That's a deliberate action. An accidental rm followed by git checkout -- is the recovery path for unintentional deletions.

❓ Question of the Day: You deleted a file in the working directory and want to recover it from the index. Which command do you use?
Ⓐ git checkout -- filename
Ⓑ git add filename
Ⓒ git rm filename
Ⓓ git recover filename

👇 Answer and breakdown in the comments!

#Git #GitOps #DevOps #DamnitRay #QOTD
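The two recovery paths from the danger zone, replayed on a hypothetical notes.txt:

# case 1: plain rm, deletion not yet staged; restore from the index
rm notes.txt
git checkout -- notes.txt

# case 2: deletion already staged with "git add ."; restore from the last commit
rm notes.txt
git add .
git checkout HEAD -- notes.txt

The HEAD variant pulls the file back from the last commit and resets the staged deletion in one go, at the cost of losing any staged-but-uncommitted edits to that file.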
-
𝗠𝗮𝗻𝗮𝗴𝗶𝗻𝗴 𝗺𝘂𝗹𝘁𝗶𝗽𝗹𝗲 𝗚𝗶𝘁𝗛𝘂𝗯 𝗮𝗰𝗰𝗼𝘂𝗻𝘁𝘀 𝘀𝗵𝗼𝘂𝗹𝗱𝗻’𝘁 𝗯𝗲 𝘁𝗵𝗶𝘀 𝗽𝗮𝗶𝗻𝗳𝘂𝗹… 𝗯𝘂𝘁 𝗶𝘁 𝗶𝘀.

Over the past few months, I kept running into a frustrating issue: using work + personal GitHub accounts on the same machine without breaking SSH or mixing identities. So I built a clean, repeatable SSH setup that solves:
• Authentication conflicts
• Wrong-account commits
• Broken push/pull workflows

What's inside the guide:
• Separate SSH keys per account
• Smart aliasing via ~/.ssh/config
• Per-repo Git identity setup
• Quick debugging checks

The goal was simple: 👉 make it predictable and production-safe, not just "works on my machine."

If you've ever pushed code from the wrong account… you know the pain. 😅

🔗 GitHub repo: https://lnkd.in/dFH75WvV
If this helps, consider giving the repo a ⭐

#github #git #ssh #developers #webdev #softwareengineering #opensource
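The core of a setup like this is usually just two pieces. A sketch with illustrative key names, aliases, and emails (not the repo's exact contents):

# ~/.ssh/config: one alias per account, each pinned to its own key
Host github-work
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_work
    IdentitiesOnly yes

Host github-personal
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_personal
    IdentitiesOnly yes

# clone through the alias, then set a per-repo identity
git clone git@github-work:acme/backend.git
cd backend
git config user.name "Work Name"
git config user.email "you@work.example"

IdentitiesOnly yes is the piece most people miss: without it, ssh may offer every key the agent holds, and GitHub logs you in as whichever account matches the first accepted key.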
-
Even the giants have "off" days: lessons from GitHub's merge queue regression.

GitHub recently confirmed a bug where roughly 2,800 pull requests were merged from the wrong base state, unintentionally reverting previous changes. While 0.07% sounds small, in production "small" percentages can mean major downtime.

Key takeaways for teams:
1) Automated testing is king: GitHub is already expanding test coverage for merge operations.
2) Trust, but verify: always keep an eye on your branch history after a merge, especially when using automated queues (a quick way to do that is sketched below).
3) Transparency wins: kudos to Kyle Daigle and the GitHub team for the quick RCA (root cause analysis) and direct outreach to affected users.

Have you ever encountered a "silent revert" in your workflow? How does your team guard against tool-level regressions?

#GitHub #DevOps #SoftwareEngineering #CICD #TechNews
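One lightweight "trust, but verify" habit, assuming origin/main is your protected branch:

# flag recent commits on main whose deletions dwarf their insertions
# (candidate silent reverts)
git fetch origin
git log --since="2 days ago" --oneline --shortstat origin/main

--shortstat prints a files-changed / insertions / deletions line under each commit, so a merge you reviewed as -34 lines landing as -1,137 jumps out at a glance.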
-
𝗬𝗼𝘂 𝗵𝗶𝗿𝗲𝗱 𝗮 𝗳𝘂𝗹𝗹-𝘁𝗶𝗺𝗲 𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗴𝘂𝗮𝗿𝗱 𝘁𝗼 𝗰𝗵𝗲𝗰𝗸 𝗶𝗳 𝘆𝗼𝘂𝗿 𝗳𝗿𝗼𝗻𝘁 𝗱𝗼𝗼𝗿 𝗶𝘀 𝗹𝗼𝗰𝗸𝗲𝗱. Every. Single. Night. That's Jenkins.

🏠 𝗧𝗛𝗘 𝗦𝗧𝗢𝗥𝗬
Jenkins is like owning a house with a dedicated security system — powerful, fully customizable, but 𝘆𝗼𝘂 maintain everything. The server. The plugins. The updates. The "why did it break at 2am" investigations.

GitHub Actions is like moving into a modern apartment building where security is just... included. No basement server to babysit. Your workflow file lives right next to your code. Push a commit, the pipeline wakes up. Done.

⚙️ 𝗧𝗛𝗘 𝗟𝗘𝗦𝗦𝗢𝗡
Here's what actually changes day-to-day: Jenkins requires you to provision and maintain a build server, manage plugin compatibility (which breaks more often than you'd expect), and context-switch between your repo and a separate UI. GitHub Actions gives you ephemeral runners — fresh environments spun up per job, then discarded. Your CI/CD config is a .yml file in the repo itself, versioned alongside the code it builds (a minimal example follows this post). Zero infrastructure to own unless you 𝘄𝗮𝗻𝘁 self-hosted runners for specific needs.

The operational overhead difference is real. A small team on Jenkins often has one person who "knows how it works." That's a risk, not a feature.

GitHub Actions isn't perfect — complex matrix builds and cost at scale are genuine pain points. But for most teams shipping software today, the default choice should be Actions, with Jenkins reserved for environments that 𝗿𝗲𝗾𝘂𝗶𝗿𝗲 deep customization or already have mature Jenkins infrastructure.

💬 𝗬𝗢𝗨𝗥 𝗧𝗨𝗥𝗡
Have you migrated from Jenkins to GitHub Actions — or gone the other direction? What was the deciding factor?

𝗪𝗮𝗻𝘁 𝘁𝗵𝗲 𝗱𝗮𝘁𝗮 𝗯𝗲𝗵𝗶𝗻𝗱 𝘁𝗵𝗶𝘀? JetBrains' 2026 report shows GitHub Actions leading CI/CD adoption at 33% of organizations, while EITT's deep dive breaks down the full TCO — maintenance cost alone makes Jenkins 60x more expensive for smaller teams. For SaaS teams specifically, Impressico found GitHub Actions cuts setup time by over 40% compared to Jenkins — dropping from days to hours. Links in the comments 👇

#DevOps #GitHubActions #Jenkins #SoftwareEngineering #Automation #CICD #TechCommunity #CloudComputing #Innovation #TechTrends TrainWithShubham
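For anyone who hasn't seen one, the entire "CI server" in the Actions model can be this small. The workflow name and build command are illustrative:

# .github/workflows/ci.yml, versioned with the code it builds
name: ci
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest   # ephemeral runner, discarded after the job
    steps:
      - uses: actions/checkout@v4
      - run: make test       # stand-in for your real build/test command

Compare that with standing up a Jenkins controller, agents, and a plugin stack before the first build ever runs.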
-
GitHub’s recent incident is every engineer’s nightmare:
✅ CI passed
✅ PR approved
✅ Merge successful
…and the wrong code still landed in main.

On April 23, GitHub confirmed that a merge queue issue affected 2,092 pull requests across 658 repositories, producing incorrect commits and silently reverting code in some cases.

Good reminder that “all checks passed” doesn’t always mean “everything is correct”.

#GitHub #SoftwareEngineering #DevOps #CodingLife