Last week, someone wiped our entire codebase. The whole Bitbucket repository — replaced with a single commit: "Repository cleared." Every branch. Every commit. Every line of history. Gone.

My first reaction? For about 30 minutes, I just stared at the screen. Then I got to work.

Step 1: I found an old commit hash that was still cached on Bitbucket's servers.
Step 2: git fetch origin [that hash] — 2,059 objects came back.
Step 3: Force-pushed the recovered code to a new repo.

Full codebase? Recovered. We went from "everything is gone" to "everything is back" in the same day.

But here's the lesson that actually matters. After the recovery, I sent the team a list of 5 changes we need to make:

1. Branch protection rules — no one pushes directly to main.
2. Pull request reviews before any merge.
3. Minimum 2 admins on every platform.
4. Regular backups — not "we should do this someday," but scheduled.
5. Access review across all platforms.

Recovery is great. But prevention is the actual job. The scariest moment wasn't discovering the code was gone. It was realizing we had no safeguards to stop it from happening in the first place.

#DevOps #Git #Bitbucket #IncidentResponse #CodeRecovery #SecurityByDesign #IntegrationEngineering #TechLeadership
Codebase Recovery and Lessons Learned on Bitbucket
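For anyone who ever lands in the same spot, here is a rough sketch of that recovery flow. It assumes the server still has the old objects and allows fetching a commit by SHA; the repo URLs and <OLD_COMMIT_HASH> are placeholders, not the exact commands from the incident.

```bash
# Sketch of the recovery flow above. <OLD_COMMIT_HASH> and both repo URLs
# are placeholders; this only works if the server still has the objects
# and permits fetching an arbitrary commit by hash.

# 1. Clone what's left (probably just the "Repository cleared" commit).
git clone git@bitbucket.org:team/wiped-repo.git
cd wiped-repo

# 2. Fetch the old commit by hash; everything reachable from it comes back.
git fetch origin <OLD_COMMIT_HASH>

# 3. Put a branch on the recovered commit and sanity-check the history.
git branch recovered <OLD_COMMIT_HASH>
git log --oneline recovered | head

# 4. Push the recovered history to a fresh repository.
git remote add rescue git@bitbucket.org:team/recovered-repo.git
git push rescue recovered:main
```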
🚨 Issues with #GitHub today? We’re seeing instability across the platform:

❌ Push & pull delays
❌ Pull Requests not loading
❌ Actions (CI/CD) failing or stuck
❌ Overall slow performance

This is not a local issue — it’s affecting multiple environments.

💡 What I did (and what I recommend): I moved to running my own Git server using Gitea Open Source — and honestly, this is something more teams should consider. https://git.xdeye.com/

👉 Here’s the practical advice:
✔️ Keep a self-hosted Git backup (Gitea / GitLab / bare repo).
✔️ Push your code to multiple remotes (GitHub + your own server).
✔️ Don’t depend fully on GitHub Actions — have manual or server-based deployment ready.
✔️ Keep production deployment independent from third-party outages.
✔️ Automate locally or on your own server where possible.

Now my workflow is: Local → self-hosted Git → live servers. GitHub is secondary, not critical.

⚠️ With the growing use of AI tools and third-party automation inside CI/CD pipelines, complexity and risk are increasing. When one piece fails, everything can break. Better to stay in control.

How are you handling redundancy in your Git workflow?

#GitHub #DevOps #SelfHosted #Gitea #CI #CD #Security #ITInfrastructure
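The "multiple remotes" point is a one-time setup. A minimal sketch, assuming an existing origin on GitHub and a self-hosted mirror (both URLs are placeholders):

```bash
# Make "git push origin" update GitHub and the self-hosted server together.
# Both URLs are placeholders for your own repositories.
git remote set-url --add --push origin git@github.com:you/project.git
git remote set-url --add --push origin git@git.example.com:you/project.git

# One push now lands on both remotes.
git push origin main

# Confirm both push URLs are registered.
git remote -v
```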
Easter weekend is when "please don't merge anything" gets tested. Someone ships a "small fix" Friday afternoon. The on-call phone rings Saturday morning. Half the team is debugging instead of doing whatever they planned.

The manual freeze: send a Slack message, hope everyone sees it, remind the one person who didn't, and still watch a PR slip through because branch protection doesn't care about your message.

NoShip turns "please don't merge" into an actual rule. Set a recurring freeze for holiday weekends and GitHub enforces it. PRs block automatically. No honor system required. If something genuinely needs to go out anyway, the dual-approval override workflow means two people sign off before a single PR gets through. Full audit trail captures who approved it and why.

Enjoy the long weekend.

#DevOps #GitHub #CodeFreeze #SRE #EngineeringLeadership #Easter #DeploymentSafety
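For teams that want to hand-roll the enforcement half of this (NoShip's own setup will differ), the crude version is a required status check that fails inside the freeze window. A generic sketch, with illustrative dates; it only guards PRs that are re-evaluated during the window:

```yaml
# .github/workflows/freeze.yml
# Mark the "freeze" job as a required status check in branch protection so
# pull requests cannot merge while it fails. Dates below are examples only.
name: holiday-freeze
on:
  pull_request:
jobs:
  freeze:
    runs-on: ubuntu-latest
    steps:
      - name: Block merges during the freeze window
        run: |
          today=$(date -u +%Y-%m-%d)
          start="2025-04-18"   # freeze begins (example date)
          end="2025-04-21"     # freeze ends (example date)
          # Lexicographic comparison works for ISO dates.
          if [[ ! "$today" < "$start" && ! "$today" > "$end" ]]; then
            echo "Code freeze in effect ($start to $end). Merges are blocked."
            exit 1
          fi
```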
𝗠𝗮𝗻𝗮𝗴𝗶𝗻𝗴 𝗺𝘂𝗹𝘁𝗶𝗽𝗹𝗲 𝗚𝗶𝘁𝗛𝘂𝗯 𝗮𝗰𝗰𝗼𝘂𝗻𝘁𝘀 𝘀𝗵𝗼𝘂𝗹𝗱𝗻’𝘁 𝗯𝗲 𝘁𝗵𝗶𝘀 𝗽𝗮𝗶𝗻𝗳𝘂𝗹… 𝗯𝘂𝘁 𝗶𝘁 𝗶𝘀.

Over the past few months, I kept running into a frustrating issue: using work + personal GitHub accounts on the same machine without breaking SSH or mixing identities. So I built a clean, repeatable SSH setup that solves the following:

• Authentication conflicts
• Wrong-account commits
• Broken push/pull workflows

What’s inside the guide:

• Separate SSH keys per account
• Smart aliasing via ~/.ssh/config
• Per-repo Git identity setup
• Quick debugging checks

The goal was simple: 👉 Make it predictable and production-safe, not just “works on my machine.”

If you’ve ever pushed code from the wrong account… you know the pain. 😅

🔗 GitHub repo: https://lnkd.in/dFH75WvV

If this helps, consider giving the repo a ⭐

#github #git #ssh #developers #webdev #softwareengineering #opensource
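The heart of a setup like this (a sketch of the shape, not the exact files from the repo) is one ~/.ssh/config alias per account plus a per-repo identity. Key paths, aliases, and names below are placeholders:

```
# ~/.ssh/config — one alias per GitHub account
Host github-work
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_work
    IdentitiesOnly yes

Host github-personal
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_personal
    IdentitiesOnly yes
```

```bash
# Clone through the alias so the right key is used,
# then pin the commit identity for that repo.
git clone git@github-work:acme/backend.git
cd backend
git config user.name  "Your Name"
git config user.email "you@acme.com"

# Sanity check: which account does each alias authenticate as?
ssh -T git@github-work
ssh -T git@github-personal
```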
Critical infrastructure like GitHub having this kind of incident would have been unthinkable 3-4 years ago, and their uptime lately hasn't been great either. What changed? I do sympathise with them: the huge amount of code generated by agents and AI is being pushed at a scale that couldn't be imagined before. But even that shouldn't be an excuse for errors like this. Git and GitHub are too critical for these kinds of goof-ups.
GitHub had an incident where merge queue commits were reverting previously merged commits at random. Crazy. Count yourself lucky if you didn’t get this email.
How many times have you pushed a workflow file only to have the CI fail because of a typo, a missing job reference, or a circular dependency? PipeChecker catches those problems before they reach your repo.

🔍 What it checks:
✅ Circular job dependencies (via Tarjan's SCC algorithm)
✅ Missing needs / depends_on references
✅ Hardcoded secrets & undeclared env vars
✅ Unpinned GitHub Actions & Docker :latest tags
✅ Empty or malformed pipelines

🛠 Built with Rust for speed, with support for GitHub Actions, GitLab CI, and CircleCI.
📦 Available on crates.io, npm, and as standalone binaries (Linux, macOS, Windows).
🔗 GitHub: https://lnkd.in/gXu_-a7e

Would love feedback from the DevOps / platform engineering community. What's the worst CI/CD config mistake you've shipped to prod? 😅

#Rust #DevOps #CICD #GitHubActions #PlatformEngineering #CLI #OpenSource
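For readers wondering what these failures look like in practice, here is a hypothetical GitHub Actions workflow with two of the issues listed above: a needs reference to a job that doesn't exist and an unpinned :latest image. (Illustrative only, not output from the tool.)

```yaml
# Hypothetical workflow illustrating two of the checks listed above.
name: ci
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    needs: build              # "build" is never defined -> missing needs reference
    container: node:latest    # mutable :latest tag -> unpinned image
    steps:
      - uses: actions/checkout@v4   # safer: pin actions to a full commit SHA
      - run: npm test
```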
It’s 2 AM. Your host fails. You realize your last Portainer backup is from three months ago and sitting in your "Downloads" folder. 😱 We’ve all been there. The "manual click" is the enemy of reliability.

I decided to treat my HomeLab like a production environment by automating my 𝗣𝗼𝗿𝘁𝗮𝗶𝗻𝗲𝗿 𝗕𝗮𝗰𝗸𝘂𝗽 & 𝗥𝗲𝗰𝗼𝘃𝗲𝗿𝘆 with Ansible.

𝗧𝗵𝗲 𝗣𝗿𝗼𝗯𝗹𝗲𝗺: Manual exports are inconsistent. You forget. You miss a stack. You lose the docker-compose logic that took hours to fine-tune.

𝗧𝗵𝗲 𝗦𝗼𝗹𝘂𝘁𝗶𝗼𝗻:
✅ 𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗲𝗱 𝗦𝘆𝗻𝗰: My Ansible Playbook pulls every stack configuration and Docker volume automatically.
✅ 𝗦𝗺𝗮𝗿𝘁 𝗖𝗼𝗺𝗺𝗶𝘁𝘀: Every backup creates a commit in my private Git repo. The best part? The entire session log is saved in the commit description (see the screenshots!).
✅ 𝗧𝗼𝘁𝗮𝗹 𝗔𝘄𝗮𝗿𝗲𝗻𝗲𝘀𝘀: If a backup fails, I get a ping on Mattermost before I even finish my coffee. ☕

Now, if a host goes down, I don't panic. I just run the restore.sh and walk away. That’s the power of IaC.

𝗗𝗼𝗲𝘀 𝘆𝗼𝘂𝗿 𝗯𝗮𝗰𝗸𝘂𝗽 𝗽𝗹𝗮𝗻 𝗽𝗮𝘀𝘀 𝘁𝗵𝗲 "𝗦𝗹𝗲𝗲𝗽 𝗪𝗲𝗹𝗹 𝗮𝘁 𝗡𝗶𝗴𝗵𝘁" 𝘁𝗲𝘀𝘁? 𝗟𝗲𝘁’𝘀 𝘁𝗮𝗹𝗸 𝗮𝗯𝗼𝘂𝘁 𝗗𝗼𝗰𝗸𝗲𝗿 𝗿𝗲𝗰𝗼𝘃𝗲𝗿𝘆 𝘀𝘁𝗿𝗮𝘁𝗲𝗴𝗶𝗲𝘀 𝗶𝗻 𝘁𝗵𝗲 𝗰𝗼𝗺𝗺𝗲𝗻𝘁𝘀!

#HomeLab #SysAdmin #Ansible #Docker #TechCommunity #GitOps
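A stripped-down sketch of the same idea, not the author's actual playbook: archive the data, commit it to a private Git repo, and alert on failure. Hosts, paths, and the webhook URL are placeholders, and a real setup would also export stacks via the Portainer API.

```yaml
# portainer-backup.yml — minimal sketch, not the full playbook described above.
- hosts: docker_hosts
  become: true
  tasks:
    - block:
        - name: Archive Portainer data and compose files
          ansible.builtin.shell: |
            set -e
            tar czf /backups/portainer-$(date +%F).tar.gz /opt/portainer/data /opt/stacks

        - name: Commit and push the backup to a private Git repo
          ansible.builtin.shell: |
            set -e
            cd /backups
            git add -A
            git commit -m "backup $(date -u +%FT%TZ)" || echo "nothing new to commit"
            git push
      rescue:
        - name: Ping Mattermost when a backup step fails
          ansible.builtin.uri:
            url: "https://mattermost.example.com/hooks/REPLACE_ME"
            method: POST
            body_format: json
            body:
              text: "Portainer backup failed on {{ inventory_hostname }}"
```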
GitHub broke customers' codebases with a change it (a) didn't spot itself and (b) never meant to hit live systems. Its COO is playing down the scale of impact, saying testing failed to catch the "edge case" that impacted 2,000+. 👉 https://lnkd.in/e36VFXpV #github #qualityassurance #qa #featureflag #whoops
GitHub Branch Protection: Advanced Rules for Status Check Dependencies Master advanced branch protection configurations that go beyond basic reviews. Learn to set up dependent status checks, automatic review dismissal, and linear history enforcement for enterprise-grade code quality control. Read the full how-to guide: https://lnkd.in/gB3fSePc #ITTips #Productivity #DevOps #GitHub #TechTips #OpenSource #SoftwareDevelopment #BranchProtection #CodeQuality #GitWorkflow
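The guide covers the settings themselves; if you would rather script them than click through the UI, the same rules can be applied through GitHub's branch protection REST API. A sketch, with OWNER/REPO and the check names as placeholders:

```bash
# Apply required status checks, stale-review dismissal, and linear history
# to main. OWNER/REPO and the "contexts" entries are placeholders.
cat > protection.json <<'EOF'
{
  "required_status_checks": { "strict": true, "contexts": ["build", "test"] },
  "enforce_admins": true,
  "required_pull_request_reviews": {
    "dismiss_stale_reviews": true,
    "required_approving_review_count": 2
  },
  "required_linear_history": true,
  "restrictions": null
}
EOF

gh api -X PUT repos/OWNER/REPO/branches/main/protection --input protection.json
```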
GitHub's merge queue silently rewrote main branch history on April 23rd.

The pattern: PR shows a +29 / -34 diff. Reviewed, approved, queued. What lands is +245 / -1,137 — thousands of lines of already-shipped code quietly removed. Every merge after that stacks on the broken history. UI shows nothing wrong.

GitHub says 2,800 PRs out of 4 million. One company reported 200+ on its own. Pick a number.

The part nobody's saying out loud: for history to get overwritten like this, something is force-pushing to main behind the scenes. Branch protection apparently doesn't apply to GitHub itself. Worth thinking about what else moves through that path silently.

The deeper issue isn't the bug. Bugs happen. The issue is that "distributed version control" became a single vendor's merge button for most of the industry, and the merge button lied for a day. Git itself was fine the whole time. It always is.

I run my own Gitea. Recommend it.

#GitHub #Git #DevOps #Gitea #SelfHosted #SoftwareEngineering
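Whatever the root cause turns out to be, one cheap local tripwire: before building on a freshly fetched main, check that the remote branch still contains the commit you last built from. A sketch, assuming the remote is origin, the branch is main, and a .last-known-main file is used to remember the previous tip:

```bash
#!/usr/bin/env bash
# Fail loudly if origin/main no longer contains our last-known tip,
# i.e. its history was rewritten rather than fast-forwarded.
set -euo pipefail

last_known=$(cat .last-known-main 2>/dev/null || true)
git fetch origin main

if [[ -n "$last_known" ]] && ! git merge-base --is-ancestor "$last_known" origin/main; then
  echo "WARNING: origin/main no longer contains $last_known — history was rewritten." >&2
  exit 1
fi

# Remember the current tip for the next run.
git rev-parse origin/main > .last-known-main
```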
Sometimes the most frustrating CI/CD pipeline failures come down to the simplest directory contexts.

I was recently migrating a massive batch of microservices from Jenkins to GitHub Actions. Everything looked perfect in the YAML, but the deployment step kept failing or timing out. The issue? The Docker container was building successfully, but it was missing critical environment variables.

In our legacy Jenkins setup, the bash script explicitly navigated into the service directory (cd my-service) before copying the .env.prod file and running the Docker build. When translating that to GitHub Actions, that subtle path logic got lost. The runner was executing commands in the root directory, silently failing to copy the .env file into the build context.

The fix was incredibly simple but easy to miss: explicitly setting the 𝘄𝗼𝗿𝗸𝗶𝗻𝗴-𝗱𝗶𝗿𝗲𝗰𝘁𝗼𝗿𝘆 𝗸𝗲𝘆 𝗮𝘁 𝘁𝗵𝗲 𝘀𝘁𝗲𝗽 𝗹𝗲𝘃𝗲𝗹 𝘁𝗼 𝗲𝗻𝘀𝘂𝗿𝗲 𝘁𝗵𝗲 𝗰𝗽 𝗰𝗼𝗺𝗺𝗮𝗻𝗱 and the docker build command ran in the exact same context as the Dockerfile.

What I learned: when migrating legacy pipelines to a modern CI tool, don't just copy-paste the shell commands - verify the execution context of every single step.

What is the weirdest bug you have run into while migrating CI/CD platforms?

#CICD #DevOpsEngineering #GitHubActions #Jenkins #PlatformEngineering #TechCareers
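For anyone hitting the same wall, the shape of the fix looks roughly like this. Service name, file names, and the image tag are placeholders, not the actual pipeline:

```yaml
# Before, the run step executed from the repo root, so .env.prod was never
# copied into the build context. Pinning working-directory on the step fixes that.
- name: Build the service image
  working-directory: my-service
  run: |
    cp .env.prod .env                         # now resolves relative to my-service/
    docker build -t my-service:${{ github.sha }} .
```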