Easter weekend is when "please don't merge anything" gets tested. Someone ships a "small fix" Friday afternoon. The on-call phone rings Saturday morning. Half the team is debugging instead of doing whatever they planned. The manual freeze: send a Slack message, hope everyone sees it, remind the one person who didn't, and still watch a PR slip through because branch protection doesn't care about your message. NoShip turns "please don't merge" into an actual rule. Set a recurring freeze for holiday weekends and GitHub enforces it. PRs block automatically. No honor system required. If something genuinely needs to go out anyway, the dual-approval override workflow means two people sign off before a single PR gets through. Full audit trail captures who approved it and why. Enjoy the long weekend. #DevOps #GitHub #CodeFreeze #SRE #EngineeringLeadership #Easter #DeploymentSafety
Preventing Merges on Easter Weekend with NoShip
More Relevant Posts
It's Friday afternoon. Someone just opened a PR. Your freeze policy says no merges after 3pm Friday. But it's not enforced anywhere. It's just a rule people vaguely know about. So the PR sits there, and someone with merge access makes a judgment call. This is how most "no Friday deploys" policies actually work: tribal knowledge, good intentions, and crossed fingers. Works fine until it doesn't. NoShip turns that policy into a required status check on GitHub. No merges get through during a freeze, full stop. No judgment calls required. You can even ask it in Slack: "freeze all repos every Friday at 3pm for 48 hours" and it'll set the recurring schedule. Done. Have a good weekend. #DevOps #GitHub #CodeFreeze #SRE #PlatformEngineering #DeploymentSafety #EngineeringLeadership
If your freeze governance can be bypassed by anyone with merge permissions, it's not governance. It's a policy suggestion. Most teams enforce code freezes through branch protection rules that admins can override. Or worse — a Slack message that says "please don't merge." The problem isn't that people ignore the freeze intentionally. It's that the system allows them to. Real governance means the platform itself prevents unauthorized changes. Not a wiki page. Not a calendar event. Not "please check with the team lead first." At NoShip, we enforce freezes at the GitHub layer — merge blocking via required status checks and deployment blocking via native Deployment Protection Rules. No one merges during a freeze unless they go through a documented, dual-approval emergency override workflow. That's the difference between a policy and a control. #DevOps #GitHub #CodeFreeze #SRE #PlatformEngineering #DeploymentSafety #EngineeringLeadership #ChangeManagement #Compliance
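NoShip's own integration isn't shown here, but the GitHub mechanism the post names — merge blocking via required status checks — can be sketched with the commit-status REST API. This is a minimal illustration, not NoShip's implementation; the owner/repo names, token handling, and the `freeze-check` context name are all assumptions. When branch protection lists a context as a required status check, posting a `failure` state for that context on a PR's head commit disables the merge button:

```python
import json
import urllib.request

API = "https://api.github.com"

def freeze_status_request(owner: str, repo: str, sha: str, token: str,
                          frozen: bool) -> urllib.request.Request:
    """Build the commit-status call that flips a `freeze-check` context.

    If branch protection requires the `freeze-check` status, a `failure`
    state blocks merging of any PR whose head commit carries it.
    (A real service would use webhooks to cover every open PR head.)
    """
    body = {
        "state": "failure" if frozen else "success",
        "context": "freeze-check",
        "description": "Code freeze in effect" if frozen else "No active freeze",
    }
    return urllib.request.Request(
        f"{API}/repos/{owner}/{repo}/statuses/{sha}",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        method="POST",
    )

# Sending it for real (requires a token with repo:status scope):
#   urllib.request.urlopen(freeze_status_request("acme", "api", head_sha, token, True))
```

The key design point is that enforcement lives in GitHub's merge gate, not in anyone's memory: once the context is required, no amount of "it's just a small fix" gets past a red check.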
Hard freeze: the system won't let you merge. Soft freeze: "please don't merge." Guess which one works. Every "Slack-message-and-hope" freeze I've seen eventually gets violated. Sometimes by a well-meaning engineer who missed the thread. Sometimes by a contractor who isn't even in the channel. Sometimes by the merge queue itself, which doesn't read Slack at all. The fix isn't better communication. It's a required status check that says no. NoShip turns your freeze into a GitHub check that blocks merges at the source — across every repo, every branch, every environment. Policy becomes control. No honor system required. #CodeFreeze #DevOps #GitHub #SRE #PlatformEngineering #DeploymentSafety #EngineeringLeadership #ChangeControl
Easter Sunday is not the time to find out your deploy pipeline is still open. We've all seen it. A PR gets merged late Friday "just to get it in." By Sunday someone's getting paged. The on-call engineer is not happy. NoShip lets you set a recurring freeze that kicks in automatically every holiday weekend. Define the window once, and GitHub enforces it. No Slack reminders. No honor system. No "I thought someone else handled it." Set it. Forget it. Enjoy the long weekend. #DevOps #GitHub #CodeFreeze #SRE #PlatformEngineering #DeploymentSafety #Easter
🚨 Is GitHub's reliability hurting your team? I've been talking with many customers recently, and a common theme keeps coming up — frustration with GitHub's service health. Outages, degraded performance, and uncertainty around uptime are slowing teams down. If that sounds familiar, there's a path forward. In 3 days, I'll be running a free workshop walking through how to migrate from GitHub to GitLab — step by step, no guesswork. You'll leave with a clear migration plan, practical tips, and confidence to make the switch. 👉 Interested? Join us here: https://lnkd.in/d-ckV-9G Quinten Dismukes, Colin Stevenson, Thiago Magro, Adrian Tigert #GitLab #GitHub #DevOps #Migration #Workshop
Managing multiple GitHub accounts shouldn't be this painful… but it is. Over the past few months, I kept running into a frustrating issue: using work + personal GitHub accounts on the same machine without breaking SSH or mixing identities. So I built a clean, repeatable SSH setup that solves the following: • Authentication conflicts • Wrong-account commits • Broken push/pull workflows. What's inside the guide: • Separate SSH keys per account • Smart aliasing via ~/.ssh/config • Per-repo Git identity setup • Quick debugging checks. The goal was simple: 👉 Make it predictable and production-safe, not just "works on my machine." If you've ever pushed code from the wrong account… you know the pain. 😅 🔗 GitHub repo: https://lnkd.in/dFH75WvV If this helps, consider giving the repo a ⭐ #github #git #ssh #developers #webdev #softwareengineering #opensource
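The linked repo's exact contents aren't reproduced here; a minimal sketch of the aliasing pattern the post describes, assuming key files named `id_ed25519_work` and `id_ed25519_personal` (both names are illustrative):

```
# ~/.ssh/config: one Host alias per GitHub identity
Host github-work
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_work
    IdentitiesOnly yes          # never fall back to other loaded keys

Host github-personal
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_personal
    IdentitiesOnly yes
```

Clone through the alias (`git clone git@github-work:org/repo.git`), set the matching identity per repo (`git config user.name` / `git config user.email` inside that clone), and sanity-check which key answers with `ssh -T git@github-work`.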
Last week, someone wiped our entire codebase. The whole Bitbucket repository — replaced with a single commit: "Repository cleared." Every branch. Every commit. Every line of history. Gone. For about 30 minutes, I just stared at the screen. Then I got to work. Step 1: I found an old commit hash that was still cached on Bitbucket's servers. Step 2: git fetch origin [that hash] — 2,059 objects came back. Step 3: Force-pushed the recovered code to a new repo. Full codebase? Recovered. We went from "everything is gone" to "everything is back" in the same day. But here's the lesson that actually matters: after the recovery, I sent the team a list of 5 changes we need to make: 1) Branch protection rules — no one pushes directly to main. 2) Pull request reviews before any merge. 3) Minimum 2 admins on every platform. 4) Regular backups — not "we should do this someday," but scheduled. 5) Access review across all platforms. Recovery is great. But prevention is the actual job. The scariest moment wasn't discovering the code was gone. It was realizing we had no safeguards to stop it from happening in the first place. #DevOps #Git #Bitbucket #IncidentResponse #CodeRecovery #SecurityByDesign #IntegrationEngineering #TechLeadership
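The actual Bitbucket repo and hash aren't shown, but the recovery steps above can be reproduced end to end against a throwaway local "origin". This is a self-contained sketch of the mechanics, not the original incident: it simulates the wipe, then fetches the orphaned commit back by hash. One assumption is that the server allows fetching unadvertised objects, which `uploadpack.allowAnySHA1InWant` enables here and which hosted services permit in varying degrees.

```shell
set -eu
tmp=$(mktemp -d)
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

# Stand-in for the hosted remote.
git init -q --bare -b main "$tmp/origin.git"
# Hosts keep recently-pushed objects around; this flag lets us fetch one by raw hash.
git -C "$tmp/origin.git" config uploadpack.allowAnySHA1InWant true

# Real history, then the "Repository cleared" force-push that wiped it.
git clone -q "$tmp/origin.git" "$tmp/work"
git -C "$tmp/work" commit -q --allow-empty -m "real work"
good=$(git -C "$tmp/work" rev-parse HEAD)            # Step 1: a known old hash
git -C "$tmp/work" push -q origin HEAD:main
git -C "$tmp/work" checkout -q --orphan wipe
git -C "$tmp/work" commit -q --allow-empty -m "Repository cleared"
git -C "$tmp/work" push -qf origin wipe:main

# Step 2: from a fresh clone, fetch the orphaned commit back by hash.
git clone -q "$tmp/origin.git" "$tmp/recover"
git -C "$tmp/recover" fetch -q origin "$good"
git -C "$tmp/recover" branch -q restored "$good"     # Step 3: push this wherever it needs to live
git -C "$tmp/recover" log -1 --format=%s restored    # prints: real work
```

The window for this trick is limited: once the host garbage-collects unreachable objects, fetch-by-hash stops working, which is exactly why the post's point 4 (scheduled backups) matters.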
Q2 just started. Is your team ready for the release crunch? Every quarter, the same pattern plays out: feature branches pile up, someone merges to main during a critical deploy window, and the on-call engineer's Friday night is ruined. NoShip gives your team a single source of truth for code freezes — enforced directly through GitHub's required status checks. No honor system. No "please don't merge" messages in Slack. Here's what teams are using to stay safe this quarter: → Recurring freeze schedules (RRULE-powered) so your weekly deploy windows are always protected → Emergency overrides with approval workflows for when you actually need to ship that hotfix → An AI assistant that lets engineers manage freezes with plain English — in Slack or the web dashboard → Full audit trail so you always know who froze what, when, and why Stop relying on calendar reminders and Slack announcements. Start enforcing freeze discipline at the GitHub level. #DevOps #GitHub #CodeFreeze #SRE #PlatformEngineering #DeploymentSafety #EngineeringLeadership #Q2Planning #ReleaseManagement
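The post mentions RRULE-powered recurring schedules; NoShip's implementation isn't shown, but the core window check is simple to sketch in stdlib Python. This assumes the Friday-3pm-for-48-hours policy used as an example elsewhere in the thread; the constants are illustrative, and a production version would handle time zones and arbitrary RRULEs:

```python
from datetime import datetime, timedelta

FREEZE_START_WEEKDAY = 4            # Friday (Monday == 0)
FREEZE_START_HOUR = 15              # 3pm
FREEZE_DURATION = timedelta(hours=48)

def in_freeze(now: datetime) -> bool:
    """True if `now` falls inside the weekly Fri-3pm + 48h freeze window."""
    # Find the most recent Friday 15:00 at or before `now`.
    days_back = (now.weekday() - FREEZE_START_WEEKDAY) % 7
    start = (now - timedelta(days=days_back)).replace(
        hour=FREEZE_START_HOUR, minute=0, second=0, microsecond=0)
    if start > now:                 # it's Friday, but before 3pm
        start -= timedelta(days=7)
    return now - start < FREEZE_DURATION
```

A scheduler evaluates this on every PR event and fails the required status check whenever it returns True, so the window is enforced even if no one is at a keyboard.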
Critical infrastructure like GitHub having this kind of incident would have been unthinkable 3-4 years ago, and their uptime lately hasn't been great either. What changed? I do sympathize with them given the huge volume of agent- and AI-generated code now being pushed at a scale that couldn't be imagined before, but even that shouldn't be an excuse for errors like this. Git and GitHub are too critical for goof-ups like these.
GitHub had an incident where merge queue commits were reverting previously merged commits at random. Crazy. Count yourself lucky if you didn’t get this email.
The worst part of an unenforced freeze isn't the bad deploy. It's the Slack message the next morning. "Hey, I didn't see the freeze announcement, I merged #2847 last night. Do we need to revert?" Now someone has to investigate. Was it deployed? Did it break anything? Do we revert or roll forward? Is the freeze still on? The person who merged feels bad. Their teammate feels annoyed. The whole thing was avoidable. If the merge button doesn't work during a freeze, none of this happens. #DevOps #GitHub #CodeFreeze #SRE #PlatformEngineering #DeploymentSafety