Enforce Code Freezes with Real Governance

If your freeze governance can be bypassed by anyone with merge permissions, it's not governance. It's a policy suggestion. Most teams enforce code freezes through branch protection rules that admins can override. Or worse, a Slack message that says "please don't merge." The problem isn't that people ignore the freeze intentionally. It's that the system allows them to. Real governance means the platform itself prevents unauthorized changes. Not a wiki page. Not a calendar event. Not "please check with the team lead first." At NoShip, we enforce freezes at the GitHub layer: merge blocking via required status checks and deployment blocking via native Deployment Protection Rules. No one merges during a freeze unless they go through a documented, dual-approval emergency override workflow. That's the difference between a policy and a control. #DevOps #GitHub #CodeFreeze #SRE #PlatformEngineering #DeploymentSafety #EngineeringLeadership #ChangeManagement #Compliance
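Concretely, the merge-blocking half of that pattern works like this: a job posts a failing commit status on the PR head while a freeze is active, and branch protection lists that status context as required, so GitHub itself refuses the merge. A minimal sketch against GitHub's commit status API; the owner, repo, SHA, and the "freeze-check" context name are illustrative placeholders, not NoShip's internals:

```bash
# Post a failing status for the PR head SHA while a freeze is active.
# Mark the "freeze-check" context as a required status check in branch
# protection and GitHub will block the merge whenever it is "failure".
OWNER=acme REPO=api SHA=<pr-head-sha>   # placeholders
curl -s -X POST \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  "https://api.github.com/repos/$OWNER/$REPO/statuses/$SHA" \
  -d '{"state":"failure","context":"freeze-check","description":"Code freeze active until Mon 09:00 UTC"}'
```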
More Relevant Posts
Easter weekend is when "please don't merge anything" gets tested. Someone ships a "small fix" Friday afternoon. The on-call phone rings Saturday morning. Half the team is debugging instead of doing whatever they planned. The manual freeze: send a Slack message, hope everyone sees it, remind the one person who didn't, and still watch a PR slip through because branch protection doesn't care about your message. NoShip turns "please don't merge" into an actual rule. Set a recurring freeze for holiday weekends and GitHub enforces it. PRs block automatically. No honor system required. If something genuinely needs to go out anyway, the dual-approval override workflow means two people sign off before a single PR gets through. Full audit trail captures who approved it and why. Enjoy the long weekend. #DevOps #GitHub #CodeFreeze #SRE #EngineeringLeadership #Easter #DeploymentSafety
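For a sense of what "GitHub enforces it" can reduce to under the hood, here is a hedged sketch of a holiday-window gate: a check that fails while today falls inside any listed freeze window. The file name and date format are illustrative, not NoShip's actual configuration:

```bash
#!/usr/bin/env bash
# Block merges when today (UTC) falls inside a listed freeze window.
# freeze-windows.txt holds one "start end" pair per line,
# e.g. "2026-04-03 2026-04-06" for an Easter weekend.
TODAY=$(date -u +%F)
while read -r START END; do
  # ISO dates compare correctly as strings
  if ! [[ "$TODAY" < "$START" ]] && ! [[ "$TODAY" > "$END" ]]; then
    echo "Holiday freeze active ($START through $END): merges blocked"
    exit 1
  fi
done < freeze-windows.txt
echo "No freeze active"
```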
It's Friday afternoon. Someone just opened a PR. Your freeze policy says no merges after 3pm Friday. But it's not enforced anywhere. It's just a rule people vaguely know about. So the PR sits there, and someone with merge access makes a judgment call. This is how most "no Friday deploys" policies actually work: tribal knowledge, good intentions, and crossed fingers. Works fine until it doesn't. NoShip turns that policy into a required status check on GitHub. No merges get through during a freeze, full stop. No judgment calls required. You can even ask it in Slack: "freeze all repos every Friday at 3pm for 48 hours" and it'll set the recurring schedule. Done. Have a good weekend. #DevOps #GitHub #CodeFreeze #SRE #PlatformEngineering #DeploymentSafety #EngineeringLeadership
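That "every Friday at 3pm for 48 hours" rule is simple enough to express as a shell gate wired to a required status check. A minimal sketch, assuming UTC and a nonzero exit failing the check (this is an illustration of the idea, not NoShip's implementation):

```bash
#!/usr/bin/env bash
# Fail (and therefore block merges) from Friday 15:00 UTC
# until Sunday 15:00 UTC, i.e. a 48-hour weekend freeze.
dow=$(date -u +%u)    # 1=Mon ... 5=Fri, 6=Sat, 7=Sun
hour=$(date -u +%H)
if { [ "$dow" -eq 5 ] && [ "$hour" -ge 15 ]; } || \
   [ "$dow" -eq 6 ] || \
   { [ "$dow" -eq 7 ] && [ "$hour" -lt 15 ]; }; then
  echo "Weekend freeze active: no merges until Sunday 15:00 UTC"
  exit 1
fi
echo "No freeze active"
```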
Last week, someone wiped our entire codebase. The whole Bitbucket repository, replaced with a single commit: "Repository cleared." Every branch. Every commit. Every line of history. Gone. For about 30 minutes, I just stared at the screen. Then I got to work.
Step 1: I found an old commit hash that was still cached on Bitbucket's servers.
Step 2: git fetch origin [that hash], and 2,059 objects came back.
Step 3: Force-pushed the recovered code to a new repo.
Full codebase? Recovered. We went from "everything is gone" to "everything is back" in the same day. But here's the lesson that actually matters. After the recovery, I sent the team a list of 5 changes we need to make:
1. Branch protection rules: no one pushes directly to main.
2. Pull request reviews before any merge.
3. Minimum 2 admins on every platform.
4. Regular backups: not "we should do this someday," but scheduled.
5. Access review across all platforms.
Recovery is great. But prevention is the actual job. The scariest moment wasn't discovering the code was gone. It was realizing we had no safeguards to stop it from happening in the first place. #DevOps #Git #Bitbucket #IncidentResponse #CodeRecovery #SecurityByDesign #IntegrationEngineering #TechLeadership
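For anyone who ever needs this, here is a reconstruction of those three steps with a placeholder hash (1a2b3c4d stands in for the real one; the new-repo URL is also hypothetical). One caveat: whether a raw fetch-by-hash succeeds depends on the server's upload-pack settings, so treat this as a sketch of the happy path:

```bash
# Step 2: pull the cached objects back by commit hash
git fetch origin 1a2b3c4d

# Anchor the recovered history on a local branch
git branch recovered 1a2b3c4d

# Step 3: publish the recovered code to a fresh repository
git push --force git@bitbucket.org:team/new-repo.git recovered:main
```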
Managing multiple GitHub accounts shouldn't be this painful… but it is. Over the past few months, I kept running into a frustrating issue: using work + personal GitHub accounts on the same machine without breaking SSH or mixing identities. So I built a clean, repeatable SSH setup that solves the following:
• Authentication conflicts
• Wrong-account commits
• Broken push/pull workflows
What's inside the guide:
• Separate SSH keys per account
• Smart aliasing via ~/.ssh/config
• Per-repo Git identity setup
• Quick debugging checks
The goal was simple: 👉 make it predictable and production-safe, not just "works on my machine." If you've ever pushed code from the wrong account… you know the pain. 😅
🔗 GitHub repo: https://lnkd.in/dFH75WvV
If this helps, consider giving the repo a ⭐ #github #git #ssh #developers #webdev #softwareengineering #opensource
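For reference, a minimal version of that aliasing pattern (key paths, host aliases, repo, and email here are illustrative; the linked repo has the full guide):

```bash
# Add one SSH host alias per account (paths/aliases are placeholders)
cat >> ~/.ssh/config <<'EOF'
Host github-work
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_work
    IdentitiesOnly yes

Host github-personal
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_personal
    IdentitiesOnly yes
EOF

# Clone via the alias, then pin the matching identity per repo
git clone git@github-work:acme/api.git
cd api && git config user.email "you@company.com"
```

IdentitiesOnly is the piece most setups miss: without it, ssh may offer every loaded key and authenticate as the wrong account.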
Hard freeze: the system won't let you merge. Soft freeze: "please don't merge." Guess which one works. Every "Slack-message-and-hope" freeze I've seen eventually gets violated. Sometimes by a well-meaning engineer who missed the thread. Sometimes by a contractor who isn't even in the channel. Sometimes by the merge queue itself, which doesn't read Slack at all. The fix isn't better communication. It's a required status check that says no. NoShip turns your freeze into a GitHub check that blocks merges at the source — across every repo, every branch, every environment. Policy becomes control. No honor system required. #CodeFreeze #DevOps #GitHub #SRE #PlatformEngineering #DeploymentSafety #EngineeringLeadership #ChangeControl
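The enforcement side of that is plain branch protection: the freeze context goes into the required status checks, and GitHub refuses the merge while it fails. A hedged sketch using GitHub's official gh CLI; "freeze-check" and the owner/repo variables are hypothetical, and the exact protection settings are a policy choice, not NoShip's documented config:

```bash
# Require the freeze context on main; enforce_admins closes the
# "an admin can just override it" loophole from soft freezes.
gh api -X PUT "repos/$OWNER/$REPO/branches/main/protection" \
  -H "Accept: application/vnd.github+json" \
  --input - <<'EOF'
{
  "required_status_checks": { "strict": true, "contexts": ["freeze-check"] },
  "enforce_admins": true,
  "required_pull_request_reviews": null,
  "restrictions": null
}
EOF
```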
Git Branching in DevOps: What the Errors Taught Me More Than the Task Itself
Today I completed a Git branching task on KodeKloud. The goal was simple: create a new branch xfusioncorp_media from master on a shared Linux server, and push it to the remote repository without touching a single line of code. Sounds easy, right? 😅 Here's what actually happened. I ran into THREE errors back to back:
🔴 fatal: detected dubious ownership. Git didn't trust the directory because it was owned by a different user.
🔴 Permission denied on .git/index.lock. I didn't have write access to the repo.
🔴 remote unpack failed: unable to create temporary object directory. The remote bare repo had the same permission issue.
I fixed each one, pushed the branch, saw * [new branch] xfusioncorp_media -> xfusioncorp_media in my terminal... and STILL failed twice before finally getting that green tick. 🥲 But here is what stopped me cold when I thought about it deeper 👇
In real production, these same errors are not just inconveniences. They are outages waiting to happen.
👉 If a DevOps engineer panics and runs sudo chown -R or chmod 777 on a shared Git repository to "just make it work," they can silently corrupt ownership and permissions across the entire codebase. Other team members lose access. CI/CD pipelines break. Deployments fail at 2am.
👉 A developer who can't push a hotfix branch because of a permission error during a production incident is every engineering team's nightmare. 😅
👉 Git's dubious ownership warning exists for a REASON: it's a security feature introduced to protect shared servers from directory hijacking attacks. Ignoring it carelessly is a vulnerability, not a fix.
The lesson? In DevOps, how you fix something matters just as much as fixing it. Quick hacks in a lab are learning moments. Quick hacks in production are incident reports. I failed this task twice, fixed the root cause, and passed on the third try. That's not failure; that's exactly how engineers are built. Can you relate? #DevOps #Git #Linux #KodeKloud #WomenInTech #CloudEngineering #DevOpsJourney #Growth
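For the record, here is what the non-panic fix tends to look like; the repo path and group name are illustrative placeholders:

```bash
# The fix Git actually asks for: trust this one directory, for your
# user only. No chown, no chmod 777.
git config --global --add safe.directory /opt/git/app.git

# Shared-server permissions the Git way: group ownership plus setgid
# directories, so new objects stay group-writable.
git config core.sharedRepository group              # run inside the repo
chgrp -R devteam /opt/git/app.git
chmod -R g+w /opt/git/app.git
find /opt/git/app.git -type d -exec chmod g+s {} +  # setgid on directories
```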
GitHub Token Tester Launches Granular Permission Enumeration Tool 📌 A new GitHub Token Tester tool reveals exactly what permissions each token holds: no more guessing or trial and error. Perfect for DevOps and security pros managing complex auth landscapes, it audits fine-grained access levels critical for secure, least-privilege setups. Whether testing classic PATs or new fine-grained tokens, this utility streamlines compliance and risk reduction. 🔗 Read more: https://lnkd.in/dpU3qmg6 #Githubtokentester #Apiaccess #Tokenauditing #Granularpermissions
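For classic PATs there is also a quick manual spot-check, since GitHub echoes the token's scopes in a response header; fine-grained tokens don't expose this header, which is exactly where an enumeration tool like the one above earns its keep:

```bash
# Inspect a classic PAT's granted scopes from the response headers
curl -sI -H "Authorization: Bearer $GITHUB_TOKEN" https://api.github.com/user \
  | grep -i '^x-oauth-scopes:'
# e.g. "x-oauth-scopes: repo, read:org" -- anything broader than the
# job needs is a candidate for rotation to a fine-grained token
```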
Q2 just started. Is your team ready for the release crunch? Every quarter, the same pattern plays out: feature branches pile up, someone merges to main during a critical deploy window, and the on-call engineer's Friday night is ruined. NoShip gives your team a single source of truth for code freezes, enforced directly through GitHub's required status checks. No honor system. No "please don't merge" messages in Slack. Here's what teams are using to stay safe this quarter:
→ Recurring freeze schedules (RRULE-powered) so your weekly deploy windows are always protected (example below)
→ Emergency overrides with approval workflows for when you actually need to ship that hotfix
→ An AI assistant that lets engineers manage freezes with plain English, in Slack or the web dashboard
→ Full audit trail so you always know who froze what, when, and why
Stop relying on calendar reminders and Slack announcements. Start enforcing freeze discipline at the GitHub level. #DevOps #GitHub #CodeFreeze #SRE #PlatformEngineering #DeploymentSafety #EngineeringLeadership #Q2Planning #ReleaseManagement
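For the curious: RRULE is the iCalendar recurrence syntax from RFC 5545. A weekly "Friday 15:00 UTC for 48 hours" freeze looks roughly like this as standard iCalendar fields (illustrative of what an RRULE-powered schedule stores, not necessarily NoShip's config format):

```
DTSTART:20260403T150000Z
DURATION:PT48H
RRULE:FREQ=WEEKLY;BYDAY=FR
```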
🚨 Is GitHub's reliability hurting your team? I've been talking with many customers recently, and a common theme keeps coming up — frustration with GitHub's service health. Outages, degraded performance, and uncertainty around uptime are slowing teams down. If that sounds familiar, there's a path forward. In 3 days, I'll be running a free workshop walking through how to migrate from GitHub to GitLab — step by step, no guesswork. You'll leave with a clear migration plan, practical tips, and confidence to make the switch. 👉 Interested? Join us here: https://lnkd.in/d-ckV-9G Quinten Dismukes, Colin Stevenson, Thiago Magro, Adrian Tigert #GitLab #GitHub #DevOps #Migration #Workshop
🚨 Issues with #GitHub today? We're seeing instability across the platform:
❌ Push & pull delays
❌ Pull Requests not loading
❌ Actions (CI/CD) failing or stuck
❌ Overall slow performance
This is not a local issue; it's affecting multiple environments. 💡 What I did (and what I recommend): I moved to running my own Git server using Gitea Open Source, and honestly, this is something more teams should consider. https://git.xdeye.com/ 👉 Here's the practical advice:
✔️ Keep a self-hosted Git backup (Gitea / GitLab / bare repo).
✔️ Push your code to multiple remotes (GitHub + your own server; see the sketch below).
✔️ Don't depend fully on GitHub Actions; have manual or server-based deployment ready.
✔️ Keep production deployment independent from third-party outages.
✔️ Automate locally or on your own server where possible.
Now my workflow is: Local → self-hosted Git → live servers. GitHub is secondary, not critical. ⚠️ With the growing use of AI tools and third-party automation inside CI/CD pipelines, complexity and risk are increasing. When one piece fails, everything can break. Better to stay in control. How are you handling redundancy in your Git workflow? #GitHub #DevOps #SelfHosted #Gitea #CI #CD #Security #ITInfrastructure
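The multi-remote setup is a one-time git config change; URLs here are illustrative. Note that once any push URL is set on a remote, only push URLs are used for pushing, so both must be listed explicitly:

```bash
# Make a single "git push" update GitHub and the self-hosted mirror
git remote set-url --add --push origin git@github.com:acme/app.git
git remote set-url --add --push origin git@git.example.com:acme/app.git
git push origin main   # now updates both remotes in one command
```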