We cut our deployment time from 47 minutes to 9 minutes using GitHub Actions. Here is what actually moved the needle. Not the flashy stuff. The boring stuff.

1. We stopped running the full test suite on every commit. Using pytest -k with changed-file detection, we ran only the relevant tests. Saved ~11 minutes immediately.
2. We parallelised Docker layer caching properly. We were already using a cache, but pulls were still sequential in our workflow. Fixing that shaved off another 6–7 minutes.
3. We removed a manual approval gate that had been sitting in our pipeline since a production incident in 2022. No one on our team of 5 engineers could explain why it still existed.
4. We built a shared base image for our microservices instead of each service installing the same ~350MB of dependencies separately.

The bottleneck in your pipeline is almost never where you think it is. Profile it first. Then fix it.

What is the biggest time sink in your current pipeline?

#CICD #DevOps #PlatformEngineering #GitHubActions #Docker
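Point 1 is the easiest to reproduce. A minimal sketch of changed-file test selection in a GitHub Actions job, assuming a Python repo where test names mirror module names; the layout, step names, and mapping logic are illustrative, not the author's actual pipeline:

```yaml
# .github/workflows/ci.yml (illustrative sketch, not the original pipeline)
name: ci
on: push

jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history so we can diff against main

      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"

      - run: pip install -r requirements.txt

      # Derive a pytest -k expression from the Python files changed since main.
      # Assumes tests contain the name of the module they cover
      # (e.g. test_billing.py for billing.py).
      - name: Run only tests related to changed files
        run: |
          changed=$(git diff --name-only origin/main...HEAD -- '*.py' \
            | xargs -rn1 basename | sed 's/\.py$//')
          if [ -z "$changed" ]; then
            echo "No Python changes, skipping tests."
          else
            pytest -k "$(echo $changed | sed 's/ / or /g')"
          fi
```

Because -k matches by name substring, the mapping is approximate; it errs toward running a few extra tests rather than missing related ones.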
More Relevant Posts
CI/CD shouldn't be this annoying. Yet somehow, every time I start a new repo, I end up:
• rewriting the same GitHub Actions workflows
• copying YAML from old projects (and hoping it still works)
• debugging pipelines that fail for non-obvious reasons
• dealing with slightly different setups across repos
• spending more time maintaining pipelines than shipping code

None of these are hard problems, just repetitive ones that add up.

So I started putting together NERV-Actions. The goal isn't to reinvent CI/CD. It's to remove the friction: a small, reusable set of actions that makes pipelines more consistent, easier to plug in, and less painful to maintain.

Still a work in progress, but it's already reducing a lot of the "why is this pipeline different again?" moments.

If this sounds familiar, you can check it out here:
👉 https://lnkd.in/gXJJNxTr

Curious if others are feeling the same pain with GitHub Actions.

#GitHubActions #CICD #DevOps #SoftwareEngineering
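I haven't dug into NERV-Actions itself, so this is only the general pattern it is aiming at: calling shared CI logic instead of copy-pasting YAML into every repo. The org name, workflow path, and input below are placeholders, not the project's real interface:

```yaml
# Consumer repo: .github/workflows/ci.yml (org, repo and input names are placeholders)
name: ci
on: [push, pull_request]

jobs:
  ci:
    # The shared workflow lives in a central repo and declares "on: workflow_call";
    # every consumer repo just points at it instead of carrying its own copy.
    uses: nerv-org/nerv-actions/.github/workflows/python-ci.yml@v1
    with:
      python-version: "3.12"
    secrets: inherit
```

Composite actions (an action.yml with `runs: using: composite`) cover the same ground at the step level rather than the job level, which may be closer to what a "reusable set of actions" ends up shipping.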
I learned something new today! This diagram helped me understand how modern applications actually move from code → production using tools like Jenkins and Docker.

Here's the flow in simple terms:
▪️ 1. Pull Code: Jenkins fetches the code from GitHub
▪️ 2. Verify: basic checks to ensure everything is correct
▪️ 3. Build Images: Docker builds the application images
▪️ 4. Push to DockerHub: images are stored in a central registry
▪️ 5. Deploy: containers are started using Docker Compose
▪️ 6. Cleanup: unused images are removed to save space

What I realized: CI/CD is not just automation, it's about making deployments fast, consistent, and reliable. This is where development meets real-world production systems.

If you're learning backend or full stack development, understanding pipelines like this is a game changer.

What part of CI/CD do you find most confusing? 🤔

#DevOps #Jenkins #Docker #CICD #BackendDevelopment #FullStack #SoftwareEngineering #CodingJourney
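Step 5 is the part that fits in a few lines. A minimal Docker Compose sketch of the deploy stage, with a made-up service and image name (the original diagram's services aren't shown in the post):

```yaml
# docker-compose.yml: minimal deploy sketch; service and image names are made up
services:
  web:
    image: myorg/myapp:latest   # pulled from DockerHub after the push stage
    ports:
      - "8080:8080"
    restart: unless-stopped
```

The cleanup stage (6) would then typically be a `docker image prune -f` once the new containers are healthy.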
Day 21 & 22 of #90DaysOfDevOps ✅

Git + GitHub & Docker: revision and going deeper. 🔁 Revisited the fundamentals, then pushed further into what actually matters at an industry level.

Git & GitHub: branch protection rules, CODEOWNERS, Dependabot, GitHub's built-in secret scanning and CodeQL. Also covered Git internals (blobs, trees, packfiles, reflog) and the difference between Gitflow and trunk-based development.

Docker: Docker networking (bridge, host, overlay, user-defined), Docker Scout for CVE scanning and SBOM generation, image optimisation with multi-stage builds, container security hardening (non-root user, capability dropping, resource limits), and image signing with Cosign.

Key takeaway: revision isn't just repetition, it's the layer where fundamentals turn into production knowledge.

#90DaysOfDevOps #DevOpsKaJosh #TrainWithShubham #Git #GitHub #Docker #DevSecOps #DevOps
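Of the GitHub features mentioned, Dependabot takes the least setup. A minimal config sketch, assuming a pip-based project and weekly update checks; the ecosystems and schedule are just examples, not anything from the post:

```yaml
# .github/dependabot.yml (example config)
version: 2
updates:
  - package-ecosystem: "pip"          # also: npm, docker, github-actions, ...
    directory: "/"                    # where the manifest (requirements.txt etc.) lives
    schedule:
      interval: "weekly"
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```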
Ever had your entire project architecture almost destroyed the day before the final review? That was me today with CodeRunner, where I am building a scalable contest hosting site and remote code execution engine.

I spent weeks designing it with microservices, but a teammate accidentally refactored the whole thing into a monolith and tried to push it directly to the repo. The only reason our work survived is that I had locked down the main branch. By blocking force pushes and requiring mandatory PR reviews, the system automatically rejected their changes.

You should also use GitHub rulesets in your repos to avoid such situations and protect your code! Rulesets provide many options to protect your branches, as you can see in the pic below where I created a ruleset for main branch protection.

#BuildInPublic #SystemDesign #DevOps #Microservices #CodeRunner
⚙️ #PythonJourney | Day 158 | CI/CD: Automation That Saves Lives

Added GitHub Actions to the project. Now every push runs the tests automatically. If something breaks, I know immediately. If it passes, I have the confidence to deploy.

14 tests running in 44 seconds. Green or red. No surprises.

What I learned:
→ CI/CD isn't optional, it's essential
→ Catching bugs early beats finding them in production
→ Automated testing gives peace of mind
→ GitHub Actions is simple but powerful

It's simple, but it changes everything.

#DevOps #GitHub #CI #CD #Automation #Backend #Testing
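For anyone following along, a run-the-tests-on-every-push workflow in its simplest form looks roughly like this; the Python version and test command are assumptions, since the post doesn't show its actual workflow file:

```yaml
# .github/workflows/tests.yml (minimal sketch, not the post's actual workflow)
name: tests
on: push

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest -q   # the whole run is green or red, exactly as described
```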
Ever faced this in Kubernetes? 👇

Everything was working fine yesterday… Today, something feels off. No crashes. No alerts. But things are breaking.
👉 Requests failing
👉 Latency increasing
👉 Random issues showing up

And the worst part? No one knows what changed.

This is what I call ⚙️ configuration drift. Small changes like:
• Env variable updates
• ConfigMap tweaks
• Secret rotations
• Partial deployments

Individually harmless… but together → production issues.

💬 Curious: how do you debug this today? Because most teams:
→ Compare configs manually
→ Check logs (no clear answer)
→ Spend hours guessing

That's exactly why I built KubeGraf:
👉 Tracks every config & deployment change
👉 Correlates it with system issues
👉 Pinpoints what changed & why it broke
👉 Suggests safe rollback or fix

Instead of "what went wrong?" you get → "this change caused the issue" 💡

https://kubegraf.io

#Kubernetes #DevOps #CloudNative #K8s #SRE #Debugging #Observability #IncidentResponse #RootCauseAnalysis #Microservices #KubeGraf #DevTools
Most Docker tutorials stop at docker run. That's exactly where production problems begin.

I learned this the hard way: a base image CVE sitting in production, not caught by the pipeline, flagged hours later in an audit. The image had been running fine. The vulnerability hadn't. I just didn't know.

That experience changed how I think about container delivery. It's not enough to build an image that works. It needs to be minimal, verified, signed, and scanned before it ever touches a registry. So I built a reference project that codifies exactly that.

Here's what I changed after that audit:

Distroless final image. No shell, no package manager, ~4MB. The base image CVE that got us? No longer possible. There's almost nothing left to exploit.

Trivy scans every image before push. The pipeline fails on HIGH/CRITICAL, not a Slack notification you'll read tomorrow. Not advisory. A hard stop.

SBOM generated at build time. Image signed with cosign keyless signing. No private key to manage; the signature is tied to the GitHub Actions OIDC identity. You can prove exactly what was built and who built it.

The CI/CD pipeline does two different things depending on context:

On PRs: source scan, build amd64 locally, scan the loaded image. No registry push. No packages: write on untrusted code.

On main/tags: multi-arch build, push, scan the exact digest (not the tag; tags are mutable), sign.

One deliberate trade-off I documented: the release runs two builds, validation and publish. Slower. But the permission separation is clean, and clean pipelines don't surprise you at 2am.

Every decision has an ADR. Every operational scenario has a runbook entry. Because the person debugging this might be me.

→ https://lnkd.in/dUMiQCta

If you're building container delivery pipelines, what does your image scanning gate look like? Before push, after push, or both?

#Docker #DevOps #CICD #PlatformEngineering #Security #Kubernetes
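For readers who want the shape of that gate, here is a rough sketch of a scan-before-push plus keyless-sign job in GitHub Actions. It is a generic illustration, not the linked project's pipeline: the registry path and image name are placeholders, and the trivy-action reference should be pinned to a release in real use.

```yaml
# Sketch of a scan-and-sign job; registry, image name and tags are placeholders.
jobs:
  release:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write   # push to GHCR
      id-token: write   # OIDC token for cosign keyless signing
    steps:
      - uses: actions/checkout@v4

      - name: Build image
        run: docker build -t ghcr.io/example/app:${{ github.sha }} .

      - name: Scan before push (hard stop on HIGH/CRITICAL)
        uses: aquasecurity/trivy-action@master   # pin a released version in practice
        with:
          image-ref: ghcr.io/example/app:${{ github.sha }}
          severity: CRITICAL,HIGH
          exit-code: "1"   # fail the job instead of only reporting

      - name: Push
        run: |
          echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
          docker push ghcr.io/example/app:${{ github.sha }}

      - name: Install cosign
        uses: sigstore/cosign-installer@v3

      - name: Sign the pushed digest (keyless, tied to the workflow's OIDC identity)
        run: |
          digest=$(docker inspect --format='{{index .RepoDigests 0}}' ghcr.io/example/app:${{ github.sha }})
          cosign sign --yes "$digest"
```

Signing the digest rather than the tag mirrors the post's point that tags are mutable; the signature then refers to exactly the bytes that were scanned and pushed.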
🔥 How to troubleshoot a Docker container that keeps restarting:

✅ 1. Check the logs: docker logs <container> --tail 50

✅ 2. Check the exit code: docker inspect <container> | grep ExitCode
Exit 0 = clean stop
Exit 1 = application error
Exit 137 = killed (OOM or manual)
Exit 139 = segfault

✅ 3. Check resource limits: docker stats <container>

✅ 4. Run it interactively: docker run -it <image> /bin/sh

Most restart loops are either OOM kills or application config errors.

#DEVOPS
Your Kubernetes cluster is lying to you. And you won't find out until prod breaks.

Here's a problem most platform engineers don't talk about enough: config drift across environments.

Everything looks identical: dev, staging, prod. Same Helm charts. Same GitOps repo. Same manifests. Then prod goes down. And you spend 3 hours figuring out why staging never caught it.

Here's what actually happened: someone patched a ConfigMap directly on the prod cluster with "kubectl edit" during last month's incident. Just a quick fix. "I'll raise a PR later." They didn't. Now prod is running a config that exists nowhere in Git.

Your GitOps tool (ArgoCD, Flux, it doesn't matter) shows everything as Synced, because drift detection only works if the live state diverges from what's currently in Git. But the patch was never in Git to begin with.

This is the gap nobody warns you about:
- GitOps doesn't protect you from changes that never entered Git
- kubectl diff only compares against what's applied, not what should exist
- Multi-cluster setups multiply the problem: 5 clusters, 5 different "versions of truth"
- The longer it goes undetected, the bigger the blast radius when it surfaces

The fix isn't just "don't use kubectl edit"; that battle is already lost in most orgs. The real fix is drift detection as a first-class concern:
- Enable ArgoCD's self-heal and prune flags so live state is continuously reconciled
- Run kubectl diff in your CI pipeline before every deploy, not just locally
- Set up audit logging on your clusters: who ran kubectl commands, and when
- Tools like Kyverno or Datree can flag live-state mismatches proactively
- Treat your cluster state like a database: no manual writes, ever

The hardest part isn't the tooling. It's the culture shift of making "I'll fix it in Git later" completely unacceptable. Because in a fast-moving team, "later" is when prod burns.

Been burned by config drift before? Drop it in the comments.

#Kubernetes #DevOps #PlatformEngineering #GitOps #K8s #SRE #CloudNative
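For the first item on that fix list, the flags live on the Argo CD Application's syncPolicy. A minimal sketch, with placeholder names, repo URL and paths:

```yaml
# Argo CD Application with automated sync, prune and self-heal.
# Name, repo URL and paths are placeholders for illustration.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-repo.git
    targetRevision: main
    path: apps/my-service
  destination:
    server: https://kubernetes.default.svc
    namespace: my-service
  syncPolicy:
    automated:
      prune: true      # remove live resources that were deleted from Git
      selfHeal: true   # revert manual kubectl edits back to the Git-declared state
```

The post's caveat still applies: self-heal only reverts drift on resources Argo CD already manages, so objects created entirely outside Git remain invisible to it.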