Git doesn’t store “changes” the way you might think. It stores snapshots of your whole project over time. And when a file hasn’t changed? The new commit simply points to the object that already exists. That one idea is why massive histories don’t explode in size. A simple concept… with huge impact. Dive deeper 👇 https://lnkd.in/gDgzdUcf #Git #DevOps #SystemThinking #Engineering #TechInsights #SoftwareEngineering #CloudNative #VersionControl #TechCuriosity #OpenSource #TechTrends
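A quick way to see this for yourself (a minimal sketch in any shell; the repo and file names are made up):

git init demo && cd demo
echo "stable" > a.txt
git add a.txt && git commit -m "first snapshot"
echo "new" > b.txt
git add b.txt && git commit -m "second snapshot"
# Both commits point at the exact same blob for the unchanged file:
git rev-parse HEAD:a.txt
git rev-parse HEAD~1:a.txt

If the two hashes match, the second commit stored nothing new for a.txt; it simply reuses the object that was already there.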
More Relevant Posts
-
Great developers don’t guess—they investigate. Logs aren’t just error messages—they’re insights into how your system actually behaves. When you learn to read logs properly, you debug faster, understand deeper, and build more reliable systems. In 2026, the edge isn’t just writing code—it’s understanding what your code is doing in real time. #SoftwareDevelopment #Debugging #TechSkills #DeveloperSkills #ITProfessionals #SystemThinking #FutureOfWork #DevOps #TechCareers #EduRamp
-
Platform engineering sounds great in theory, but without the right guardrails, it can quickly turn into chaos. Too much freedom slows teams down, and too many restrictions kill developer experience. Finding the balance is where the real challenge lies. In this session, Rajan Sharma shares how to design platform guardrails in Kubernetes that actually help teams move faster instead of blocking them. It is about creating systems that enable developers, not control them, while still keeping reliability, security, and scale in check. If you are building or working on platform engineering teams, this is something you should not miss. 📅 May 2, 2026 📍 CogNerd #Kubernetes #PlatformEngineering #DevOps #CloudNative #Kubesimplify
-
100MB Files in Git: A Hidden Risk to Repository Performance

Large files rarely create immediate issues, but over time they slow repositories, impact developer productivity, and introduce unnecessary complexity. Addressing this isn’t just about deletion. It requires a controlled approach to rewriting history without disrupting teams or delivery pipelines. This blog outlines how to safely remove 100MB+ files at scale, ensuring cleaner repositories and more reliable development workflows.

Read more: https://lnkd.in/g8UKj55V

------------------
Shankar Prasad Jha Sandeep Rawat Yogesh Baatish Arpit Jain Vedant K. Khalid Ahmed Jinesh Koluparambil Buildpiper - By OpsTree
------------------

#Git #DevOps #VersionControl #PlatformEngineering #TechLeadership #EngineeringExcellence #ScalableSystems #DeveloperProductivity
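For a rough idea of what that cleanup involves, here is a generic sketch using git filter-repo (a separate install, and not necessarily the exact approach in the linked post; the remote URL is a placeholder):

# Rewrite history to drop every blob over 100MB (run on a fresh clone):
git filter-repo --strip-blobs-bigger-than 100M
# filter-repo removes the 'origin' remote as a safety step; re-add it, then
# force-update the server. Teammates must re-clone, since every hash changed:
git remote add origin https://example.com/repo.git
git push --force-with-lease origin main

Going forward, large binaries usually belong outside history altogether, for example in Git LFS or an artifact store.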
-
🔧 DEVOPS UNLOCK #001 🔧

Your pod is stuck in CrashLoopBackOff at 3am. Your on-call alert just fired. Here's the exact runbook that saves you every time. Most engineers waste 20 minutes on "kubectl describe pod" when the real answer is already in the previous container's logs.

Here's the battle-tested triage sequence:

Step 1: Get the LAST crash logs (not just the current container's):
kubectl logs <pod> --previous -n <namespace>

Step 2: Decode the exit code:
• Exit 1: app crashed; read its stdout carefully
• Exit 137: OOMKilled; your memory limits are too tight
• Exit 143: unhandled SIGTERM; fix your graceful shutdown
• Exit 0: app exited cleanly; look for a missing restart policy or loop logic

Step 3: Cross-check resource pressure:
kubectl top pod <pod> -n <namespace>
kubectl describe node <node> | grep -A 5 "Allocated resources"

Step 4: Catch config & scheduling issues:
kubectl get events -n <namespace> --sort-by='.lastTimestamp' | tail -20

Step 5: If still stuck, attach an ephemeral debug container:
kubectl debug -it <pod> --image=busybox --target=<container>

⚡ Pro Tip: Add "terminationMessagePolicy: FallbackToLogsOnError" to your pod spec. When a container crashes before writing to /dev/termination-log, Kubernetes falls back to the tail of its log output (up to 80 lines or 2048 bytes) instead. Saved me during a silent OOM crash that left zero traces in termination logs.

What's your go-to CrashLoopBackOff survival move? Drop it below 👇

#DevOps #Kubernetes #SRE #PlatformEngineering #K8s #Containers #CloudNative #DevOpsUnlock
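The pro tip as a pod spec fragment (a minimal sketch; the pod name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0   # placeholder image
    # On crash, the tail of the container log becomes the termination message:
    terminationMessagePolicy: FallbackToLogsOnError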
-
Direct git pulls in production are asking for downtime: files change non-atomically, so live requests can hit a half-updated tree. Use staging directories and atomic deployments for zero-downtime updates. Your users will thank you. #WebDev #DevOps #HostMyCode
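A minimal sketch of the symlink-swap version of this (the paths and URL are made up):

# Build each release in its own directory, never in place:
git clone --depth 1 https://example.com/app.git /srv/releases/r42
ln -s /srv/releases/r42 /srv/current.tmp
mv -Tf /srv/current.tmp /srv/current   # rename() is atomic: no half-updated tree
# Point the web server's docroot at /srv/current; each deploy swaps the link
# and never edits files that are being served.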
-
“It worked in dev… and that’s exactly why it scared me”

A few weeks ago, we had a release. Everything checked out:
• Same Docker image
• Same pipeline
• No risky changes

We had already tested it in dev and staging. No issues. So we pushed to production thinking this would be a non-event. It wasn’t.

What started happening
Nothing broke immediately. Which, honestly, made it worse. After some time:
• A couple of APIs started timing out
• One service behaved… strangely (not failing, just inconsistent)
• Logs didn’t show anything obvious
At first, it felt like one of those “maybe it’ll settle” situations. It didn’t.

What confused us
We kept going back to the same thought: “But this exact setup worked in staging…” Same image. Same configs (or so we thought). So why was production acting differently?

What we eventually found
After digging way deeper than expected, the issue wasn’t in the code at all. Production had quietly drifted:
• One environment variable was different
• A dependency version wasn’t exactly the same
• And someone (months ago) had patched something directly in prod
Nothing big individually. But together, it changed behavior. That’s what got us.

What we changed after that
We didn’t just fix the issue and move on. That would’ve been a mistake. We tightened a few things:
• Moved everything we could into Terraform
• Standardized deployments using Docker (no environment-specific builds)
• Cleaned up configs and started managing them properly (used Ansible for consistency)
And the biggest one:
👉 No more direct changes in production. If it’s not in code, it doesn’t exist.

What stuck with me
I used to think: “If it works in staging, we’re safe.”
Now I think: “How sure are we that staging is actually the same as prod?”
Because most of the time… it isn’t.

#DevOps #Terraform #Docker #Ansible #InfrastructureAsCode #CloudEngineering #SRE #LearningInPublic #RealWorldDevOps
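One cheap guardrail that follows from this (a sketch; it assumes the environment really is fully described by Terraform):

# Run on a schedule in CI against the production workspace:
terraform plan -detailed-exitcode -out=tfplan
# Exit code 0: no drift. Exit code 2: live infra no longer matches the code;
# page someone before it turns into a mystery outage like the one above.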
-
🚀 Real DevOps is not about “green builds”… it’s about responsibility. A pipeline can pass ✔️ But the real questions are: 👉 Did we deliver on time? 👉 Is the system stable for users? 👉 Are we deploying safely? 👉 Are we using resources wisely? Because in the end, users don’t care about builds… they care about experience. 💡 Good engineering = smart decisions + real impact. #DevOps #Jenkins #CI_CD #SoftwareEngineering #BuildInPublic #Developers
-
Are your devs sick and tired of figuring out the setup instead of 𝘢𝘤𝘵𝘶𝘢𝘭𝘭𝘺 building features?

One repo has one way of deploying, another has it differently. Pipelines felt like vibes: a different setup every time. Chasing down credentials just to get something deployed. You end up trying to learn a system that has nothing to do with your code, and this is the type of friction that slows teams down. And this is what platform engineering is here to solve.

While building my EKS setup this became one of the main focus areas. It's nice and all to have a service running, but if it's not usable then you really have a problem.

• Modular Terraform so infra isn't rebuilt every time
• GitHub Actions with the same template so deployments follow the same flow
• OIDC so no one is dealing with credentials (see the sketch below)
• Same structure across environments so everything feels familiar

I can't stress how important this is. Developers need their lives to be easier so they can focus on code; it increases their productivity, and overall morale within the team improves. Imagine jumping through hoops just to get to the main job you're paid to do. It's exhausting!

As platform engineers we're here to make things predictable so engineers don't have to stop and think every time they want to build.

CoderCo #devops #platformengineering #coderco
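For the OIDC point, a minimal sketch of what that looks like in GitHub Actions (the role ARN and region are placeholders, not from the post):

# .github/workflows/deploy.yml (fragment)
permissions:
  id-token: write      # lets the job request an OIDC token from GitHub
  contents: read
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/gha-deploy   # placeholder
          aws-region: eu-west-1                                       # placeholder
      # From here, aws/kubectl calls use short-lived credentials; nothing is stored.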
-
The most dangerous sentence in software development: “It’s just a small change.” Especially when it’s pushed at 4:59 PM on a Friday. If you're a developer, you already know how this story ends😅 #DeveloperHumor #SoftwareEngineering #DevOps #CICD
-
I watched a $2M system go dark at 3 AM because one engineer had set revisionHistoryLimit to zero. He tried to roll back. Kubernetes said:

error: no rollout history found

There was no undo button. There was no history. There was nothing.

Here's the deployment checklist I never skip:

1. kubectl apply -f only. Never kubectl set image in prod. The YAML is truth. The cluster is a mirror.

2. Mind revisionHistoryLimit. It defaults to 10, but setting it to 0 deletes the old ReplicaSets, and this command only works if history exists:
kubectl rollout undo deployment/app --to-revision=3

3. Never walk away from a rollout restart. A 10% bad build caught early beats 100% crashed traffic:
kubectl rollout status deployment/app

Senior engineering isn't knowing every command. It's designing for the moment everything breaks.

What production scar permanently changed how you deploy? Drop it below.

#Kubernetes #SRE #DevOps #CloudNative #EngineeringLeadership
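What that looks like in the spec (a minimal sketch; names and image are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  revisionHistoryLimit: 10    # the default; 0 deletes old ReplicaSets and kills rollback
  replicas: 3
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: registry.example.com/app:1.0   # placeholder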