🚨 DevOps Learning: When GitHub Rejects Your Push Because of Large Files

Today I ran into an interesting Git issue while pushing my DevSecOps project to GitHub from an EC2 instance. Everything looked fine locally, but GitHub rejected my push with this error:

GH001: Large files detected

The reason? Some binaries were accidentally committed to the repository:
• argocd-linux-amd64 (205 MB)
• awscliv2.zip (63 MB)
• kubectl (55 MB)

GitHub limits file sizes:
⚠ Recommended maximum: 50 MB
❌ Hard limit: 100 MB

So the push failed.

🔧 The Fix

1️⃣ Remove the files from Git tracking:
git rm --cached <file>

2️⃣ Add them to .gitignore

3️⃣ Clean the Git history, because the large files still exist in previous commits:
git filter-branch --force --index-filter 'git rm --cached --ignore-unmatch <file>' --prune-empty --tag-name-filter cat -- --all

4️⃣ Force-push the cleaned history:
git push origin main --force

💡 DevOps Best Practice

Never commit binaries like:
• kubectl
• awscli zip files
• ArgoCD binaries

Instead, install them via scripts or package managers in your setup pipeline.

This was a great reminder that Git tracks history, not just current files. Every small issue in DevOps is a learning opportunity 🚀

#DevOps #Git #GitHub #Kubernetes #ArgoCD #Terraform #LearningByDoing
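P.S. for anyone hitting the same wall: git's own documentation now warns that filter-branch is slow and error-prone and points to git-filter-repo instead. A minimal sketch of the same cleanup with that tool, using argocd-linux-amd64 from the list above as the example file (the install step assumes pip is available):

# one-time install of the tool
pip install git-filter-repo

# run in a fresh clone: filter-repo refuses to rewrite an unclean repo otherwise
# rewrite all history so the binary never existed in any commit
git filter-repo --invert-paths --path argocd-linux-amd64

# keep it out of future commits
echo "argocd-linux-amd64" >> .gitignore
git add .gitignore && git commit -m "Ignore large binaries"

# filter-repo removes the origin remote as a safety measure, so re-add it
git remote add origin <your-repo-url>
git push origin main --force

Same end state as the filter-branch route, with far less chance of a half-rewritten history. Either way, teammates need to re-clone afterwards, since every commit hash changes.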
More Relevant Posts
Learning Docker as an aspiring DevOps engineer 🐳

As someone transitioning into DevOps, I knew I needed to really understand containerization: not just follow tutorials, but build something real.

Project: https://lnkd.in/dKuRrRrm

🎯 What I Built:
✅ Multi-container app (Java + Node.js + PostgreSQL + Nexus)
✅ Docker Compose orchestration
✅ Multi-stage builds (800MB → 250MB!)
✅ Custom networks for service isolation
✅ Persistent volumes (learned this the hard way)
✅ Deployed to DigitalOcean droplets

💡 Concepts That Clicked:

🔹 Containers ≠ VMs
Completely different paradigms: VMs virtualize hardware, containers virtualize the OS.

🔹 Multi-stage builds
Build dependencies don't belong in production images. My Java app dropped from 800MB to 250MB.

🔹 Docker networks
Services discover each other by name. My Java app reaches Nexus at http://nexus:8081. No IP configs needed.

🔹 Volumes save lives
I lost my entire Nexus repository once when I restarted a container. Volumes = data that survives. (There's a small sketch of networks + volumes after this post.)

📚 Learning Journey:

Week 1: Breaking everything
"Why does my container exit immediately?" "Where's my database data?" "How do containers communicate?"

Week 2: Everything clicks
Multi-stage builds, networks, volumes: it all makes sense now.

🛠️ Tech Stack:
🐳 Docker & Docker Compose
☕ Java (Maven)
🟢 Node.js
🐘 PostgreSQL
📦 Nexus Repository
🔧 Nginx
☁️ DigitalOcean

🎓 Skills Gained:
- Writing efficient Dockerfiles
- Orchestrating multi-container apps
- Managing persistent data
- Container networking
- Cloud deployment (DigitalOcean)
- Debugging containerized apps

📖 Project Includes:
✓ Documented Dockerfiles (with WHY, not just WHAT)
✓ Docker Compose setup
✓ Volume & networking examples
✓ DigitalOcean deployment guide
✓ Mistakes I made + fixes
✓ Security basics

💭 Real Talk:
This is a learning project, not production-ready. But it gave me hands-on experience with the Docker concepts that matter in DevOps. Learning by building beats following tutorials every time.

🎯 Next Steps:
- Kubernetes orchestration
- CI/CD with Jenkins
- Terraform for IaC
- Monitoring setup

For anyone learning DevOps: build something, break it, fix it, repeat. That's how concepts stick.

Check it out: https://lnkd.in/dKuRrRrm

Fellow learners: what project made Docker click for you? 👇

#DevOps #Docker #LearningInPublic #Containerization #CloudEngineering #CareerTransition
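If the networks and volumes part feels abstract, here is roughly what it looks like from the CLI. A minimal sketch, assuming the official sonatype/nexus3 image (container and network names are just examples):

# user-defined network: containers on it can reach each other by name
docker network create appnet

# named volume: data lives outside the container's writable layer
docker volume create nexus-data

# the Nexus image stores its repositories under /nexus-data
docker run -d --name nexus --network appnet \
  -v nexus-data:/nexus-data sonatype/nexus3

# another container on the same network resolves "nexus" by name
# (give Nexus a minute or two to finish booting first)
docker run --rm --network appnet curlimages/curl -s http://nexus:8081

# remove the container: the volume, and your repositories, survive
docker rm -f nexus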
🚀 From Learning to Running My First Docker Container Today! 🐳

Today I took another step in my DevOps learning journey by exploring Docker, one of the most widely used tools in modern application deployment.

One common challenge developers face is:
👉 "It works on my machine, but not on the server."
Today I understood how Docker solves this problem by using containers.

💡 What is Docker?
Docker is a containerization platform that packages an application along with all its dependencies into a lightweight container, ensuring the application runs consistently across different environments.

📚 Key Concepts I Learned Today
🔹 Docker Image – blueprint used to create containers
🔹 Docker Container – running instance of an image
🔹 Dockerfile – script used to build Docker images
🔹 Docker Hub – registry to store and share images
🔹 Port Mapping – connecting the host machine to container services

⚙️ Hands-on Commands I Practiced
docker --version
docker pull nginx
docker images
docker run -d -p 8080:80 nginx
docker ps
docker ps -a
docker logs <container_id>
docker stop <container_id>
docker rm <container_id>

🔗 Practical Experiment I Did
I successfully ran an Nginx container and connected it to my host machine using port mapping. After running the container, I accessed it in my browser at:
👉 http://localhost:8080

Seeing the container run successfully and accessing it from the browser was a great hands-on learning experience.

#Docker #DevOps #Containerization #LearningInPublic #CloudComputing #TechJourney #FutureDevOpsEngineer
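A quick way to verify the port mapping without a browser, using the same <container_id> placeholder as above:

# confirm which host port the container is published on
docker port <container_id>
# expected output looks like: 80/tcp -> 0.0.0.0:8080

# hit the container from the host
curl -I http://localhost:8080
# an "HTTP/1.1 200 OK" with a "Server: nginx" header confirms the mapping works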
🚀 BUILDING IN PUBLIC | PART 2 | How I Built Code Quality Gates and Kubernetes into a CI/CD Pipeline, and What It Taught Me

A few months ago, the term "CI/CD pipeline" felt intimidating. Today, I can build one from scratch. Here's what I learned 👇

What is a CI/CD pipeline? It's the backbone of modern software delivery: automating the journey of code from a developer's laptop to a live production environment, without manual intervention.

The stages I learned to build (see the workflow sketch after this post):
🔹 Source: Code pushed to GitHub triggers everything. No push, no pipeline.
🔹 Build: The code gets compiled, dependencies are installed, and a Docker image is created.
🔹 Test: Automated tests run. If they fail, the pipeline stops. No broken code moves forward.
🔹 Deploy: The image gets pushed to a registry and deployed to the target environment, whether that's a cloud server or a Kubernetes cluster.

Tools I got hands-on with:
→ Git & GitHub for version control
→ Docker for containerization
→ Jenkins / GitHub Actions for automation
→ Kubernetes for orchestration
→ Linux as the foundation for everything

The biggest lesson? CI/CD isn't just a tool, it's a mindset. Ship small, ship fast, catch errors early. Every failed pipeline taught me more than a successful one ever did.

📌 This is Part 2 of my DevOps learning series. Part 3 is coming soon: Monitoring & Observability. I'll be covering Prometheus, Grafana, alerting, and how to actually know when your system is breaking before your users do.

Follow along if you're on the same journey 🙌 Drop a comment: are you also learning DevOps? Let's connect!

#DevOps #CICD #CloudComputing #Kubernetes #Docker #Linux #AWS #LearningInPublic #DevOpsEngineer #CloudEngineer
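Here is what those four stages can look like as a single GitHub Actions workflow. A minimal sketch only: the job names, npm commands, image tag, and the deploy step are placeholders, not the exact pipeline from the post.

# .github/workflows/pipeline.yml
name: ci-cd
on:
  push:
    branches: [main]            # Source: a push to main triggers everything

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test # Test: a failure here stops the pipeline

  build:
    needs: test                 # Build only runs if tests pass
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # in a real pipeline you'd also docker push here so deploy can pull it
      - run: docker build -t my-registry/my-app:${{ github.sha }} .

  deploy:
    needs: build                # Deploy only runs if the build succeeds
    runs-on: ubuntu-latest
    steps:
      - run: echo "pull the image and roll out, e.g. kubectl set image ..."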
Just finished building a hands-on CI/CD learning project with GitHub Actions and Docker. I created a small Node.js app and wired up a full 3-stage pipeline:

✅ Run tests automatically on every push
🐳 Build a Docker image if tests pass
🚀 Deploy to a VPS via SSH if the image builds

No manual SSH. No "deploy and pray 🙏🥹". Just push to main and let the pipeline do the work. (A sketch of the SSH deploy stage follows this post.)

Also wrote detailed notes covering DevOps fundamentals, the CALMS framework, Docker deep dives, and GitHub Actions from scratch, because understanding the why matters as much as the how.

https://lnkd.in/g67akgKU
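For the curious: the SSH deploy stage can be done with nothing but a private key stored in repo secrets. A minimal sketch of that one job from the jobs: section of a workflow (the secret names, user, image, and ports are assumptions, not the author's actual setup):

  deploy:
    needs: build                      # only runs after a successful image build
    runs-on: ubuntu-latest
    steps:
      - name: Deploy over SSH
        run: |
          # load the deploy key from repo secrets (never commit keys)
          echo "${{ secrets.VPS_SSH_KEY }}" > key && chmod 600 key
          # pull the fresh image and restart the container on the VPS
          ssh -i key -o StrictHostKeyChecking=no deploy@${{ secrets.VPS_HOST }} \
            "docker pull my-registry/my-app:latest && \
             (docker rm -f my-app || true) && \
             docker run -d --name my-app -p 80:3000 my-registry/my-app:latest"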
🚀 What Actually Happens When You Push Code to GitHub?

Most beginners think "git push" just uploads code. But in real DevOps environments, that single command can trigger an entire automation pipeline.

💼 In real companies...
When developers push code to GitHub, it often starts a CI/CD workflow. That workflow may automatically:
• Run automated tests
• Build the application
• Scan for vulnerabilities
• Build Docker images
• Deploy to staging or production

So a simple push can trigger an entire software delivery pipeline.

⚙️ What actually happens, step by step?
1️⃣ Developer writes code locally
2️⃣ Code is committed with git commit
3️⃣ Code is pushed to GitHub with git push
4️⃣ GitHub stores the new commit in the repository
5️⃣ Webhooks (or built-in events) trigger CI tools (Jenkins, GitHub Actions, etc.)
6️⃣ CI pipeline runs the build + tests
7️⃣ Artifacts are created (Docker image, binaries)
8️⃣ CD pipeline may deploy automatically

This is how modern DevOps teams ship code multiple times per day. (A tiny example of the trigger config is below.)

🧠 Simple analogy
Think of GitHub like a switch that starts a factory machine. You press the switch (git push), and suddenly the factory starts:
Code → Build → Test → Package → Deploy

❌ Common mistake beginners make
They think: GitHub = only code storage. Not true. GitHub is also the event trigger for automation pipelines.

🎯 If you remember ONE thing
git push is not just uploading code. It can start the entire DevOps delivery pipeline.

💬 How many times per day does your team push code?

Follow our LinkedIn page for daily cloud clarity: https://lnkd.in/dN4JSkfH
Join our WhatsApp Cloud Community: https://lnkd.in/dTJfEFyK
Website: www.vyomanant.com

#DevOps #GitHub #CICD #Docker #Kubernetes #CloudComputing #DevOpsEngineer #LearnDevOps #VyomanantAcademy #Vyomanant
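What the "switch" looks like in practice: the trigger block at the top of a GitHub Actions workflow file. A minimal sketch (the file path is the standard location; the job body is a placeholder):

# .github/workflows/ci.yml
name: ci
on:
  push:
    branches: [main]    # every push to main fires this workflow
  pull_request:         # and so does every pull request, before merge

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "build, test, package, deploy steps go here"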
I broke GitOps in one day, and it taught me more than 6 months of reading. Here's the honest story:

—— 3 MISTAKES ——

❌ Applied Kubernetes manifests directly with kubectl and never committed them to Git. ArgoCD couldn't find my k8s/ folder; it only existed on my machine, not in the repo.

❌ My ECR secret expired after 12 hours. Pods couldn't pull images at 2 am, and nobody was awake to fix it. Built a CronJob to refresh it automatically every 6 hours; problem solved permanently.

❌ The CronJob crashed with "forbidden" errors. The default service account had zero permissions. Had to build a full RBAC setup (ServiceAccount, Role, RoleBinding) to manage one secret. Least privilege isn't optional in production. (A sketch of that RBAC wiring follows this post.)

—— 4 LESSONS ——

• kubectl apply and git push are NOT the same thing: one updates the cluster, the other updates the truth
• Git is the only door to production: not the console, not a hotfix, not a Slack message
• RBAC protects teams from themselves: scoping permissions tightly makes systems safer and easier to audit for everyone
• ArgoCD doesn't just deploy: it watches, detects drift and self-corrects automatically

—— 3 REAL PROBLEMS THIS SOLVES ——

→ "Who changed that?" Every change is a git commit. The audit trail is automatic.
→ "How do we roll back?" Revert the commit, and ArgoCD reverts the cluster.
→ "The cluster drifted again." selfHeal: true means ArgoCD corrects any manual change that bypasses Git.

The result: 12 healthy resources (Deployment, Service, CronJob, RBAC), all synced, all green, all traceable to a git commit. No manual deployments. No 3 am surprises. No "it worked on my machine."

This is the kind of infrastructure that makes teams move faster and sleep better. Prometheus and Grafana are next.

🔗 https://lnkd.in/e654B7F7
A self-hosted endpoint monitoring platform built end-to-end with Docker, GitHub Actions, Kubernetes, ECR and ArgoCD. Open source. Real pipeline. No fluff.

#DevOps #GitOps #ArgoCD #Kubernetes #CICD #AWS #CloudEngineering #LearningInPublic
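For anyone fighting the same "forbidden" errors: the fix has roughly this shape. A minimal sketch only; the names, namespace, and verbs are assumptions based on what refreshing a registry secret typically needs, not the exact manifests from the repo.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: ecr-refresher
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: manage-ecr-secret
  namespace: default
rules:
  # only the verbs the CronJob needs, on secrets only: least privilege
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ecr-refresher-binding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: ecr-refresher
    namespace: default
roleRef:
  kind: Role
  name: manage-ecr-secret
  apiGroup: rbac.authorization.k8s.io
---
# the CronJob then runs as that ServiceAccount via
#   spec.jobTemplate.spec.template.spec.serviceAccountName: ecr-refresher
# and its script recreates the docker-registry secret with a fresh ECR token.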
🚀 Day 42 of #90DaysOfDevOps – Understanding GitHub Actions Runners

Today I explored how GitHub Actions executes CI/CD jobs using runners. Every workflow job needs a machine to run on, and GitHub supports two types:
• GitHub-hosted runners (managed by GitHub)
• Self-hosted runners (managed by you)

🔹 What I implemented today
✅ Created a workflow with 3 parallel jobs running on different operating systems:
ubuntu-latest
windows-latest
macos-latest

Each job printed:
- OS name
- Runner hostname
- Current user running the job

This helped me understand how GitHub dynamically provisions different environments for CI pipelines. (A matrix sketch of this workflow follows the post.)

🔹 Explored pre-installed tools
On the ubuntu-latest runner, I checked the versions of Docker, Python, Node.js and Git. One key learning: GitHub-hosted runners already include many developer tools, which speeds up CI pipelines because we don't need to install everything manually.

🔹 Self-hosted runner setup (the fun part)
I registered a self-hosted runner to my GitHub repository and configured it on my machine. Then I created a workflow that:
- Printed the hostname of my machine
- Displayed the working directory
- Created a file during the workflow run

And yes… the file actually appeared on my machine after the workflow finished. 🤯 This means the CI pipeline was running directly on my own hardware instead of GitHub's infrastructure.

🔹 Added runner labels
I also added a custom label to my runner and updated the workflow to target it. This is very useful when managing multiple self-hosted runners in larger environments.

🔹 Key Takeaways

• GitHub-hosted runners
- Managed by GitHub
- Free with usage limits
- Many tools pre-installed
- Great for quick CI jobs and testing across OS environments

• Self-hosted runners
- Managed by you or your team
- Run on your own machine or cloud VM
- Full control over installed tools and environment
- Useful for heavy builds, custom setups, or long-running jobs

Running CI pipelines on my own machine was definitely a cool DevOps moment today. 🔧

#90DaysOfDevOps #DevOpsKaJosh #TrainWithShubham
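The 3-OS experiment fits in a few lines with a build matrix. A minimal sketch (note shell: bash, since bash is pre-installed on all three hosted images, including Windows):

name: runner-info
on: push

jobs:
  probe:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    runs-on: ${{ matrix.os }}   # one parallel job per OS
    steps:
      - name: Print runner details
        shell: bash
        run: |
          echo "OS:       $RUNNER_OS"
          echo "Hostname: $(hostname)"
          echo "User:     $(whoami)"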
3 months ago, I had no idea what GitOps even meant. 🤷 Today, I'm running production deployments with ArgoCD on Kubernetes, and I'll never go back to manual kubectl apply again. Here's everything I learned 👇

☸️ What is GitOps?
Your Git repo IS the source of truth. If it's in Git → it's in your cluster. Period. No more "works on my machine" deployments.

🔁 How ArgoCD changed my workflow:
→ Push code to Git
→ ArgoCD detects the change automatically
→ Syncs it to the Kubernetes cluster
→ Done. Zero manual steps.

🔥 What I actually built:
✅ Set up ArgoCD on a K8s cluster from scratch
✅ Connected it to my GitHub repo
✅ Configured auto-sync so every push = live deployment
✅ Added health checks so broken deploys never go live
✅ Used Helm charts for cleaner app management
(The heart of that setup is a single Application manifest; a sketch follows this post.)

💡 The biggest mindset shift?
Stopped thinking "I'll deploy from my terminal."
Started thinking "I'll deploy by pushing to Git."

That one shift makes your deployments:
📌 Auditable: every change is a Git commit
📌 Reversible: bad deploy? git revert and done
📌 Automated: no human in the loop

🚀 If you're learning DevOps and haven't tried ArgoCD yet, start today. It's genuinely one of the most satisfying tools to set up.

GitHub: https://lnkd.in/dHCw_eWR

#Kubernetes #ArgoCD #GitOps #DevOps #CloudEngineering #LearningInPublic #K8s #CI_CD
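What that auto-sync setup looks like as an ArgoCD Application. A minimal sketch; the repo URL, path, and namespaces are placeholders, not the author's actual manifest:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/<you>/<repo>.git   # Git is the source of truth
    targetRevision: main
    path: k8s/                                     # manifests (or a Helm chart) live here
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # delete cluster resources that were removed from Git
      selfHeal: true   # revert manual drift back to what Git says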
Excited to share my GitHub Actions CI/CD project!

I've successfully built a complete end-to-end CI/CD pipeline using GitHub Actions, fully automated and powered by GitHub-hosted runners.

🔧 What I implemented:
✅ Automated build & test workflow
✅ Continuous Integration on every push
✅ Docker image build & optimization
✅ Secure image push to a registry
✅ Continuous Deployment setup
✅ Environment-based configuration
✅ Fully running on GitHub's own runners (no external setup needed)
(A sketch of the build-and-push stage follows this post.)

💡 Key Highlights:
- Eliminated manual deployment steps
- Faster, more reliable delivery pipeline
- Clean and scalable workflow design
- Production-ready CI/CD structure

🛠️ Tech Stack:
- GitHub Actions
- Docker
- Node.js / MERN Stack
- YAML workflows

This project helped me understand how modern companies are shifting towards GitHub Actions for CI/CD automation, replacing traditional tools with a more integrated and developer-friendly approach.

If you're learning DevOps, I highly recommend getting hands-on with GitHub Actions: it's powerful and industry-relevant.

#DevOps #GitHubActions #CICD #Docker #Automation #Cloud #MERN #LearningInPublic #90DaysDevOps #TrainWithShubham
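The "secure image push" piece usually boils down to credentials in repo secrets plus --password-stdin. A minimal sketch of that one job from the jobs: section (the registry, image name, and secret names are assumptions, not the project's exact workflow):

  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to the registry
        # --password-stdin keeps the token out of CLI args; Actions masks secrets in logs
        run: echo "${{ secrets.REGISTRY_TOKEN }}" | docker login -u "${{ secrets.REGISTRY_USER }}" --password-stdin
      - name: Build and push
        run: |
          docker build -t ${{ secrets.REGISTRY_USER }}/my-app:${{ github.sha }} .
          docker push ${{ secrets.REGISTRY_USER }}/my-app:${{ github.sha }}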
Day 14 – Running Your First Docker Container

Day 14 of my 30-Day DevOps learning journey. Today I focused on running my first Docker container and understanding how applications run inside containers.

What is a Docker container?
A Docker container is a lightweight, portable environment that includes:
- Application code
- Runtime
- System libraries
- Dependencies
This ensures the application runs the same on any system.

Steps to run your first container

1. Pull an image from Docker Hub (the public registry where Docker images are stored):
docker pull nginx

2. Run the container:
docker run -d -p 80:80 nginx

Explanation:
-d → run the container in the background
-p 80:80 → map the container port to a host port
Now the application runs inside a container.

Check running containers:
docker ps
This command lists all running containers.

Stop a container:
docker stop <container-id>

Why containers matter in DevOps
• Faster deployments
• Consistent environments
• Easy scaling
• Works perfectly with CI/CD pipelines

Containers make it easier to move applications from development → testing → production without compatibility issues.

Tomorrow: Docker Images & Dockerfile – how containers are built.

Do follow me for more content on DevOps. Please check out my GitHub repo and share your suggestions; I have created some basic projects: https://lnkd.in/gXTYxXXm

A special thanks to Shubham Londhe & Abhishek Veeramalla for the guidance and the tutorials.

#DevOps #Docker #Containers #CICD #CloudComputing #AWS #Jenkins #Kubernetes #Linux #DevOpsEngineer #TechLearning
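One small next step from here: serving your own page instead of the nginx default. A minimal sketch, assuming an html/ folder in the current directory (/usr/share/nginx/html is the official image's content root):

mkdir -p html && echo "<h1>Hello from my container</h1>" > html/index.html

# bind-mount the folder read-only over nginx's content root
docker run -d --name web -p 80:80 \
  -v "$(pwd)/html:/usr/share/nginx/html:ro" nginx

curl http://localhost/    # should print the custom heading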