Are we quietly trading one set of Kubernetes problems for another?

GitOps with Git gave us something genuinely valuable: history, blame, diffs, and PR-based review as first-class delivery primitives. Now OCI is taking over as the transport layer for Kubernetes config, and for good reason: content-addressability, immutability, and edge/airgap compatibility are real advantages that Git was never designed to provide.

Instead of choosing between them, I built a way to have both: Kokumi.

Kokumi is a Kubernetes config delivery tool built on the premise that you shouldn't have to choose. It models delivery as three distinct concerns: what should be built, what was built, and what is currently running. Rendering and deployment are fully decoupled, which means:

🔍 Inspect & edit: full manifest review in a built-in UI before any cluster sees it.
📝 Diff & approve: review a structured diff of every change before it's applied.
⏪ Instant rollback: repoint to any previous artifact. Already built, already there.

This is early, and I want to pressure-test the model with people who've actually felt this pain. If you're working through OCI delivery architecture: where does the model break for your use case? What is missing?

👉 Repo link, would love your feedback: https://lnkd.in/d-p_mhjS

#Kubernetes #GitOps #CloudNative #DevOps #PlatformEngineering
🚀 End-to-End CI/CD Pipeline with Jenkins + Docker

I recently built a complete CI/CD pipeline to automate the process from code commit to deployment. Sharing my learnings along with the pipeline code 👇

🔹 What This Pipeline Does
- Pulls code from GitHub
- Builds a Docker image
- Pushes the image to Docker Hub
- Deploys the container automatically
- Provides success/failure feedback

🔹 Jenkins Pipeline Code 👇

pipeline {
    agent any
    environment {
        IMAGE_NAME = "your-dockerhub-username/my-app"
        TAG = "${BUILD_NUMBER}"
    }
    stages {
        stage('Checkout Code') {
            steps {
                checkout scm
            }
        }
        stage('Build Docker Image') {
            steps {
                sh 'docker build -t $IMAGE_NAME:$TAG .'
            }
        }
        stage('Login to Docker Hub') {
            steps {
                withCredentials([usernamePassword(
                    credentialsId: 'docker-creds',
                    usernameVariable: 'DOCKER_USERNAME',
                    passwordVariable: 'DOCKER_PASSWORD'
                )]) {
                    sh 'echo $DOCKER_PASSWORD | docker login -u $DOCKER_USERNAME --password-stdin'
                }
            }
        }
        stage('Push Image') {
            steps {
                sh 'docker push $IMAGE_NAME:$TAG'
            }
        }
        stage('Deploy Container') {
            steps {
                sh '''
                    docker stop my-container || true
                    docker rm my-container || true
                    docker run -d -p 80:80 --name my-container $IMAGE_NAME:$TAG
                '''
            }
        }
    }
    post {
        success {
            echo '✅ CI/CD Pipeline executed successfully!'
        }
        failure {
            echo '❌ Pipeline failed!'
        }
    }
}

🔹 Pipeline Flow 🔄
👉 Code → Build → Push → Deploy → Live 🚀 (no test stage yet; a test stage between Build and Push would be the next addition)

🔹 Key Learnings 💡
✅ CI/CD automates the entire workflow
✅ Docker ensures consistency across environments
✅ Jenkins pipelines make deployments reliable
✅ Version tagging is important (avoid latest)

🔹 Pro Tip ⚡
Always use secure credentials (withCredentials) and avoid hardcoding sensitive data in pipelines.

🔹 Outcome 🚀
✔️ Automated deployments
✔️ Faster delivery
✔️ Reduced manual effort
✔️ Production-ready workflow

#DevOps #CICD #Jenkins #Docker #Automation #Cloud #LearningJourney
After working with tools like GitHub Actions and Jenkins, I was able to build CI/CD pipelines, automate deployments, and interact with Kubernetes clusters efficiently. A question arises: if CI/CD is already handling deployments, why do we need GitOps tools like Argo CD?

GitOps is a modern approach to continuous delivery (CD) for Kubernetes and cloud-native applications. It uses Git repositories as the single source of truth for declaring the desired state of your system. Instead of manually deploying or configuring applications, everything (apps, infrastructure, policies) is written as declarative code (YAML/JSON) and stored in Git. A GitOps operator (like Argo CD) continuously ensures the cluster matches what's in Git.

** The DVAO Principles

1. Declarative
The desired state of the system is described declaratively (e.g., YAML manifests). Example: a Deployment manifest defines how many replicas and which container image (see the sketch after this post).

2. Versioned & Immutable
Desired state is stored in Git (or another versioned system). Every change is auditable and traceable.

3. Automated
A controller (like Argo CD) continuously watches Git and the cluster. If there's drift (a difference), it automatically applies changes.

4. Observable
System health and deployment status are visible. Git history + dashboards + alerts provide observability.

Special thanks to Shubham Londhe for explaining GitOps so clearly! 🚀🚀

#Cloud #Devops #GitOps #ArgoCD #CICD #Kubernetes #AWS
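For illustration, here is a minimal sketch of the kind of declarative manifest principle 1 describes (the name, image, and replica count are placeholder values, not taken from the post):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # placeholder name
spec:
  replicas: 3                   # desired state: three copies, declared rather than scripted
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0   # placeholder image; the operator reconciles the cluster to this
          ports:
            - containerPort: 8080

Commit this file to Git and the GitOps operator keeps the cluster matching it; change replicas in Git and the change rolls out automatically.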
GitOps keeps coming up in Kubernetes discussions, job descriptions, and architecture reviews. But most explanations are vague or skip the important parts. Here is what it actually is:

GitOps is a way of managing deployments where Git is the single source of truth. Every change goes through a Git commit. An agent inside your cluster watches the repo and applies changes automatically.

The key shift: instead of CI/CD pushing into your cluster, the cluster pulls from Git. This means:

→ No cluster credentials in your CI/CD pipeline
→ Every deployment has an audit trail through git log
→ Rollback = git revert
→ Someone manually changes something? The agent reverts it
→ No more configuration drift

By 2026, 91% of cloud-native organisations have adopted GitOps (CNCF Annual Survey). It is no longer optional for Kubernetes teams at scale.

I wrote a plain-language explanation covering how it works, push vs pull models, GitOps vs CI/CD, when to use it, and which tools to start with. A minimal example of the pull model follows this post.

Full post: https://lnkd.in/gtMsA4jU

#GitOps #Kubernetes #DevOps #CICD #ArgoCD #Flux #CloudNative #PlatformEngineering
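To make the pull model concrete, here is a minimal Argo CD Application sketch (the repo URL, path, and names are placeholders, and Argo CD is one illustrative tool choice, not the only way to do this):

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/k8s-config.git   # placeholder repo; Git is the source of truth
    targetRevision: main
    path: apps/my-app
  destination:
    server: https://kubernetes.default.svc   # the in-cluster agent pulls; CI never needs cluster credentials
    namespace: my-app
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert manual changes, eliminating drift

With selfHeal enabled, a manual kubectl edit is detected as drift and reverted, and a rollback is just a git revert that the agent then applies.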
Docker vs Kubernetes: What is the difference?

Many beginners in DevOps get confused between Docker and Kubernetes. Here is a clear explanation with definitions and basic commands.

Docker

Definition: Docker is a containerization platform. It is used to build, package and run applications inside containers. A container includes application code, libraries and dependencies, so it runs the same everywhere.

Why we use Docker: Docker removes environment issues. It helps developers run applications easily on any system. It is mainly used for development, testing and small deployments.

Basic Docker Commands:
docker build -t appname .
docker run -d -p 8080:80 appname
docker ps
docker stop container_id
docker images

Example: If you have a Java or Node application, Docker will package it with all dependencies and run it in a container on your system.

Kubernetes

Definition: Kubernetes is a container orchestration tool. It is used to manage, scale and automate deployment of containerized applications across multiple servers.

Why we use Kubernetes: When application traffic increases, one container is not enough. Kubernetes helps to run multiple containers, balance load, restart failed containers and scale automatically.

Basic Kubernetes Commands:
kubectl get pods
kubectl get nodes
kubectl apply -f deployment.yaml
kubectl describe pod pod_name
kubectl delete pod pod_name

Example: If your application is running on many servers, Kubernetes manages all containers, distributes traffic and ensures high availability.

Key Difference: Docker works on a single system and focuses on creating containers. Kubernetes works across multiple systems and focuses on managing containers.

Simple Understanding: Docker is used to create containers. Kubernetes is used to manage containers at large scale.

Conclusion: First learn Docker, because it is the base of containerization. Then learn Kubernetes to handle real production workloads and scaling. This is one of the most important topics for DevOps interviews and real-time projects.

#Docker #Kubernetes #DevOps #AWS #Cloud #Containers
A more important distinction is not just "single vs multi system", but responsibility: Docker solves packaging and isolation, while Kubernetes solves scheduling, service discovery, self-healing, and scaling. In production, they act on different layers of the stack.
🚀 DevOps Journey — Day 5: The GitOps Leap 🐙

Yesterday I mastered the HPA control loop. Today, I removed myself from the deployment equation. I moved my laboratory from traditional push-based CI/CD to a declarative GitOps model using ArgoCD.

🔬 What changed?
Until today, my GitHub Actions pipeline was responsible for "shouting" orders to the cluster (helm upgrade --install). If the connection failed or the runner had issues, the deployment broke. Now, the cluster has its own "brain".

🧠 How it works now
CI Phase: GitHub Actions only builds the Docker image and pushes it to GHCR (versioned by SHA).
CD Phase (the GitOps way): ArgoCD monitors my Helm charts in Git.
Reconciliation: If I change a single line in Git (like increasing replicas), ArgoCD detects the drift and pulls the changes into the cluster.

🛡️ The "Self-Healing" Test
I decided to play "Chaos Engineering": I manually deleted a Pod and a Service using kubectl. The result? In less than 5 seconds, ArgoCD detected the state didn't match Git and recreated everything automatically. The cluster is now self-healing. It doesn't care what I do manually; it only obeys the source of truth: Git.

🛠️ The "WSL2 vs Networking" Battle
It wasn't all easy. Running ArgoCD inside a k3d cluster on WSL2 brought some real-world troubleshooting:
MTU issues: Network packets were too large for the WSL tunnel, causing timeouts with GitHub.
Liveness probes: In a local environment, ArgoCD's repo-server needed more "patience" (timeouts increased from 1s to 10s) to handle the load. A sketch of that tuning follows this post.
Lesson: In production, networking and resource constraints are your real enemies. If you don't tune your probes and MTU, your "automated" system becomes a "restarting" system.

🧪 What the lab now demonstrates:
✔ GitOps workflow: decoupled CI and CD.
✔ Drift detection: absolute consistency between Git and production.
✔ Manual override protection: the cluster reverts unauthorized changes.
✔ Infrastructure as Code (IaC): everything, from the HPA to the ArgoCD app, is defined as code.

This isn't just a deployment anymore. It's an operating model.

🧭 Next stop: making the entire cluster reproducible with Terraform.

Building production-style systems in public. From "Push" to "Pull". One reconciliation loop at a time.

https://lnkd.in/dPdqK99h

#DevOps #Kubernetes #GitOps #ArgoCD #CloudEngineering #PlatformEngineering #SRE #BuildingInPublic #WSL2
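For reference, the probe tuning described above would look roughly like this as a strategic-merge patch on the stock argocd-repo-server Deployment (a sketch assuming a default Argo CD install; the values mirror the post's 1s-to-10s change and are not tied to a specific chart version):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-repo-server
  namespace: argocd
spec:
  template:
    spec:
      containers:
        - name: argocd-repo-server
          livenessProbe:
            timeoutSeconds: 10       # the default 1s was too tight under k3d-on-WSL2 load
            initialDelaySeconds: 30  # give the repo-server time to start before probing

This can be applied with kubectl patch, or, more in the GitOps spirit, committed as a Kustomize patch so the tuning itself lives in Git.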
If you are working with Git in DevOps, you need to understand Git LFS. Here is why 👇

There are use cases where you might need to push a large file to Git. For example:
- ML models
- Datasets

But Git was designed for text files, not binaries. Every change stores the entire file again, which bloats your repository fast. GitHub also has a hard limit of 100MB per file. And large files make git clone slow and git pull heavy, which directly impacts your CI pipeline performance.

This is exactly where Git Large File Storage (LFS) helps you. It is an open source Git extension for versioning large files. Instead of storing large files directly in your repo, Git LFS replaces them with a small pointer file. That pointer file contains the LFS version, a SHA-256 hash of the actual file, and the file size. The real file is stored separately in Git LFS storage, not inside your Git repository.

So where does it actually store the file? If you are using GitHub, the large file is stored in GitHub's managed storage, separate from your Git objects. If you're self-hosting, it can be stored in your own LFS server or object storage like S3.

Tools like DVC use a similar workflow for versioning data in MLOps use cases.

---

19000+ engineers read our DevOps/MLOps newsletter. 𝗦𝘂𝗯𝘀𝗰𝗿𝗶𝗯𝗲 𝗵𝗲𝗿𝗲 (𝗶𝘁’𝘀 𝗳𝗿𝗲𝗲): https://lnkd.in/guFma5_V

#mlops
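For illustration, a pointer file committed in place of the real object has this shape (the digest and size below are placeholders, not real values):

version https://git-lfs.github.com/spec/v1
oid sha256:<64-character-hex-digest-of-the-file>
size 104857600

In practice you enable this with git lfs install once per machine, then git lfs track "*.bin" (or whatever pattern fits) per repo, which records the pattern in .gitattributes so matching files are stored as pointers from then on.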
Git works great until you introduce data. ML models and datasets quickly expose its limits:
⚙️ large files
⚙️ slow clones
⚙️ and bloated repos.

Git LFS fixes this by decoupling storage from version control. Simple concept, but increasingly critical in modern DevOps and MLOps workflows.

Came across a great breakdown of this, shared in the post above - worth a quick read if you're working with CI/CD or ML pipelines.
🚀 Navigating the vast API landscape just got easier! The `public-apis/public-apis` GitHub directory is a game-changer for developers and DevOps engineers alike.

My latest blog post dives deep into this community-curated goldmine:
* Discovering APIs effortlessly 🕵️♀️
* Automating validation with CI/CD ⚙️
* Comparing it with alternatives & best practices for contribution!

Unlock new possibilities for your projects. Check it out! 👇

Read here: https://lnkd.in/gqc2WNBx

#DevOps #APIs #GitHub #OpenSource