I just wrapped up an Introduction to Kubernetes course from The Linux Foundation and decided to reinforce it with a hands-on project. Instead of stopping at theory, I built and deployed a simple web application end-to-end using Kubernetes.

Here's what I implemented:
🔹 Containerized my app using Docker
🔹 Deployed it on a local Kubernetes cluster with Minikube
🔹 Created a Deployment to manage replicas and enable self-healing
🔹 Exposed the application using a Service (NodePort)
🔹 Externalized configuration using ConfigMaps and Secrets
🔹 Implemented liveness and readiness probes for reliability
🔹 Practiced scaling and rolling updates

What stood out to me was how Kubernetes shifts you from manually running containers to defining a desired state and letting the system enforce it. Watching Pods automatically restart and scale based on configuration made that concept very real.

GitHub repo: https://lnkd.in/dTPTKjSV

Next, I'm continuing with the Kubernetes and Cloud Native Essentials course to deepen my understanding of cloud-native systems and how modern applications are designed and operated.

#Kubernetes #DevOps #CloudComputing #Docker #LearningJourney
Kubernetes Hands-On Project with Docker and Minikube
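For anyone who wants to see the moving parts, here is a minimal sketch of a Deployment plus NodePort Service with both probes wired in. The names, image, port, and probe paths are illustrative placeholders, not the actual manifests from the repo:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                        # desired state: Kubernetes keeps 3 Pods alive
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: web-app:1.0         # placeholder image
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: web-app-config # externalized configuration
          livenessProbe:             # restart the container if this fails
            httpGet:
              path: /healthz
              port: 8080
          readinessProbe:            # gate Service traffic until this passes
            httpGet:
              path: /ready
              port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  type: NodePort                     # expose outside the cluster via a node port
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 8080
```

Apply it with `kubectl apply -f app.yaml`, delete a Pod, and watch the Deployment recreate it: that is the desired-state loop in action.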
More Relevant Posts
🚀 Just built and deployed a full CI/CD pipeline from scratch!

Here's what I implemented:
✅ Launched an AWS EC2 instance (Ubuntu) and configured Security Groups
✅ Installed and configured Jenkins for automated builds
✅ Containerized a Node.js app using Docker
✅ Connected GitHub Webhooks — every git push triggers an automatic build & deploy
✅ Built a 4-stage Jenkinsfile: Checkout → Build → Deploy → Health Check

The best part? One git push is all it takes to go from code to live app — automatically. That's the power of DevOps! 💡

This project gave me hands-on experience with real-world tools used in production environments every day.

GitHub: github.com/arunak-11

#DevOps #AWS #Jenkins #Docker #CICD #CloudComputing #Linux #GitHub #LearningByDoing
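Here's a minimal sketch of what such a 4-stage Jenkinsfile can look like; the container name, port, and health-check URL are illustrative placeholders rather than my exact setup:

```groovy
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }   // pull the revision that triggered the build
        }
        stage('Build') {
            steps { sh 'docker build -t node-app:${BUILD_NUMBER} .' }
        }
        stage('Deploy') {
            steps {
                sh 'docker rm -f node-app || true'   // replace the previous release
                sh 'docker run -d --name node-app -p 3000:3000 node-app:${BUILD_NUMBER}'
            }
        }
        stage('Health Check') {
            steps { sh 'sleep 5 && curl -f http://localhost:3000' }  // fail the build if the app is down
        }
    }
}
```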
Developer pushed broken code to production… 🚨
…and nothing went down 😳

❌ No outage
❌ No panic
❌ No rollback scramble

Because the deployment didn't trust the build — it verified it first. 🛡️

A simple rule changed everything: "If it's not healthy, it doesn't go live."

Now every release runs in isolation, passes health checks, and only then gets promoted to production. Broken builds get blocked before they ever reach users. This turned CI/CD into a self-protecting system that quietly prevents failures instead of reacting to them. ⚙️

Full breakdown (Docker + GitHub Actions setup) here 👇
https://lnkd.in/dy_gw6T5

#DevOps #CI_CD #Docker #GitHubActions #CloudComputing #SRE #SiteReliabilityEngineering #PlatformEngineering #Microservices #SoftwareEngineering #Automation #SystemDesign #CloudNative #Linux #AWS #TechLeadership #DevOpsEngineering #ZeroDowntimeDeployment
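Here is a hedged sketch of what that gate can look like as a workflow, assuming a containerized app that serves a `/health` endpoint on port 8080 (both are illustrative; the linked breakdown shows the real setup):

```yaml
name: verify-then-promote
on: push

jobs:
  verify:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build candidate image
        run: docker build -t app:candidate .
      - name: Run candidate in isolation
        run: docker run -d --name candidate -p 8080:8080 app:candidate
      - name: Health check gate
        # Retry while the app boots; a non-2xx response fails the job,
        # which blocks everything after it.
        run: curl --fail --retry 10 --retry-connrefused --retry-delay 2 http://localhost:8080/health
      - name: Promote
        run: echo "Healthy build. Tag and push to the registry here."
```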
github[.]com/kubernetes/kubernetes vs github[.]com/kubernetes/minikube

Why do I need to use Minikube if I can set up Kubernetes on a single node? So, why does Minikube even exist?

💻 1. Cross-Platform Reality: Core Kubernetes components are deeply tied to the Linux kernel. If you are coding on a Mac or Windows machine, you can't natively run K8s. Minikube automatically handles the virtualization for you (via Docker, Hyperkit, WSL2, etc.) under the hood.

🗑️ 2. The "Disposable" Sandbox: Developers break clusters. If you manually install K8s on a node, tearing it down means purging packages, resetting network rules, and wiping `etcd` data. With Minikube? Just type `minikube delete` and your host machine is spotless. `minikube start` gets you a fresh cluster in 60 seconds (see the sketch below).

⚙️ 3. Zero-Friction Configuration: Vanilla K8s requires you to manually configure container runtimes, disable swap space, and install a CNI (like Calico or Flannel) just so pods can talk to each other. Minikube abstracts all of this into a single command.

🛠️ 4. Built-in Developer Tools: A raw K8s cluster is incredibly barebones. Minikube includes an "addons" system tailored for devs. Need an Ingress controller? `minikube addons enable ingress`. Need to simulate a cloud LoadBalancer? `minikube tunnel`.

🔄 5. Instant Version Switching: Need to test if your app works on an older K8s version? Just run `minikube start --kubernetes-version=v1.34.0`. Doing this on a bare-metal node requires manually downgrading binary packages and configs.

The Verdict:
👉 Building a homelab, edge server, or studying for the CKA exam? Build a single-node cluster from scratch.
👉 Writing code, testing YAMLs, or working on Mac/Windows? Stick to Minikube (or `kind`, or Docker Desktop).

#Kubernetes #DevOps #Minikube #CloudNative #SoftwareEngineering #PlatformEngineering #TechTips
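The disposable-sandbox loop from point 2, start to finish, using only commands mentioned in this post (the version pin is just an example):

```bash
minikube start --kubernetes-version=v1.34.0   # fresh cluster on a pinned K8s version
minikube addons enable ingress                # one-command Ingress controller
kubectl get nodes                             # single node, Ready, usable immediately
minikube delete                               # tear it all down; the host stays clean
```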
Ever wondered what actually happens behind the scenes when you run a simple command like docker run? 🤔

So let's discuss: How Docker Engine Interacts with the Container Runtime (Explained Simply)

1. Docker CLI → Docker Engine
When you execute a command like docker run, it is sent to the Docker Engine via the Docker CLI. The request is received by the Docker Daemon (dockerd), which acts as the brain of Docker.

2. Docker Daemon Handles High-Level Tasks
The daemon takes care of things like:
✔️ Image management
✔️ Network setup
✔️ Volume handling
Once everything is prepared, it forwards the execution request.

3. Docker Engine → containerd
Docker uses containerd as its high-level runtime. Communication happens via gRPC, and containerd is responsible for managing the entire container lifecycle — from pulling images to monitoring container health.

4. containerd → Shim Process
For each container, containerd creates a shim process. This shim ensures:
✔️ The container keeps running even if containerd restarts
✔️ STDIN/STDOUT streams remain active

5. Shim → runc (Low-Level Runtime)
The shim then invokes runc, which is the low-level runtime. runc is OCI-compliant and directly interacts with the Linux kernel.

6. runc → Linux Kernel
Finally, runc creates:
✔️ Namespaces (for isolation)
✔️ Cgroups (for resource management)
And your container is up and running 🎉

💡 End-to-End Flow:
User Command → Docker CLI → Docker Daemon → containerd → Shim → runc → Linux Kernel

#DevOps #Linux #Docker #containerd #runc
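You can observe most of this chain on a live Linux Docker host. A quick, illustrative check (exact process names vary slightly across versions):

```bash
docker run -d --name demo nginx               # start any container
ps -ef | grep -E 'dockerd|containerd-shim'    # daemon and per-container shim run as separate processes
docker info --format '{{.DefaultRuntime}}'    # prints the OCI runtime in use, typically "runc"
docker rm -f demo                             # clean up
```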
HTTPS setup done… but Nginx wouldn't start.

Day 15 of #100DaysOfDevOps ✅

Today's task was to configure Nginx with SSL/TLS using pre-provided certificates. The setup looked straightforward: place the certs in the right directories and configure the server to listen on port 443 with HTTP/2. But Nginx kept failing to start.

The issue? A small syntax mistake in the config file. Running nginx -t quickly pointed out the error and saved a lot of debugging time.

Key takeaway: A missing closing } in the http block causes Nginx to fail at startup with a configuration error. Always run nginx -t first; it catches syntax errors before they take down the service.

Day 15 done. 85 to go 🚀

GitHub 👇
https://lnkd.in/dk8Frue7

#DevOps #Linux #Nginx #SSL #100DaysOfDevOps #LearningInPublic #SRE #DevOpsEngineer
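For context, the server block in question looks roughly like this (cert paths and server name are illustrative, not the lab's actual values):

```nginx
server {
    listen 443 ssl http2;                           # newer nginx prefers a separate "http2 on;" directive
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/server.crt;  # pre-provided certificate
    ssl_certificate_key /etc/nginx/ssl/server.key;

    location / {
        root  /usr/share/nginx/html;
        index index.html;
    }
}
```

Run `nginx -t` after every edit; it validates the whole file and points at the offending line before you reload.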
From "Just Install Helm CLI" to Understanding Helm in Production Installed Helm. Homarr Chart deployed. Something was wrong but I just didn't know what yet. That's the trap. Helm looks simple until production shows you it isn't. Coming from Flux + Kustomize, I kept reaching for explicit manifests that weren't there. Then I stopped using Helm CLI as my primary interface and switched to HelmRepository + HelmRelease and suddenly it clicked to me. Helm isn't a YAML renderer. It's a parameterized abstraction layer over Kubernetes primitives. Kustomize gives you static manifests you can reason about line by line. Helm gives you configurable, reusable code driven by values.yaml. Both valid. Very different approaches. In GitOps, declarative HelmRelease resources should be the default, not helm install from a terminal. I have in place a tool: - that stays there forever - can be used for deployment of other apps - speed up the deploymeynt process The achievement is that I have now accessbile my second website Homarr. :-) Sometimes moving fast means stopping, reading the docs and using the tool the way it was designed. #Kubernetes #Helm #Homarr #DevOps #Linux #GitOps #Yaml
🚀 Just published my blog on building a complete CI/CD pipeline with Jenkins and a Node.js app! (Part 1)

If you've ever wondered how companies ship code multiple times a day without breaking things — this is exactly how.

Here's what the blog covers:
✅ What SDLC is and where CI/CD fits in
✅ What Jenkins is and what problem it actually solves
✅ Building a real pipeline — Checkout → Build → Push to AWS ECR
✅ Handling secrets the right way using Jenkins Credentials
✅ Common issues you'll actually face and how to fix them

Whether you're just getting started with DevOps or looking to set up your first pipeline, this one's for you.

Interested readers can find the full blog at the link below: 👇
🔗 https://lnkd.in/g-zbSDwi

#DevOps #Jenkins #CICD #Docker #AWS #ECR #CloudComputing #Linux #Medium
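As a taste of the ECR stage the blog walks through, here is a hedged sketch of one common way to wire it up; the registry address, image name, and credentials ID are placeholders, and the blog's actual approach may differ:

```groovy
pipeline {
    agent any
    environment {
        ECR_REGISTRY = '123456789012.dkr.ecr.us-east-1.amazonaws.com' // placeholder account/region
        IMAGE        = 'node-app'                                     // placeholder image name
    }
    stages {
        stage('Push to ECR') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'aws-keys', // hypothetical credentials ID
                        usernameVariable: 'AWS_ACCESS_KEY_ID',
                        passwordVariable: 'AWS_SECRET_ACCESS_KEY')]) {
                    sh '''
                        # Authenticate Docker against ECR, then tag and push
                        aws ecr get-login-password --region us-east-1 \
                          | docker login --username AWS --password-stdin "$ECR_REGISTRY"
                        docker tag "$IMAGE:latest" "$ECR_REGISTRY/$IMAGE:latest"
                        docker push "$ECR_REGISTRY/$IMAGE:latest"
                    '''
                }
            }
        }
    }
}
```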
Day 39 of my DevOps Journey 💻

Preserving container changes — created a Docker image from a running container for backup and reuse 🐳

Task: Create a Docker Image from a Container

What I learned today:
• How to convert a running container into a Docker image
• Usage of the `docker commit` command
• The difference between container state and image layers
• When to use commit vs a Dockerfile
• The importance of backing up container changes

What I built / practiced:
• Connected to Application Server 3 (`stapp03`)
• Verified the running container `ubuntu_latest`
• Created an image using `docker commit ubuntu_latest media:xfusion`
• Verified image creation using `docker images`
• Confirmed the image is available for reuse

Challenges:
• Understanding when to use `docker commit`
• Differentiating between building an image and committing one
• Ensuring the correct container is used

Fix / Learning:
• Learned that `docker commit` captures the current container state
• Understood it's useful for quick backups and testing
• Realized a Dockerfile is better for production workflows
• Gained clarity on image creation approaches

Key Takeaway: `docker commit` is powerful for quick snapshots — but for production, reproducibility always wins with Dockerfiles.

This felt like capturing a live system state into a reusable image 🚀

Do you prefer using `docker commit` for quick saves, or do you always rely on Dockerfiles?

#Day39 #DevOps #Docker #Containerization #Linux #Automation #CloudComputing #AWS #DevOpsJourney #LearningInPublic #100DaysOfDevOps
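The whole snapshot sequence fits in three commands (names taken from the steps above):

```bash
docker ps --filter name=ubuntu_latest      # confirm the right container is running
docker commit ubuntu_latest media:xfusion  # capture its current state as an image
docker images media                        # verify the new media:xfusion image exists
```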
🐳 Docker Essential Training — Engine v29 Hands-on Lab

I just published my first guide from the Essential Training Series — a practical, hands-on Docker lab that I personally tested command by command before sharing.

📋 What's inside:
• 8 exercises covering the core Docker fundamentals
• Images, containers, networking, volumes
• Multi-stage Dockerfiles & Docker Compose
• Every issue I faced during testing is already fixed

🔧 Tested on:
• Docker Engine v29 (latest — March 2026)
• WSL2 on Windows — HP i3, 20GB RAM
• Real commands, real errors, real fixes

This is not a copy-paste from the docs. Every exercise was validated hands-on before being shared.

📎 PDF attached — free to download and use.
📁 Exercise files on GitHub: https://lnkd.in/dBpAXt7h

🚀 More coming soon:
✅ Docker (this one)
✅ Docker Compose
✅ K3s
⏳ Helm
⏳ GitHub Actions CI/CD
⏳ ArgoCD (GitOps)
⏳ Prometheus & Grafana
⏳ Loki + OpenTelemetry

If this is useful — share it with someone learning DevOps 🙌
If it is not — drop a comment or send me your feedback. I am still improving it.

#Docker #DevOps #Kubernetes #CloudNative #Linux #LearningInPublic #OpenSource
Load balancer was working… but still showing the Nginx error page.

Day 16 of #100DaysOfDevOps ✅

Today's task was to configure Nginx as a load balancer to distribute traffic across three app servers. Everything looked correct: Nginx was running and the config was applied, but requests were not reaching the backend.

The issue? The upstream block was pointing to the wrong port. Apache on the backend servers was running on port 6400, but Nginx was forwarding traffic to the default port 80. Once the correct port was configured, everything worked as expected.

Key takeaway: In load balancing, even a small mismatch between frontend and backend configuration can break the entire flow.

Day 16 complete. 84 to go 🚀

GitHub 👇
https://lnkd.in/dk8Frue7

#DevOps #Linux #Nginx #LoadBalancing #Networking #100DaysOfDevOps #LearningInPublic #SRE #DevOpsEngineer
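A sketch of the corrected config; the backend hostnames are illustrative, and the key detail is that the upstream port matches what Apache actually listens on:

```nginx
upstream app_servers {
    server stapp01:6400;   # Apache listens on 6400 here, not the default 80
    server stapp02:6400;
    server stapp03:6400;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;   # round-robins requests across the upstream
    }
}
```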