🚀 DevOps in Action: Kubernetes Pod Monitoring with Shell Script

In real-world DevOps, it’s not just about deployments — it’s about visibility, reliability, and quick recovery. Here’s a simple shell script I use to monitor Kubernetes pods and detect issues early 👇

```bash
#!/bin/bash
set -euo pipefail

NAMESPACE="default"
LOGFILE="k8s_monitor.log"

log() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOGFILE"
}

# Get pods not in Running or Completed state (STATUS is column 3 of kubectl get pods)
pods=$(kubectl get pods -n "$NAMESPACE" --no-headers | awk '$3!="Running" && $3!="Completed" {print $1}')

if [ -z "$pods" ]; then
    log "All pods are running fine ✅"
else
    log "Found problematic pods ❌"
    for pod in $pods; do
        log "Checking pod: $pod"
        # "|| true" keeps set -e/pipefail from aborting the loop when grep finds no match
        kubectl describe pod "$pod" -n "$NAMESPACE" | grep -i "error" | tee -a "$LOGFILE" || true
    done
fi
```

💡 What this script demonstrates:
✅ Kubernetes monitoring using kubectl
✅ Log parsing with awk and grep
✅ Error detection for non-running pods
✅ Timestamp-based logging
✅ Defensive shell settings (set -euo pipefail)

🔁 CI/CD Tip: You can integrate this script into a Jenkins/GitHub Actions pipeline step to fail deployments if pods are unhealthy.

👉 That’s how you move from simple scripting to proactive infrastructure monitoring.

How do you monitor your Kubernetes workloads? 👇

#DevOps #Kubernetes #ShellScripting #CICD #SRE #Cloud #Monitoring
🚀 DevOps Journey — Day 5: The GitOps Leap 🐙

Yesterday I mastered the HPA control loop. Today, I removed myself from the deployment equation. I moved my laboratory from traditional push-based CI/CD to a declarative GitOps model using ArgoCD.

🔬 What changed?
Until today, my GitHub Actions pipeline was responsible for "shouting" orders to the cluster (helm upgrade --install). If the connection failed or the runner had issues, the deployment broke. Now, the cluster has its own "brain".

🧠 How it works now
• CI Phase: GitHub Actions only builds the Docker image and pushes it to GHCR (versioned by SHA).
• CD Phase (the GitOps way): ArgoCD monitors my Helm charts in Git.
• Reconciliation: If I change a single line in Git (like increasing replicas), ArgoCD detects the "drift" and pulls the changes into the cluster.

🛡️ The "Self-Healing" Test
I decided to play "Chaos Engineering": I manually deleted a Pod and a Service using kubectl. The result? In less than 5 seconds, ArgoCD detected the state didn't match Git and recreated everything automatically. The cluster is now "self-healing". It doesn't care what I do manually; it only obeys the Source of Truth: Git.

🛠️ The "WSL2 vs Networking" Battle
It wasn't all easy. Running ArgoCD inside a k3d cluster on WSL2 brought some real-world troubleshooting:
• MTU issues: Network packets were too large for the WSL tunnel, causing timeouts with GitHub.
• Liveness probes: In a local environment, ArgoCD's repo-server needed more "patience" (timeouts increased from 1s to 10s) to handle the load.

Lesson: In production, networking and resource constraints are your real enemies. If you don't tune your probes and MTU, your "automated" system becomes a "restarting" system.

🧪 What the lab now demonstrates:
✔ GitOps workflow: Decoupled CI and CD.
✔ Drift detection: Absolute consistency between Git and production.
✔ Manual override protection: The cluster reverts unauthorized changes.
✔ Infrastructure as Code (IaC): Everything, from the HPA to the ArgoCD app, is defined as code.

This isn't just a deployment anymore. It's an Operating Model.

🧭 Next stop: Making the entire cluster reproducible with Terraform.

Building production-style systems in public. From "Push" to "Pull". One reconciliation loop at a time.
https://lnkd.in/dPdqK99h

#DevOps #Kubernetes #GitOps #ArgoCD #CloudEngineering #PlatformEngineering #SRE #BuildingInPublic #WSL2
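For readers who want to try the same pull-based setup, an ArgoCD Application manifest along these lines wires a Helm chart in Git to the cluster. This is a minimal sketch, assuming placeholder repo URL, chart path, and names rather than the author's actual configuration:

```yaml
# Minimal sketch: repoURL, path, and names are placeholders
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: lab-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-lab.git
    targetRevision: main
    path: charts/lab-app          # Helm chart watched by ArgoCD
  destination:
    server: https://kubernetes.default.svc
    namespace: lab-app
  syncPolicy:
    automated:
      prune: true                 # remove resources deleted from Git
      selfHeal: true              # revert manual kubectl changes
```

The `selfHeal: true` flag is what makes the chaos test above possible: ArgoCD reverts manual kubectl changes back to the state declared in Git.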
GitOps-Based DevOps Pipeline with Kubernetes, ArgoCD & CI/CD

I’m excited to share my recent project — Civic Issue Reporter, a full-stack application designed to report and track civic/environmental issues, where I implemented a complete end-to-end DevOps workflow using modern cloud-native tools.

🔹 Project Overview
The goal of this project was not just to build an application, but to design a production-like deployment system where everything is automated — from code changes to deployment on Kubernetes.

🔹 What I Built
• Containerized the frontend and backend using Docker with optimized multi-stage builds
• Deployed the application on Azure Kubernetes Service (AKS) using Kubernetes Deployments and Services
• Packaged the application using Helm, enabling reusable and environment-based configurations
• Implemented GitOps using ArgoCD, where the cluster state is automatically synchronized with the Git repository
• Built a CI pipeline using GitHub Actions to build Docker images and push them to Azure Container Registry (ACR)

🔹 End-to-End Workflow
👉 Git Push → GitHub Actions (CI) → ACR → ArgoCD (CD) → Kubernetes Deployment 🚀
This means every code change is automatically built, pushed, and deployed — without manual intervention.

🔹 Key Learning
One of the most valuable parts of this project was solving real-world issues like:
• Handling Azure public IP limits
• Designing services using ClusterIP for better architecture
• Understanding how GitOps ensures consistency between desired and actual state

🔹 GitOps in Action
For example, when I update the replicas count in values.yaml from 2 to 3 and push the change, ArgoCD detects it automatically, syncs the application, and Kubernetes scales the pods — fully automated.

📂 GitHub Repository: https://lnkd.in/gTHSPmeG

This project gave me a strong understanding of how real-world DevOps systems are built using Docker, Kubernetes, Helm, CI/CD, and GitOps.
I’m excited to keep learning and building more in the DevOps and cloud space 🚀

#DevOps #Kubernetes #GitOps #ArgoCD #Docker #CICD #Azure #CloudComputing #GitHubProjects #OpenToWork
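As a rough illustration of the CI half described above, a GitHub Actions workflow that builds an image and pushes it to ACR can look like this. The registry name, context path, and secret names are assumptions, not taken from the project:

```yaml
# Illustrative CI workflow: registry name, context path, and secrets are placeholders
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to ACR
        uses: docker/login-action@v3
        with:
          registry: myregistry.azurecr.io
          username: ${{ secrets.ACR_USERNAME }}
          password: ${{ secrets.ACR_PASSWORD }}
      - name: Build and push backend image
        uses: docker/build-push-action@v6
        with:
          context: ./backend
          push: true
          tags: myregistry.azurecr.io/civic-backend:${{ github.sha }}
```

Tagging with `github.sha` gives each commit an immutable image tag, which is what lets ArgoCD-driven rollbacks map cleanly back to Git history.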
🔧 Lab Title: 5 - Main kubectl commands

🚀 Project Steps PDF (Your Easy-to-Follow Guide): https://lnkd.in/gRdiBmaz
🔗 GitLab Repo Code: https://lnkd.in/gKfqqhiv
🔗 DevSecOps Portfolio: https://lnkd.in/g6AP-FNQ
💼 DevOps Portfolio: https://lnkd.in/gT-YQE5U
🔗 Kubernetes Portfolio: https://lnkd.in/gUqZrdYh
🔗 GitLab CI/CD Portfolio: https://lnkd.in/g2jhKsts

Summary: Today, I worked on managing Kubernetes deployments using kubectl. I listed active deployments and pods, deleted MongoDB and NGINX deployments, verified resource cleanup, created a new NGINX deployment with a YAML manifest, and scaled it from 1 to 2 replicas. This reinforced declarative management of workloads and dynamic scaling in Kubernetes.

Tools Used:
• kubectl: Manage and inspect Kubernetes resources via CLI.
• YAML: Define Kubernetes deployments declaratively.

Skills Gained:
• Inspect deployments, pods, and replicasets to monitor cluster state 🔍
• Delete deployments and confirm cleanup of related pods and replicasets 🧹
• Create declarative YAML manifests for Kubernetes workloads 📄
• Apply and update deployments using kubectl apply for scaling 🛠️

Challenges Faced:
• Ensuring complete cleanup of replicasets after deployment deletion.
• Correctly editing YAML files to update replica counts.

Why It Matters: This lab strengthens core Kubernetes skills: declarative resource management, deployment lifecycle control, and workload scaling. Mastery of these practices is essential for effective cluster operation and application reliability in production environments.

📌 #Kubernetes #kubectl #YAML #Deployments #Scaling #CloudNative #DevOps

🚀 Next: 6 - YAML Configuration File!
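A minimal manifest in the spirit of this lab, with the replica count already bumped from 1 to 2 (the names and image tag are illustrative, not the lab's exact file):

```yaml
# nginx-deployment.yaml — scale by editing "replicas" and re-applying
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.27
          ports:
            - containerPort: 80
```

`kubectl apply -f nginx-deployment.yaml` both creates the Deployment and, after editing `replicas`, scales it; `kubectl get deployments,replicasets,pods` confirms the resulting state.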
🚀 Built a production-style CI/CD + GitOps pipeline for microservices

Over the past few weeks, I focused on designing a cloud-native delivery setup similar to what modern DevOps teams use in production — using GitHub Actions, AWS, Kubernetes, and ArgoCD. The system follows industry-standard architectural patterns for scalability, observability, and long-term maintainability.

Here’s what I put together:
- Two independent microservices (User & Order) with their own CI pipelines
- Automated build and test workflows using GitHub Actions
- Container images built using Kaniko and pushed to AWS ECR
- A separate GitOps repository to manage Kubernetes manifests
- ArgoCD handling deployments using a pull-based model (no direct CI → cluster access)
- Deployment flow structured so changes are driven entirely from Git

One focus was on removing manual steps. Earlier, deployments involved multiple commands and manual updates. Now, a simple commit triggers the full flow — build, push, manifest update, and deployment. Everything is version-controlled, traceable, and repeatable.

Architecture flow:
Code → CI (build & test) → image pushed to ECR → GitOps repo updated → ArgoCD sync → Kubernetes deployment

What the implementation showed:
- GitOps makes deployments predictable — Git becomes the single source of truth
- Separating CI and CD avoids tight coupling and improves control
- Microservices need independent pipelines to avoid breaking everything at once
- Automating the pipeline removes human error more than it saves time

🔗 Repositories:
User Service: https://lnkd.in/dXu_zP38
Order Service: https://lnkd.in/dzBqyeYW
GitOps Manifests: https://lnkd.in/dkjvXXxJ

This setup is still evolving, but it gives a clear understanding of how production DevOps systems are actually designed — beyond just running pipelines.

#DevOps #GitOps #Kubernetes #ArgoCD #AWS #CI_CD #CloudNative #Microservices
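The "GitOps repo updated" step is the interesting hand-off: CI never touches the cluster, it only commits a new image tag. A sketch of such a GitHub Actions step, with repo names, file paths, and the token secret invented for illustration:

```yaml
# Illustrative CD hand-off: CI bumps the image tag in the GitOps repo and
# ArgoCD pulls the change. Repo names, paths, and the secret are placeholders.
- name: Update GitOps manifests
  env:
    GITOPS_TOKEN: ${{ secrets.GITOPS_TOKEN }}
  run: |
    git clone "https://x-access-token:${GITOPS_TOKEN}@github.com/example/gitops-manifests.git"
    cd gitops-manifests
    sed -i "s|user-service:.*|user-service:${GITHUB_SHA}|" apps/user-service/deployment.yaml
    git config user.name "ci-bot"
    git config user.email "ci-bot@example.com"
    git commit -am "user-service: deploy ${GITHUB_SHA}"
    git push
```

Because the only write path to production is a Git commit, every deployment is reviewable and revertible with ordinary Git tooling.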
🚀 Kubernetes Hands-On Learning – From Beginner to Advanced (Real DevOps Practice)

Just completed a complete Kubernetes hands-on assignment series covering everything from basics to production-level concepts — and this is what stood out 👇

📘 Source:

⸻

🔥 What I Practiced & Learned

✅ Pods & kubectl Fundamentals
• Created and managed my first Pod (nginx)
• Understood declarative YAML vs imperative commands
• Explored kubectl describe, logs, exec

✅ Deployments & Self-Healing
• Built a Deployment with 3 replicas
• Observed auto-recovery when Pods fail
• Performed Rolling Updates & Rollbacks (zero downtime)

✅ Services & Networking
• Implemented ClusterIP (internal access)
• Exposed apps using NodePort (external access)
• Learned load balancing via label selectors

✅ ConfigMaps & Secrets (Production MUST)
• Externalized configs using ConfigMaps
• Secured sensitive data with Secrets
• Understood why hardcoding config = bad practice

✅ Namespaces, Resource Limits & RBAC
• Created dev & prod environments
• Applied ResourceQuota & LimitRange
• Implemented RBAC (least privilege access)

✅ Advanced – Real Production Setup
• Deployed StatefulSet (PostgreSQL)
• Configured PersistentVolumeClaims (storage)
• Used Ingress for routing (path-based)
• Enabled Horizontal Pod Autoscaler (HPA)

⸻

💡 Key Takeaways (No BS)
• Kubernetes is NOT about commands → it’s about desired state management
• Self-healing + scaling + automation = real power
• If you’re not using ConfigMaps, Secrets, RBAC → you’re not production-ready
• Deployments are easy… running stateful apps correctly is the real test
• Zero downtime deployments are expected, not optional

⸻

⚠️ Common Mistakes I Faced (And Fixed)
• ❌ Image pull errors → wrong image/tag
• ❌ Pods stuck in Pending → resource issues
• ❌ Misconfigured Services → no traffic routing
• ❌ Secrets misunderstanding → base64 ≠ encryption

⸻

🎯 What’s Next?
Moving towards:
👉 Helm + GitOps (ArgoCD)
👉 Production-grade CI/CD pipelines
👉 Observability (Prometheus + Grafana)

⸻

💬 If you’re learning Kubernetes and only watching tutorials without doing hands-on… you’re wasting your time.
Build. Break. Debug. Repeat. That’s the only way.

#Kubernetes #DevOps #Cloud #K8s #Docker #LearningByDoing #SRE #InfrastructureAsCode #TechGrowth
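As one concrete example from the advanced section, an HPA manifest in the `autoscaling/v2` API looks roughly like this; the target Deployment name and thresholds here are made up:

```yaml
# Illustrative HPA (autoscaling/v2): target Deployment and numbers are placeholders
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Note that CPU utilization is computed against the Pods' declared resource requests, so the HPA only works if the target containers set `resources.requests.cpu`.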
🚀 Excited to share my DevOps journey: From concept to production-grade deployment!

I've been building and deploying BudgetBuddy — a full-stack expense tracking application — with a complete DevOps pipeline that handles real-world production challenges.

Application: https://lnkd.in/gbfmBziy

The Stack:
🖥️ Frontend: React with Tailwind CSS (multi-stage Docker build)
🔙 Backend: Node.js/Express with MongoDB
☁️ Infrastructure: AWS EC2 (Terraform IaC), Ansible configuration management
🐳 Containerization: Docker & Docker Compose with optimized multi-stage builds
🔄 CI/CD: Jenkins pipeline with GitHub webhook automation (ngrok tunnel)
📊 Monitoring: Prometheus + Grafana + Node Exporter stack

Key Achievements:
✅ Resolved production issues: disk exhaustion handling, stale image deployments, secret management
✅ Implemented immutable image tagging using Git SHA for reliable rollback
✅ Automated Docker cleanup and pre-flight disk checks in deployment pipeline
✅ Built observable infrastructure with real-time metrics (CPU, memory, disk, network, uptime)
✅ Created production-ready Dockerfiles with security best practices (non-root users, minimal layers)
✅ Orchestrated multi-service deployments with health checks and validation

Real-World Learnings:
This project taught me how to handle the messy reality of DevOps — disk pressure, image caching issues, secret handling, and observability. Every fix became a lesson in building resilient systems at scale.

Tools & Technologies: Terraform | Ansible | Docker | Jenkins | Kubernetes | Grafana | Prometheus | AWS | Node.js | React | MongoDB

Looking forward to applying these DevOps practices in production environments and continuing to build observability-first infrastructure! 🎯

#DevOps #Docker #CI/CD #Terraform #Kubernetes #CloudEngineering #AWS #Jenkins #Monitoring
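A sketch of what a multi-stage frontend Dockerfile with those practices can look like. The paths, the build output directory, and the unprivileged nginx base image are assumptions rather than the project's actual file:

```dockerfile
# Illustrative multi-stage build; paths and base images are assumptions

# Stage 1: build the React static assets
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: serve the build with an unprivileged (non-root) nginx image;
# only the static output is copied, keeping the final image minimal
FROM nginxinc/nginx-unprivileged:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 8080
```

The build toolchain (node_modules, compilers) never reaches the final image, which shrinks the attack surface along with the image size.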
Project Showcase: Azure Incident Monitoring Dashboard (End-to-End CI/CD + GitOps)

I recently built and deployed an Incident Monitoring Dashboard using a fully automated CI/CD and GitOps workflow on Azure — designed to mirror real-world DevOps practices.

Tech Stack:
• Azure DevOps (CI/CD Pipelines)
• Azure Kubernetes Service (AKS)
• Azure Container Registry (ACR)
• Argo CD (GitOps Deployment)
• Docker & Kubernetes

How it works:
1. Code is pushed to GitHub.
2. The Azure DevOps pipeline builds and pushes the Docker image to ACR.
3. The pipeline updates the Kubernetes manifests repository.
4. Argo CD detects the changes and automatically syncs.
5. The application is deployed to AKS with zero manual intervention.

Key Highlights:
• Fully automated deployment pipeline.
• Git as the single source of truth.
• Real-time application updates via GitOps.
• Scalable and production-aligned architecture.

This project helped me deepen my understanding of:
• Separating build and deployment concerns.
• Implementing GitOps in a real environment.
• Managing Kubernetes deployments at scale.

Next steps: Adding monitoring, security scanning (Trivy), and observability (Prometheus/Grafana) to make this even more production-ready.

#DevOps #Azure #AKS #GitOps #ArgoCD #CloudEngineering #CI_CD #Kubernetes #Docker
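Steps 2 and 3 of the flow above could be expressed in Azure Pipelines YAML roughly as follows; the service connection, repository names, and manifest path are placeholders, not the project's actual values:

```yaml
# Illustrative only: service connection, repo names, and paths are placeholders
steps:
  - task: Docker@2
    inputs:
      command: buildAndPush
      containerRegistry: my-acr-connection
      repository: incident-dashboard
      tags: $(Build.SourceVersion)        # immutable tag per commit
  - script: |
      git clone "https://$(GITOPS_PAT)@github.com/example/k8s-manifests.git"
      cd k8s-manifests
      sed -i "s|incident-dashboard:.*|incident-dashboard:$(Build.SourceVersion)|" deployment.yaml
      git config user.name "azure-pipeline"
      git config user.email "pipeline@example.com"
      git commit -am "deploy $(Build.SourceVersion)"
      git push
    displayName: Update manifests repo (Argo CD picks this up)
```

The pipeline never talks to the cluster directly; committing to the manifests repo is the deployment trigger, and Argo CD does the rest.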
🔧 Lab Title: 24 - Demo Project: Deploy Microservices with Helmfile

🚀 Project Steps PDF (Your Easy-to-Follow Guide): https://lnkd.in/gVGaXYRD
🔗 GitLab Repo Code: https://lnkd.in/g8dcu7yz
🔗 DevSecOps Portfolio: https://lnkd.in/g6AP-FNQ
💼 DevOps Portfolio: https://lnkd.in/gT-YQE5U
🔗 Kubernetes Portfolio: https://lnkd.in/gUqZrdYh
🔗 GitLab CI/CD Portfolio: https://lnkd.in/g2jhKsts

Summary: Today, I automated the deployment and cleanup of multiple Kubernetes microservices using Helm, shell scripts, and Helmfile. I explored Helm chart management, declarative deployments, and Kubernetes resource verification. This lab focused on streamlining multi-service deployment with automation for faster, error-free CI/CD pipelines. ⚙️📦

Tools Used:
• Helm: Packaging and deploying microservices.
• Shell scripting (bash): Automated install/uninstall commands.
• Helmfile: Managed multiple Helm releases declaratively.
• kubectl: Verified pod and service statuses.

Skills Gained:
🚀 Automated multi-service Helm deployments with shell scripts.
🗂️ Used Helmfile for centralized release management.
🔍 Verified and troubleshot Kubernetes deployments efficiently.

Challenges Faced:
🔐 Setting correct script permissions for automation.
⚙️ Managing Helm values and overrides in Helmfile.
🧹 Creating reliable uninstall scripts to keep the cluster clean.

Why It Matters: This lab teaches key DevOps automation skills, showing how Helm, scripting, and Helmfile simplify Kubernetes microservice management. Mastering these tools enables faster, consistent, and scalable deployments, essential for modern cloud-native DevOps roles. 🌐🔥

📌 #DevOps #CI_CD #Automation #Kubernetes #Helm #Helmfile #CloudNative

🚀 Stay tuned! Next: Project 11 - Kubernetes on AWS - EKS 🔥
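For context, a `helmfile.yaml` that manages several releases declaratively looks roughly like this; the release names, chart paths, and values files are placeholders, not the lab's actual files:

```yaml
# Illustrative helmfile.yaml: names, chart paths, and values files are placeholders
releases:
  - name: frontend
    namespace: demo
    chart: ./charts/microservice
    values:
      - values/frontend.yaml
  - name: backend
    namespace: demo
    chart: ./charts/microservice
    values:
      - values/backend.yaml
  - name: payment
    namespace: demo
    chart: ./charts/microservice
    values:
      - values/payment.yaml
```

`helmfile sync` installs or upgrades every release in one command, and `helmfile destroy` is the matching cleanup, which is exactly what the lab's install/uninstall shell scripts automate.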
🚀 Docker Notes for DevOps Engineers (Beginner to Pro)

In modern DevOps, Docker plays a crucial role in building, shipping, and running applications efficiently. Here are my key learnings and notes 👇

🔹 What is Docker?
Docker is a containerization platform that allows you to package applications along with their dependencies into lightweight, portable containers.

🔹 Why Docker?
✅ Consistency across environments (Dev → Test → Prod)
✅ Faster deployments
✅ Lightweight compared to VMs
✅ Easy scalability & rollback

🔹 Core Concepts
📦 Image → Blueprint of an application
📦 Container → Running instance of an image
📦 Dockerfile → Script to build images
📦 Volume → Persistent storage
📦 Network → Communication between containers

🔹 Basic Commands
```bash
docker build -t app .
docker run -d -p 80:80 app
docker ps
docker stop <container_id>
docker rm <container_id>
```

🔹 Docker Workflow
Code → Dockerfile → Image → Container → Deploy 🚀

🔹 Real-time DevOps Use Cases
✔ Microservices deployment
✔ CI/CD pipeline integration
✔ Cloud deployment (AWS ECS, Kubernetes)
✔ Environment consistency for teams

🔹 Common Issues I Faced
⚠ Port already in use (80/8080 conflict)
⚠ Container not starting due to config errors
⚠ Image size optimization challenges

🔹 Best Practices
✔ Use small base images (Alpine)
✔ Write efficient Dockerfiles
✔ Use .dockerignore
✔ Tag images properly
✔ Avoid running containers as root

💡 Final Thought: Docker is not just a tool, it’s a foundational skill for every DevOps Engineer.

#Docker #DevOps #Cloud #AWS #Containers #CI_CD #LearningJourney
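The `.dockerignore` best practice is easy to show concretely; a typical file for a Node.js project might contain entries like these (adjust to your own repository layout):

```
# .dockerignore keeps the build context small and secrets out of image layers
node_modules
.git
*.log
.env
dist
Dockerfile
.dockerignore
```

Excluding `node_modules` and `.git` speeds up `docker build` noticeably, and excluding `.env` keeps local secrets from ever being copied into an image.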
🔥 𝐘𝐨𝐮𝐫 𝐓𝐞𝐚𝐦 𝐌𝐞𝐫𝐠𝐞𝐬 𝐭𝐨 𝐌𝐚𝐢𝐧. 𝐍𝐨𝐛𝐨𝐝𝐲 𝐓𝐨𝐮𝐜𝐡𝐞𝐬 𝐀𝐧𝐲𝐭𝐡𝐢𝐧𝐠. 𝐈𝐭 𝐃𝐞𝐩𝐥𝐨𝐲𝐬 𝐈𝐭𝐬𝐞𝐥𝐟.

[Azure DevOps — Day 1 of 5]

Code merged at 9am. Tests ran. Image built. Deployed to AKS. All done before the engineer finished their coffee. That is an Azure DevOps pipeline doing its job.

A pipeline is an automated assembly line for your code. Every stage runs on an agent — a machine that executes your instructions. A git push to main triggers it.

Stage one builds your Docker image and pushes it to ACR. Stage two runs every unit and integration test. One failure and the pipeline stops cold. Nothing broken ever reaches your cluster. Stage three deploys to AKS — kubectl apply or helm upgrade. An approval gate holds production deploys until a human signs off. The agent is the engine room — Microsoft-hosted Linux or Windows, or your own self-hosted machine.

```yaml
trigger:
  branches:
    include: [main]

stages:
  - stage: Build
    jobs:
      - job: BuildImage
        pool:
          vmImage: 'ubuntu-latest'
        steps:
          - task: Docker@2
            inputs:
              command: buildAndPush
              repository: myapp
              containerRegistry: myACR

  - stage: Deploy
    jobs:
      - deployment: DeployAKS
        environment: production
        strategy:
          runOnce:
            deploy:
              steps:
                - task: KubernetesManifest@0
                  inputs:
                    action: deploy
                    manifests: deployment.yaml
```

𝐈𝐧𝐭𝐞𝐫𝐯𝐢𝐞𝐰 𝐑𝐞𝐚𝐝𝐲?

Q: What is an agent in Azure DevOps?
A: The machine that executes every step in your pipeline. Microsoft-hosted agents are fresh VMs spun up per run — clean, disposable, zero maintenance. Self-hosted agents are machines you manage — useful when you need access to private networks or custom tools.

Q: What is the difference between a stage and a job?
A: A stage is a major phase — Build, Test, Deploy. A job is a unit of work inside a stage — it runs on one agent and executes its steps sequentially. Multiple jobs in a stage can run in parallel on separate agents.

Q: Where do you define a global variable in Azure DevOps?
A: Under Pipelines → Library — not inside the YAML file. Library variables are reusable across pipelines. YAML variables are pipeline-scoped — invisible to other pipelines in the same project.

Q: Can you call a Library variable from a different project?
A: No. Libraries are project-scoped in Azure DevOps. A variable group created in Project A cannot be referenced directly from Project B.

What part of your Azure DevOps pipeline has caused the most production pain? Drop it below. 👇

#DevOps #AzureDevOps #CICD #Pipelines #CloudEngineering #DevOpsInterview #Azure #AKS #Kubernetes
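To make the Library answer concrete, referencing a variable group from pipeline YAML looks like this; the group name `shared-settings` and the variable `apiBaseUrl` are invented for the example:

```yaml
# "shared-settings" must already exist under Pipelines → Library,
# and this pipeline must be authorized to use it
variables:
  - group: shared-settings        # Library variables, reusable across pipelines
  - name: localOnly               # YAML variable, scoped to this pipeline only
    value: 'true'

steps:
  - script: echo "API base is $(apiBaseUrl), localOnly is $(localOnly)"
    displayName: Show variables
```

Mixing a `group` entry with inline `name`/`value` pairs is the usual pattern: shared configuration lives in the Library, while pipeline-specific toggles stay in the YAML.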