🔧 Lab Title: 5 - Main kubectl Commands

🚀 Project Steps PDF (easy-to-follow guide): https://lnkd.in/gRdiBmaz
🔗 GitLab Repo Code: https://lnkd.in/gKfqqhiv
🔗 DevSecOps Portfolio: https://lnkd.in/g6AP-FNQ
💼 DevOps Portfolio: https://lnkd.in/gT-YQE5U
🔗 Kubernetes Portfolio: https://lnkd.in/gUqZrdYh
🔗 GitLab CI/CD Portfolio: https://lnkd.in/g2jhKsts

Summary: Today I worked on managing Kubernetes deployments with kubectl. I listed active deployments and pods, deleted the MongoDB and NGINX deployments, verified that the related resources were cleaned up, created a new NGINX deployment from a YAML manifest, and scaled it from 1 to 2 replicas. This reinforced declarative workload management and dynamic scaling in Kubernetes.

Tools Used:
• kubectl: manage and inspect Kubernetes resources from the CLI.
• YAML: define Kubernetes deployments declaratively.

Skills Gained:
• Inspect deployments, pods, and ReplicaSets to monitor cluster state 🔍
• Delete deployments and confirm cleanup of related pods and ReplicaSets 🧹
• Write declarative YAML manifests for Kubernetes workloads 📄
• Apply and update deployments with kubectl apply to scale workloads 🛠️

Challenges Faced:
• Ensuring complete cleanup of ReplicaSets after deployment deletion.
• Correctly editing YAML files to update replica counts.

Why It Matters: This lab strengthens core Kubernetes skills: declarative resource management, deployment lifecycle control, and workload scaling. Mastery of these practices is essential for reliable cluster operation and application availability in production.

📌 #Kubernetes #kubectl #YAML #Deployments #Scaling #CloudNative #DevOps

🚀 Next: 6 - YAML Configuration File!
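The lab's create-then-scale workflow can be sketched as a minimal manifest (the deployment name and image tag below are illustrative, not taken from the lab itself):

```yaml
# nginx-deployment.yaml -- illustrative sketch of the lab's workflow.
# Create it with:  kubectl apply -f nginx-deployment.yaml
# Scale it by editing replicas from 1 to 2 and re-running kubectl apply.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2          # edited from 1 to 2, then re-applied
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.27   # pin a tag rather than relying on :latest
```

After re-applying, `kubectl get deployments` and `kubectl get pods` should show the ReplicaSet managing two pods.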
Mastering kubectl: Kubernetes Deployments and Scaling
🚀 DevOps Journey, Day 5: The GitOps Leap 🐙

Yesterday I mastered the HPA control loop. Today, I removed myself from the deployment equation: I moved my lab from traditional push-based CI/CD to a declarative GitOps model using ArgoCD.

🔬 What changed? Until today, my GitHub Actions pipeline was responsible for "shouting" orders to the cluster (helm upgrade --install). If the connection failed or the runner had issues, the deployment broke. Now the cluster has its own "brain". 🧠

How it works now:
• CI phase: GitHub Actions only builds the Docker image and pushes it to GHCR (versioned by SHA).
• CD phase (the GitOps way): ArgoCD monitors my Helm charts in Git.
• Reconciliation: if I change a single line in Git (like increasing replicas), ArgoCD detects the drift and pulls the change into the cluster.

🛡️ The self-healing test: I decided to play "chaos engineering" and manually deleted a Pod and a Service with kubectl. The result? In less than 5 seconds, ArgoCD detected that the state didn't match Git and recreated everything automatically. The cluster is now self-healing: it doesn't care what I do manually; it only obeys the source of truth, Git.

🛠️ The "WSL2 vs. networking" battle: it wasn't all easy. Running ArgoCD inside a k3d cluster on WSL2 brought some real-world troubleshooting:
• MTU issues: network packets were too large for the WSL tunnel, causing timeouts against GitHub.
• Liveness probes: in a local environment, ArgoCD's repo-server needed more patience (probe timeouts increased from 1s to 10s) to handle the load.
Lesson: in production, networking and resource constraints are your real enemies. If you don't tune your probes and MTU, your "automated" system becomes a "restarting" system.

🧪 What the lab now demonstrates:
✔ GitOps workflow: decoupled CI and CD.
✔ Drift detection: absolute consistency between Git and production.
✔ Manual-override protection: the cluster reverts unauthorized changes.
✔ Infrastructure as Code (IaC): everything, from the HPA to the ArgoCD app, is defined as code.

This isn't just a deployment anymore. It's an operating model. 🧭 Next stop: making the entire cluster reproducible with Terraform.

Building production-style systems in public. From push to pull, one reconciliation loop at a time. https://lnkd.in/dPdqK99h

#DevOps #Kubernetes #GitOps #ArgoCD #CloudEngineering #PlatformEngineering #SRE #BuildingInPublic #WSL2
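The pull-based setup described above is typically expressed as an ArgoCD Application manifest. A minimal sketch, in which the repo URL, chart path, and target namespace are hypothetical placeholders rather than values from this lab:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app                 # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/gitops-lab.git   # placeholder repo
    targetRevision: main
    path: charts/demo-app        # Helm chart tracked in Git
  destination:
    server: https://kubernetes.default.svc
    namespace: demo
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert manual kubectl changes, as in the chaos test
```

The `selfHeal: true` flag is what makes the deleted-Pod experiment work: ArgoCD continuously reconciles live state back to what Git declares.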
🚀 After learning Docker, I wanted to understand what happens beyond containers: how they are orchestrated, exposed, and scaled. That's where Kubernetes came in. Over the past few weeks, I focused on building and understanding a complete CI/CD pipeline and DevOps workflow, step by step.

⚙️ Here's what I worked on:

Kubernetes fundamentals:
→ Pods, Deployments, Services, and how they interact
→ Why Pods aren't exposed directly
→ ConfigMaps, Secrets, and storage (PV, PVC, StorageClass)

Hands-on:
→ Set up a local Kubernetes cluster with Minikube
→ Deployed multi-service applications
→ Debugged real issues (networking, permissions, ingress errors)

Helm:
→ Converted raw YAML into reusable Helm charts
→ Used values.yaml for dynamic configuration
→ Reduced repetitive configuration across services

Complete CI/CD pipeline:
→ Dockerized the application
→ Pushed images to Docker Hub
→ Set up and configured Jenkins
→ Integrated Jenkins with Docker, Kubernetes, and Helm
→ Automated the build → push → deploy workflow

The pipeline now looks like: Code → Jenkins → Docker → Docker Hub → Helm → Kubernetes

📈 What changed after this:
→ End-to-end automated deployment workflow
→ Able to redeploy the full stack in minutes
→ Stronger understanding of how services behave inside a Kubernetes cluster

🧩 Faced real-world challenges like:
– Kubernetes permission errors (RBAC)
– Ingress returning 403
– A volume overriding application data
– Jenkins pipeline failures
– Docker authentication issues

Fixing these gave me a much deeper understanding than just following tutorials. Currently focusing on DevOps and cloud-native systems; next up is making this pipeline more secure and production-ready. Curious how others approached learning Kubernetes and CI/CD.

#DevOps #Kubernetes #Docker #Jenkins #Helm #CICD #CloudNative #Containerization #LearningInPublic
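The values.yaml pattern mentioned above looks roughly like this. This is a sketch with illustrative key names and a placeholder image; real charts define whatever keys their templates reference:

```yaml
# values.yaml -- per-environment knobs (keys and image are illustrative)
replicaCount: 2
image:
  repository: docker.io/example/web   # placeholder image
  tag: "1.0.0"
service:
  type: ClusterIP
  port: 80

# In a template such as templates/deployment.yaml these are referenced as:
#   replicas: {{ .Values.replicaCount }}
#   image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
# and overridden per environment with:
#   helm upgrade --install web ./chart -f values.prod.yaml
```

This is the mechanism that removes repetitive configuration: one template, many value files.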
🚀 From Code to Production: The Real DevOps Pipeline That Matters

Most people learn DevOps tools one by one…
👉 GitHub 👉 Jenkins 👉 Docker 👉 Kubernetes

But here's the reality: knowing tools ≠ knowing DevOps. The real value comes from understanding how everything connects into a single, automated workflow.

🔄 The End-to-End CI/CD Flow (Simplified)

⚙️ CI Pipeline (Build + Scan)
• Developer pushes code to GitHub
• Jenkins picks it up and triggers the pipeline
• OWASP Dependency-Check scans for vulnerable libraries
• SonarQube analyzes code quality and security
• Docker builds the application image
• Trivy scans the image for vulnerabilities
• The image is pushed to a container registry

🚀 CD Pipeline (Deploy)
• Jenkins updates the latest image version
• Changes are committed back to GitHub
• Argo CD detects updates automatically
• The application is deployed to Kubernetes

📊 Monitoring & Alerts
• Prometheus collects metrics
• Grafana builds dashboards
• Alerts and notifications keep teams informed in real time

💡 What Companies Actually Expect
This is not about memorizing commands. It's about understanding:
✅ CI → build, test, and secure your code
✅ CD → automate deployments with confidence
✅ Security → shift left and catch issues early
✅ Monitoring → gain full visibility in production

👉 This is the difference between a beginner and a production-ready DevOps engineer.

🙌 Credits: big thanks to TrainWithShubham for simplifying and sharing such practical DevOps workflows 🙏

🧠 Final Thought: if you're learning DevOps, stop thinking in tools and start thinking in pipelines, automation, and system design. That's where real engineering begins.

📌 Follow for more insights on DevOps, Kubernetes, AWS, and real-world CI/CD pipelines

#DevOps #CICD #Kubernetes #Docker #Jenkins #GitHub #CloudNative #Automation #SRE #DevOpsEngineer #Linux #AWS #GCP #Azure #InfrastructureAsCode #Monitoring #Security #ShiftLeft #ArgoCD #Prometheus #Grafana
"The future of Kubernetes management is GitOps. Many teams are unaware of the transformative potential of ArgoCD and Flux." I remember the exact moment when our Kubernetes deployment process evolved. We had just faced a critical production issue that exposed a major flaw in our manual configuration system. It was late, everyone was on edge, and we needed a solution that could prevent such mishaps in the future. In the heat of the moment, we turned to GitOps principles to regain control. Our journey began with two powerful tools: ArgoCD and Flux. These tools promised what we desperately needed—an automated, reliable, and auditable deployment process driven by our Git repository. The challenge was integrating these seamlessly into our existing workflow. There were initial hiccups: learning curves, reworking configurations to fit the GitOps model, and adjusting our deployment pipelines. But the promise of consistency and visibility kept us motivated. We started with a simple application deployment using Flux. Setting up the manifest in the repository and watching the changes automatically reflected in the cluster felt like magic. It was 'vibe coding' in its truest form—letting the configurations flow naturally from Git to Kubernetes with minimal human intervention. ```yaml apiVersion: v1 kind: Namespace metadata: name: demo --- apiVersion: apps/v1 kind: Deployment metadata: name: nginx namespace: demo spec: replicas: 3 template: spec: containers: - name: nginx image: nginx:1.19.10 ``` With ArgoCD, the visualization of our deployments brought a new level of transparency to our operations. The UI allowed us to see exactly what was going on at any point—a true game changer. Over time, GitOps didn't just enhance our deployment speed; it transformed our response strategy. Instead of panic, there was a calm, methodical approach to any issue, focused on source control rather than firefighting in production. The lesson? 
Investing in GitOps workflows not only stabilized our deployments but also heightened our team's confidence and productivity. It changed our approach from reactive to proactive. How have you approached automation in your deployments? Have you tried integrating GitOps in your workflow? #DevOps #CloudComputing #Kubernetes
Last week, I shared how I built a CI/CD pipeline using Azure DevOps… But I realized something:
👉 I was still controlling deployments manually.

So I started looking into a different approach:
👉 GitOps

---

💡 What is GitOps?
GitOps means: "Git is the single source of truth for your infrastructure and deployments."
Instead of deploying manually…
👉 You just update Git
👉 And everything else happens automatically

---

🔁 What changed
Before (CI/CD):
• Pipeline builds & pushes images
• Pipeline triggers deployment
Now (GitOps):
• Pipeline only builds & pushes images
• Git becomes the source of truth
• Deployment is fully automated

---

⚙️ Where ArgoCD fits
To implement GitOps, I used:
👉 ArgoCD
It continuously:
• Monitors the Git repository
• Detects changes (e.g., a Helm values update)
• Syncs automatically with Kubernetes
• Deploys the new version

---

🔄 Full workflow
1️⃣ Code change
• Developer pushes code
2️⃣ CI pipeline
• Build + test + scan
• Push image to registry
3️⃣ Git update
• Update the Helm chart / values.yaml
4️⃣ ArgoCD
• Detects the change
• Deploys automatically
• Keeps the cluster in sync (self-healing)

---

🔥 Why this is powerful
• No manual deployments
• Full traceability (everything in Git)
• Automatic sync & self-healing
• Easy rollback (just revert in Git)

---

💡 Mindset shift
From:
👉 "Run deployment manually"
To:
👉 "Push to Git… and let the system handle everything"

---

🚀 What's next
Now the platform looks like this:
• CI/CD ✅
• GitOps ✅
Next step:
👉 Observability & monitoring: Prometheus + Grafana + Alertmanager 📊

---

#DevOps #GitOps #ArgoCD #Kubernetes #Helm #CloudNative #PlatformEngineering #SRE #Automation #LearningInPublic
🚀 DevOps in Action: Kubernetes Pod Monitoring with a Shell Script

In real-world DevOps, it's not just about deployments: it's about visibility, reliability, and quick recovery. Here's a simple shell script I use to monitor Kubernetes pods and detect issues early 👇

```bash
#!/bin/bash
set -euo pipefail

NAMESPACE="default"
LOGFILE="k8s_monitor.log"

log() {
  echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOGFILE"
}

# Pods whose STATUS column is neither Running nor Completed
pods=$(kubectl get pods -n "$NAMESPACE" --no-headers | awk '$3 != "Running" && $3 != "Completed" {print $1}')

if [ -z "$pods" ]; then
  log "All pods are running fine ✅"
else
  log "Found problematic pods ❌"
  for pod in $pods; do
    log "Checking pod: $pod"
    # "|| true" keeps set -e / pipefail from aborting the loop
    # when grep finds no matching lines (grep exits 1 on no match)
    kubectl describe pod "$pod" -n "$NAMESPACE" | grep -i "error" | tee -a "$LOGFILE" || true
  done
fi
```

💡 What this script demonstrates:
✅ Kubernetes monitoring using kubectl
✅ Output parsing with awk and grep
✅ Error detection for non-running pods
✅ Timestamp-based logging
✅ Production-ready practices (set -euo pipefail)

🔁 CI/CD tip: you can run this script as a Jenkins/GitHub Actions pipeline step and fail the deployment if pods are unhealthy.

👉 That's how you move from simple scripting to proactive infrastructure monitoring. How do you monitor your Kubernetes workloads? 👇

#DevOps #Kubernetes #ShellScripting #CICD #SRE #Cloud #Monitoring
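The CI/CD tip above can be sketched as a small gate step. The parsing is factored into a function so it can be exercised with sample `kubectl get pods --no-headers` output; in a real pipeline you would pipe live kubectl output instead (the pod names below are made up):

```shell
#!/bin/bash
set -euo pipefail

# count_unhealthy reads `kubectl get pods --no-headers` output on stdin and
# prints how many pods have a STATUS column other than Running or Completed.
count_unhealthy() {
  awk '$3 != "Running" && $3 != "Completed"' | wc -l
}

# Sample output (made-up pods) standing in for a live cluster.
# In CI you would instead run:
#   kubectl get pods -n "$NAMESPACE" --no-headers | count_unhealthy
sample='web-7d4b9cd98-abcde   1/1   Running            0   5m
db-0                  0/1   CrashLoopBackOff   4   5m
migrate-job-x7k2p     0/1   Completed          0   2m'

bad=$(printf '%s\n' "$sample" | count_unhealthy)
echo "unhealthy pods: $bad"

# In a pipeline step, a non-zero count fails the deployment:
#   [ "$bad" -eq 0 ] || exit 1
```

Because the status check is a pure stdin filter, the same function works unchanged whether it is fed canned fixtures in a test or live cluster output in production.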
From GitHub → Jenkins → Docker → Kubernetes: a complete DevOps workflow.

Many people learn DevOps tools individually. But the real value comes from understanding how these tools work together in a real pipeline. Here's a simplified breakdown of the end-to-end CI/CD flow shown in the diagram.

CI Pipeline (Build & Scan)
‣ Developer pushes code to GitHub
‣ Jenkins CI pulls the code and triggers the pipeline
‣ OWASP Dependency-Check scans for vulnerable libraries
‣ SonarQube performs code quality and security analysis
‣ Docker builds the image
‣ Trivy scans the image for vulnerabilities
‣ The image is pushed to the registry

CD Pipeline (Deploy)
‣ Jenkins CD updates the image version
‣ Changes are pushed back to GitHub
‣ ArgoCD pulls the latest changes
‣ The application is deployed to Kubernetes

Monitoring & Alerts
‣ Prometheus collects metrics
‣ Grafana visualizes dashboards
‣ Email notifications report pipeline status

This is what companies expect you to understand:
‣ CI (build + scan)
‣ CD (deploy + automate)
‣ Security (shift-left approach)
‣ Monitoring (production visibility)

If you are preparing for a DevOps interview, check out my DevOps Interview Guide. Get it here: https://lnkd.in/dtkAGrH6 (coupon code: DE20)

#DevOps #CICD #Kubernetes #Docker #DevOpsEngineer #SystemDesign
GitOps-Based DevOps Pipeline with Kubernetes, ArgoCD & CI/CD

I'm excited to share my recent project, Civic Issue Reporter: a full-stack application for reporting and tracking civic/environmental issues, where I implemented a complete end-to-end DevOps workflow using modern cloud-native tools.

🔹 Project Overview
The goal of this project was not just to build an application, but to design a production-like deployment system where everything is automated, from code changes to deployment on Kubernetes.

🔹 What I Built
• Containerized the frontend and backend using Docker with optimized multi-stage builds
• Deployed the application on Azure Kubernetes Service (AKS) using Kubernetes Deployments and Services
• Packaged the application with Helm, enabling reusable, environment-based configurations
• Implemented GitOps with ArgoCD, so the cluster state is automatically synchronized with the Git repository
• Built a CI pipeline with GitHub Actions to build Docker images and push them to Azure Container Registry (ACR)

🔹 End-to-End Workflow
👉 Git Push → GitHub Actions (CI) → ACR → ArgoCD (CD) → Kubernetes Deployment 🚀
Every code change is automatically built, pushed, and deployed, without manual intervention.

🔹 Key Learnings
Some of the most valuable parts of this project were solving real-world issues like:
• Handling Azure public IP limits
• Designing services with ClusterIP for better architecture
• Understanding how GitOps keeps the desired and actual state consistent

🔹 GitOps in Action
For example, when I update the replica count in values.yaml from 2 to 3 and push the change, ArgoCD detects it automatically, syncs the application, and Kubernetes scales the pods, fully automated.

📂 GitHub Repository: https://lnkd.in/gTHSPmeG

This project gave me a strong understanding of how real-world DevOps systems are built with Docker, Kubernetes, Helm, CI/CD, and GitOps. I'm excited to keep learning and building more in the DevOps and cloud space 🚀

#DevOps #Kubernetes #GitOps #ArgoCD #Docker #CICD #Azure #CloudComputing #GitHubProjects #OpenToWork
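The "GitOps in action" scenario above boils down to a one-line change in the chart's values file (the file path and key name here are illustrative, not taken from the project's repository):

```yaml
# charts/civic-issue-reporter/values.yaml (path and key are illustrative)
replicaCount: 3   # changed from 2; after `git push`, ArgoCD detects the
                  # drift, syncs the Application, and Kubernetes scales
                  # the Deployment to 3 pods with no manual kubectl step
```

Because the replica count lives in Git rather than in someone's terminal history, rolling back is just `git revert` on that commit.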
Hey folks 👋,

🚀 From GitHub → Jenkins → Docker: Understanding a Real DevOps Pipeline

Many people focus on learning DevOps tools individually. But the real value lies in understanding how these tools work together in a complete CI/CD workflow. Here's a simplified breakdown 👇

🔹 CI Pipeline (Build & Scan)
• Developer pushes code to GitHub
• Jenkins triggers the pipeline
• OWASP Dependency-Check scans for vulnerable libraries
• SonarQube ensures code quality and security
• Snyk performs security vulnerability scanning
• CodeClimate analyzes code quality and maintainability
• Docker builds the image
• Trivy scans the image for vulnerabilities
• The image is pushed to the registry

🔹 CD Pipeline (Deploy)
• Jenkins updates the image version
• Changes are pushed back to GitHub
• Jenkins pulls the latest updates
• The application is deployed using Docker

🔹 Monitoring & Alerts
• Prometheus collects metrics
• Grafana visualizes dashboards
• Email notifications keep teams informed

💡 What companies really expect you to understand:
✔ CI → build + security checks
✔ CD → deployment + automation
✔ Security → shift-left approach
✔ Monitoring → production visibility

🎥 I've also created a complete end-to-end workflow video using NotebookLM to visualize this entire pipeline in action.

👉 Learning tools is good. Connecting them into a real pipeline is what makes you job-ready.

#DevOps #CI_CD #Jenkins #Docker #GitHub #Monitoring #Cloud #notebooklm #Automation #DevSecOps
🚀 My DevOps Journey: From Manual Deployments to CI/CD Automation!

The Problem: the "manual update" loop 😫
While building my portfolio, I realized that manually uploading files every time I made a change was a bottleneck. In the world of DevOps, if a task is repetitive, it must be automated!

The Challenge: setting up my first GitHub Actions workflow wasn't a walk in the park. I faced:
❌ "Action not found" errors due to naming mismatches
❌ Permission hurdles between the runner and GitHub Pages
❌ Deprecation warnings about Node.js runtimes

The Solution: understanding the pipeline logic 💡
I didn't just "fix" the code; I mastered the flow:
1️⃣ Checkout: sync the repository into the cloud runner
2️⃣ Configure: authenticate the machine for GitHub Pages
3️⃣ Artifacts: bundle the static files into a secure, deployable package
4️⃣ Deploy: turn a simple git push into a live, global URL!

The Result: ✅ My portfolio is now 100% automated. One commit, and the world sees the update. This is the power of a solid CI/CD pipeline! Next stop: advanced Docker and Kubernetes orchestration. 🚀

#DevOps #GitHubActions #Automation #CloudComputing #CICD #SoftwareEngineering #LearningJourney #Bharatops #ZevixDigital #CloudEngineer #TechCommunity
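The four steps above map onto a workflow file roughly like this. It is a sketch: the file name, trigger branch, site path, and action versions are assumptions, so check the current actions/deploy-pages documentation before copying:

```yaml
# .github/workflows/deploy.yml -- sketch of the Pages pipeline described above
name: Deploy portfolio
on:
  push:
    branches: [main]

permissions:
  contents: read
  pages: write        # the Pages permission hurdle from the post
  id-token: write

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    steps:
      - uses: actions/checkout@v4               # 1. sync the repo into the runner
      - uses: actions/configure-pages@v5        # 2. authenticate for GitHub Pages
      - uses: actions/upload-pages-artifact@v3  # 3. bundle the static files
        with:
          path: .                               # assumes the site root is the repo root
      - id: deployment
        uses: actions/deploy-pages@v4           # 4. publish to a live URL
```

Pinning the action major versions (v4/v5/v3/v4) is also what resolves the Node.js runtime deprecation warnings mentioned above.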