🚀 𝟗𝟎 𝐃𝐚𝐲𝐬 𝐨𝐟 𝐃𝐞𝐯𝐎𝐩𝐬 | 𝐋𝐞𝐚𝐫𝐧𝐢𝐧𝐠 𝐢𝐧 𝐏𝐮𝐛𝐥𝐢𝐜 | 𝐇𝐚𝐧𝐝𝐬-𝐎𝐧 | 𝐏𝐫𝐨𝐣𝐞𝐜𝐭𝐬

🛑 Hitting Pause: DevOps Revision Day! Day 28 of #90DaysOfDevOps is in the books! ✅

Before moving on to cloud and CI/CD tools, today was dedicated entirely to revising everything covered over the last four weeks. To become a reliable DevOps Engineer or SRE, you can't just memorize commands; you need muscle memory.

Today I re-tested myself on:
🔹 Linux system architecture & LVM
🔹 Advanced text processing with grep, awk, sed, sort, and uniq
🔹 Writing strict, error-proof Bash scripts and scheduling them with crontab
🔹 Error handling with set -e, set -u, set -o pipefail, and trap
🔹 Parallel Git workflows (rebasing, stashing, cherry-picking)

💡 My favorite exercise today: "Teach It Back"
Einstein said, "If you can't explain it simply, you don't understand it well enough." I challenged myself to explain Git branching to a non-technical person with two analogies: co-authoring a book, and "safe recipe testing". (Check out my notes below to read it!)

🚀🔗💻 GitHub Repo: https://lnkd.in/dQAN6nWE

#90DaysOfDevOps #DevOpsKaJosh #TrainWithShubham #Linux #ShellScripting #Git #SiteReliabilityEngineering
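A minimal sketch of the strict-Bash pattern revised above (the file contents and the sort/uniq drill are just illustrative):

```shell
#!/usr/bin/env bash
# Strict mode: exit on any error (-e), on unset variables (-u),
# and on a failure anywhere in a pipeline (pipefail).
set -euo pipefail

tmpfile=$(mktemp)
cleanup() {
  rm -f "$tmpfile"   # runs on any exit, success or failure
}
trap cleanup EXIT

# Tiny text-processing drill: count unique lines with sort | uniq
printf 'alpha\nbeta\nalpha\n' > "$tmpfile"
unique_count=$(sort "$tmpfile" | uniq | wc -l)
echo "unique lines: $unique_count"
```

Scheduling a script like this is then a single crontab entry, e.g. `0 2 * * * /path/to/script.sh`.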
More Relevant Posts
From GitHub → Jenkins → Docker → Kubernetes - the complete DevOps workflow.

Many people learn DevOps tools individually. But the real value comes from understanding how these tools work together in a real pipeline. Here’s a simplified breakdown of the 𝐞𝐧𝐝-𝐭𝐨-𝐞𝐧𝐝 𝐂𝐈/𝐂𝐃 𝐟𝐥𝐨𝐰 shown in the diagram.

𝐂𝐈 𝐏𝐢𝐩𝐞𝐥𝐢𝐧𝐞 (𝐁𝐮𝐢𝐥𝐝 & 𝐒𝐜𝐚𝐧)
‣ Developer pushes code to GitHub
‣ Jenkins CI pulls the code and triggers the pipeline
‣ OWASP Dependency-Check scans for vulnerable libraries
‣ SonarQube performs code quality & security analysis
‣ Docker builds the image
‣ Trivy scans the image for vulnerabilities
‣ Image is pushed to the registry

𝐂𝐃 𝐏𝐢𝐩𝐞𝐥𝐢𝐧𝐞 (𝐃𝐞𝐩𝐥𝐨𝐲)
‣ Jenkins CD updates the image version
‣ Changes are pushed back to GitHub
‣ ArgoCD pulls the latest changes
‣ Deploys the application to Kubernetes

𝐌𝐨𝐧𝐢𝐭𝐨𝐫𝐢𝐧𝐠 & 𝐀𝐥𝐞𝐫𝐭𝐬
‣ Prometheus collects metrics
‣ Grafana visualizes dashboards
‣ Email notifications for pipeline status

𝐓𝐡𝐢𝐬 𝐢𝐬 𝐰𝐡𝐚𝐭 𝐜𝐨𝐦𝐩𝐚𝐧𝐢𝐞𝐬 𝐞𝐱𝐩𝐞𝐜𝐭 𝐲𝐨𝐮 𝐭𝐨 𝐮𝐧𝐝𝐞𝐫𝐬𝐭𝐚𝐧𝐝:
‣ CI (build + scan)
‣ CD (deploy + automate)
‣ Security (shift-left approach)
‣ Monitoring (production visibility)

#Kubernetes #Helm #DevOps #CloudNative #Containers #Pod #YAML #ZeroToOne #Git #GitHub #Linux #VersionControl #CICD #Docker #Terraform #Script #AWS #GCP #Azure #SDLC #DevOpsLife #SRE #DevOpsEngineer #Jenkins #Automation
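As a rough sketch of the CI half of that flow, a declarative Jenkinsfile could look like the following. This is illustrative only: the stage commands assume the scanner CLIs are on the agent's PATH, and `registry.example.com/app` is a placeholder image name, not a real setup.

```groovy
// Hypothetical Jenkinsfile: one stage per CI step from the diagram.
pipeline {
  agent any
  stages {
    stage('Checkout')   { steps { checkout scm } }
    stage('Dep Scan')   { steps { sh 'dependency-check.sh --scan .' } }           // OWASP Dependency-Check
    stage('SonarQube')  { steps { sh 'sonar-scanner' } }                          // code quality & security
    stage('Build')      { steps { sh 'docker build -t registry.example.com/app:${BUILD_NUMBER} .' } }
    stage('Image Scan') { steps { sh 'trivy image registry.example.com/app:${BUILD_NUMBER}' } }  // image CVEs
    stage('Push')       { steps { sh 'docker push registry.example.com/app:${BUILD_NUMBER}' } }
  }
}
```

The CD half would then live outside Jenkins: a commit bumping the image tag in a Git repo that ArgoCD watches and syncs to Kubernetes.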
Really love this 👏 — This is the architecture I built while implementing a full CI/CD pipeline with Jenkins, Docker, Kubernetes, and AWS on my learning journey. The real value isn’t just knowing tools like Jenkins, Docker, Kubernetes, and AWS — it’s understanding how they work together in a full CI/CD pipeline. 💡 One key addition: Terraform and Ansible power the foundation — Terraform provisions AWS infrastructure, Ansible configures it, and CI/CD handles build and deployment. That full flow is what makes DevOps powerful 🚀 #DevOps #AWS #Terraform #Ansible #Jenkins #Docker #Kubernetes #CICD
📘 Git Commands Cheat Sheet for DevOps Engineers

In DevOps, Git is more than version control—it's the backbone of CI/CD, collaboration, and code management. I’ve compiled a practical Git commands guide covering:
✔️ Setup & configuration
✔️ Staging & committing
✔️ Branching & merging
✔️ Rebase & squash (clean history)
✔️ Undoing changes (reset, revert)
✔️ Debugging tools (log, diff, reflog, stash)

💡 Key takeaway: understanding commands like git rebase (including squashing via interactive rebase) and git reset can significantly improve code history, collaboration, and troubleshooting in real-world projects.

This guide is useful for:
🔹 DevOps Engineers working on pipelines & automation
🔹 Developers improving Git workflows
🔹 Beginners preparing for interviews

#DevOps #Git #VersionControl #CI_CD #SRE #Cloud #SoftwareEngineering #Linux
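A throwaway repo is a safe place to drill the "undoing changes" commands; here's a hypothetical warm-up for reset and reflog (identity and file names are made up):

```shell
# Scratch repo so nothing real is at risk
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo one >  file.txt && git add file.txt && git commit -qm "first"
echo two >> file.txt && git commit -qam "second"

# Undo the last commit but keep its changes in the working tree / index
git reset --soft HEAD~1
echo "commits now: $(git rev-list --count HEAD)"

# reflog still remembers the undone commit, so nothing is truly lost
git reflog | head -n 2
```

The same sandbox works for practicing `git revert`, `git stash`, and interactive rebase before trying them on a shared branch.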
Hey Techies 👋,

DevOps Reality Check: when even GitHub becomes unreachable...

Today’s task looked simple: push code, trigger my Jenkins pipeline, and continue working on my Docker setup. But instead, I hit this:
👉 fatal: unable to access 'https://github.com/...'
👉 Could not resolve host: github.com

At first, it felt like a blocker. But in DevOps, these “small” errors often teach the biggest lessons. After digging deeper, I realized the issue wasn’t with Git or Jenkins; it was a DNS/network issue on my remote server (accessed via SSH).

How I solved it:
- Checked internet connectivity on the remote machine
- Verified the DNS configuration in /etc/resolv.conf
- Ensured a proper nameserver (like 8.8.8.8) was set
- Restarted network services
- Re-tested with ping github.com

And finally… connection restored, code pushed, pipeline back on track.

Key takeaway: 𝐍𝐨 𝐦𝐚𝐭𝐭𝐞𝐫 𝐡𝐨𝐰 𝐚𝐝𝐯𝐚𝐧𝐜𝐞𝐝 𝐲𝐨𝐮𝐫 𝐂𝐈/𝐂𝐃 𝐩𝐢𝐩𝐞𝐥𝐢𝐧𝐞 𝐢𝐬, 𝐞𝐯𝐞𝐫𝐲𝐭𝐡𝐢𝐧𝐠 𝐝𝐞𝐩𝐞𝐧𝐝𝐬 𝐨𝐧 𝐭𝐡𝐞 𝐛𝐚𝐬𝐢𝐜𝐬: 𝐧𝐞𝐭𝐰𝐨𝐫𝐤𝐢𝐧𝐠 𝐚𝐧𝐝 𝐜𝐨𝐧𝐧𝐞𝐜𝐭𝐢𝐯𝐢𝐭𝐲.

This was a reminder that DevOps isn’t just automation… it’s also patience, debugging, and understanding systems from the ground up.

Have you ever been stuck because of something as simple as DNS?

#DevOps #Jenkins #Docker #GitHub #CICD #Troubleshooting #LearningInPublic #WomenInTech #CloudComputing
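As a small illustration, the resolv.conf check can be scripted. The helper name and the sample file are made up for this sketch; it only parses a resolv.conf-style file, it doesn't touch the live network config:

```shell
# Hypothetical helper: list configured nameservers from a resolv.conf-style
# file, and warn when none are set (the failure mode behind
# "Could not resolve host: github.com").
check_nameservers() {
  local conf=$1
  local servers
  servers=$(awk '/^nameserver/ {print $2}' "$conf")
  if [ -z "$servers" ]; then
    echo "no nameserver configured - consider adding 8.8.8.8"
    return 1
  fi
  echo "$servers"
}

# Try it on a sample file rather than the real /etc/resolv.conf
sample=$(mktemp)
printf 'search example.com\nnameserver 8.8.8.8\n' > "$sample"
check_nameservers "$sample"
```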
Episode 10 of my journey to becoming a DevOps Engineer 🚀

In this episode, I’m diving into Docker and containerization.

Before containerization, we relied heavily on virtual machines (VMs) to run multiple applications or services on a single server or PC. However, each VM requires its own operating system, which makes them heavy, slower to boot, and resource-intensive. To solve these challenges, containerization emerged:
1. In 2006, cgroups were introduced
2. In 2008, LXC (Linux Containers) came along
3. In 2013, Docker was released — and it quickly became the most popular containerization platform

Containers are lightweight because they share the host OS kernel. This means:
1. Faster startup times ⚡
2. Better resource efficiency 💻
3. Reduced costs (time, infrastructure, and maintenance) 💰

🔧 Docker Runtime
The core server-side engine of Docker is dockerd (the Docker daemon). It delegates container lifecycle management to containerd, which in turn uses the low-level runtime runc to actually create and run containers.

📦 Key Docker Components
1. Dockerfile – a script used to build Docker images
2. Image – a blueprint or snapshot of a container
3. Container – a running instance of an image
4. Volume – persistent storage for containers
5. Network – enables communication between containers

Installing Docker (Debian/Ubuntu):
sudo apt update
sudo apt install docker.io
sudo usermod -aG docker $USER
(then log out and back in, or reboot, so the group change takes effect)

Downloading an image:
docker pull <image_name>:latest

Running a container:
docker run <image_name>:latest

Executing a command inside a running container:
docker exec -it <container_id> <command>

#AWS #Python #DevOps #Debugging #Learning #Programming #PDB #VSCode #CloudEngineering #CICD #Linux #GitHub #Git #bongoDev #Networking #InfrastructureAsCode #DevOpsJourney #CloudComputing #LearningInPublic
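For the Dockerfile component listed above, a minimal hypothetical example (the `app.sh` script and image choice are placeholders, not from a real project):

```dockerfile
# Recipe (Dockerfile) -> snapshot (image) -> running instance (container)
FROM ubuntu:22.04
# Install only what's needed, then clear apt caches to keep the image small
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
COPY app.sh /usr/local/bin/app.sh
CMD ["/usr/local/bin/app.sh"]
```

Build and run it with `docker build -t myapp .` followed by `docker run myapp`.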
My "Indentation vs. Reality" DevOps Moment

If you’ve ever spent an hour fighting a single space in a text file, you’re officially a DevOps Engineer. Today wasn't just about writing YAML; it was about the moment I realized that GitHub Actions isn't magic—it’s just a fresh, empty computer waiting for instructions.

The "Ghost" Repository 👻
My first run failed. Why? Because I forgot the checkout step. I was staring at the logs thinking, "I pushed the code, why can't the runner see it?" That was my first big realization: the runner is a blank slate. It’s a brand-new Ubuntu machine that has never seen my code. Unless I explicitly tell it to "checkout" my repo, it’s just sitting there in an empty room.

Breaking Things (The Real Learning)
I hit a "command not found" (exit code 127). The culprit: a tiny syntax error in my echo command. The lesson: in DevOps, a single misplaced space doesn't just look messy—it breaks the entire factory line.

My "Aha!" Moments:
- Ephemeral power: the runner lives, executes my steps, and then "dies." It’s fresh every single time.
- Shared workspace: I created a test.txt in Step 1 and successfully read it in Step 4. Seeing that file persist across steps made the "environment" concept click.
- The hidden world: running ls -la and seeing the .git folder inside the runner made me realize, "Okay, my code is actually physically here now."

The Verdict
Real learning isn't copying a perfect YAML template from a tutorial. It’s about breaking the pipeline, digging through the logs, and finally seeing that green checkmark appear. DevOps is 10% writing code and 90% understanding exactly why that code failed.

Next up: environment variables and secrets. Let's see what I break tomorrow! 🛠️

#DevOps #GitHubActions #CICD #LearningInPublic #Cloud #Automation #Linux #TechJourney #BuildInPublic #DevOpsEngineer
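The lessons above can be sketched as one minimal workflow (names and the echoed text are illustrative; `actions/checkout` is the step a first run is easy to forget):

```yaml
# .github/workflows/hello.yml — hypothetical minimal example
name: hello
on: push
jobs:
  build:
    runs-on: ubuntu-latest        # a fresh, ephemeral VM every run
    steps:
      - uses: actions/checkout@v4 # without this, the runner never sees the repo
      - name: Create a file
        run: echo "hi" > test.txt
      - name: Read it in a later step
        run: cat test.txt         # same workspace persists across steps in a job
```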
𝗗𝗮𝘆 𝟯𝟬 𝗼𝗳 𝗺𝘆 𝗗𝗲𝘃𝗢𝗽𝘀 𝗝𝗼𝘂𝗿𝗻𝗲𝘆 💻

Cleaning up Git history — used a hard reset to remove unwanted commits and restore a clean state 🔥

𝗧𝗮𝘀𝗸: Git Hard Reset

𝗪𝗵𝗮𝘁 𝗜 𝗹𝗲𝗮𝗿𝗻𝗲𝗱 𝘁𝗼𝗱𝗮𝘆:
• How `git reset --hard` rewrites commit history
• Difference between `git revert` (safe) and `git reset` (destructive)
• Importance of identifying the correct commit before resetting
• Why a force push is required after a history rewrite
• Risks of using hard reset in shared environments

𝗪𝗵𝗮𝘁 𝗜 𝗯𝘂𝗶𝗹𝘁 / 𝗽𝗿𝗮𝗰𝘁𝗶𝗰𝗲𝗱:
• Navigated to repo `/usr/src/kodekloudrepos/beta`
• Checked commit history using `git log --oneline`
• Identified the target commit (`add data.txt file`)
• Performed `git reset --hard <commit-id>`
• Verified only the required commits remain
• Force pushed using `git push origin master --force`

𝗖𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲𝘀:
• Understanding when it’s safe to rewrite history
• Ensuring the correct commit is selected before the reset
• Awareness of the impact on remote repositories

𝗙𝗶𝘅 / 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴:
• `git reset --hard` discards later commits and uncommitted changes (though `git reflog` can often recover the commits)
• Understood why a force push is necessary after a reset
• Realized this approach should be used carefully in team environments
• Gained clarity on cleanup strategies for test repositories

𝗞𝗲𝘆 𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆:
With great power comes great responsibility — `git reset --hard` is powerful, but should be used only when you’re absolutely sure. This felt like performing a controlled cleanup in a real DevOps environment 🚀

When do you prefer reset vs revert in your workflow?

#Day30 #DevOps #Git #VersionControl #Linux #Automation #CloudComputing #AWS #DevOpsJourney #LearningInPublic #100DaysOfDevOps
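A safe way to rehearse this is in a scratch repo before touching anything shared. A hypothetical run (identity, file, and commit messages are made up):

```shell
# Throwaway repo mirroring the task: keep the "add data.txt file" commit,
# drop everything after it with a hard reset.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

echo base > data.txt && git add data.txt && git commit -qm "add data.txt file"
keep=$(git rev-parse HEAD)                 # the commit we want to keep
echo oops >> data.txt && git commit -qam "unwanted commit"

git reset --hard "$keep"                   # destructive: later commits are dropped
git log --oneline                          # only the kept commit remains
```

On a real remote branch, this is the point where `git push origin master --force` would be needed — which is exactly why it's worth practicing in a sandbox first.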
Had a great learning experience today attending a 2-hour Docker hands-on session by Vikas Ratnawat with the CloudDevOpsHub community. 🚀

The session was very practical and beginner-friendly. Instead of only discussing theory, we actually worked on real-time Docker concepts and implementations.

Topics covered:
🔹 Understanding Docker architecture
🔹 Creating and running containers
🔹 Setting up a web server inside Docker
🔹 Installing Jenkins using Docker

What I liked most was the way every concept was explained with real-world examples and hands-on practice. It made the learning process simple, clear, and easy to apply in real projects.

Overall, it was an informative and valuable session. Looking forward to attending more practical DevOps sessions like this!

#Docker #DevOps #Jenkins #CloudComputing #Containerization #Learning #CloudDevOpsHub #VikasRatnawat
🐳 Docker Basics Made Simple: Named Volume vs Anonymous Volume

Understanding Docker storage is a must for anyone in DevOps 🚀 Here’s a quick breakdown 👇

🔹 Named Volume
✔ Created with a specific name
✔ Easy to manage and reuse
✔ Ideal for production environments
✔ Example: docker run -d -v mydata:/app ubuntu

🔹 Anonymous Volume
✔ No name (auto-generated by Docker)
✔ Hard to track and reuse
✔ Mostly used for temporary data
✔ Example: docker run -d -v /app ubuntu

⚖️ Key Difference
👉 Named volumes are persistent and reusable
👉 Anonymous volumes are temporary and harder to manage

⚠️ Interview Tip
Anonymous volumes are NOT deleted by a plain `docker rm`; they linger and consume space. (They are cleaned up if you remove the container with `docker rm -v`, or if it was started with `--rm`.)
🧹 Cleanup command: docker volume prune

💡 Pro Tip
Use named volumes in production and anonymous volumes for quick testing.

#Docker #DevOps #CloudComputing #SRE #Containers #Learning #TechTips