📌 Day 9 of My #30DaysOfDevOps Journey 🚀
Today, I continued building out my DevOps skills by working on process monitoring and repository organization.
🔹 I developed a process monitoring script that captures and tracks CPU and memory usage for processes in real time, which is an important step toward understanding system performance and reliability.
🔹 I also restructured my GitHub repository to organize each day’s work into separate folders. This keeps the project clean, scalable, and easier to navigate — a practice that mirrors real-world engineering standards.
You can check out the project here 👉 https://lnkd.in/dN2Jvtt2
Consistent practice is helping me understand how DevOps engineers combine scripting, system insights, and version control to build dependable workflows.
On to Day 10! 🚀
#DevOps #Linux #ShellScripting #ProcessMonitoring #GitHub #RepositoryManagement #LearningInPublic #CloudEngineering #30DaysOfDevOps #LearningWithTSAcademy #30DaysOfTech
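The script itself isn't shown in the post, but the idea can be sketched in a few lines of Bash (the column choices and the "top 5" cutoff are my assumptions, not the repo's actual code):

```shell
#!/usr/bin/env bash
# Minimal process-monitoring sketch (assumed approach, not the exact
# script from the repo): snapshot the top CPU and memory consumers.
echo "=== Snapshot at $(date '+%Y-%m-%d %H:%M:%S') ==="

echo "-- Top 5 by CPU --"
ps -eo pid,comm,%cpu,%mem --sort=-%cpu | head -n 6   # header + 5 rows

echo "-- Top 5 by memory --"
ps -eo pid,comm,%cpu,%mem --sort=-%mem | head -n 6
```

Run it under `watch -n 5` or from a cron job to get the continuous, real-time tracking the post describes.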
Day 9 of 30 Days of DevOps: Process Monitoring and Repository Organization
✨ Learning + Hands-on Project + Consistency = DevOps Engineer (Dream Job) 🚀
🚀 Strengthening My DevOps Foundations – Linux, Git & Docker Deep Dive
After progressing with Ansible automation, I focused on strengthening my core DevOps fundamentals:
🔹 Advanced Git workflows
• Branching, merging, rebasing
• Cherry-pick, reset vs revert
• Interactive rebase & squash
• Understanding the commit graph (DAG model)
🔹 Linux hands-on practice
• File permissions, process monitoring
• Service management
• System resource analysis using top
🔹 Docker architecture practice
• Built a multi-container application using Docker Compose
• Implemented Nginx as a reverse proxy
• Explored internal vs external ports
• Understood Docker bridge networking
• Practiced container isolation and layered architecture
Key takeaway: DevOps is not just about tools — it's about understanding architecture, networking, and system behavior under failure conditions.
Continuing to build production-level thinking step by step.
#DevOps #Docker #Git #Linux #CloudLearning #Infrastructure #ContinuousLearning
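Of the Git topics listed above, reset vs revert is the one that trips people up most. A throwaway-repo sketch (all names here are illustrative) makes the difference concrete:

```shell
set -e
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.email "demo@example.com" && git config user.name "Demo"

echo one > file.txt && git add file.txt && git commit -qm "first"
echo two >> file.txt && git commit -qam "second"

# revert: adds a NEW commit that undoes "second" -- history is preserved
git revert --no-edit HEAD >/dev/null
git rev-list --count HEAD    # 3 commits: first, second, revert of second

# reset: MOVES the branch pointer back, discarding commits entirely
git reset --hard HEAD~2 >/dev/null
git rev-list --count HEAD    # 1 commit: first
```

Rule of thumb: revert on shared branches (history stays intact), reset only on local work you haven't pushed.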
🚀 Creating a Docker Image from a Running Container | Day 39/100 – DevOps Journey
Today’s task in my 100 Days of DevOps challenge involved creating a Docker image from a running container — a common scenario when developers want to preserve changes made during testing.
🔹 What I worked on:
- Identified the running container on the application server
- Created a new Docker image from that container
- Tagged the image as official:devops for future use
- Learned how container state can be captured as an image
🔐 Why this matters in real environments:
During development and testing, containers may be modified with new configurations or tools. Creating an image from that container allows teams to:
- Preserve the tested environment
- Share the setup with other developers
- Reuse the configuration for future deployments
Continuing to strengthen hands-on skills in Docker, container lifecycle management, and DevOps practices, while sharing my learning publicly.
GitHub: https://lnkd.in/gnBnU_cv
#100DaysOfDevOps #DevOps #Docker #Containerization #Linux #CloudEngineering #LearningInPublic #TechCareers #OpenToWork #KodeKloud
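In command form, the task boils down to `docker ps` plus `docker commit` (the container name `app-container` is hypothetical; the `official:devops` tag is from the task). This requires a running Docker daemon:

```shell
docker ps                                     # identify the running container
docker commit app-container official:devops   # capture its state as a new image
docker images official                        # verify the image now exists
docker run --rm -it official:devops sh        # reuse the captured environment
```

Note that `docker commit` captures the filesystem state but not data in mounted volumes.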
🚀 DevOps Practice Update
Today I practiced deploying a 2-tier application using Docker Compose. Here’s what I did step by step:
🔹 Built and ran the application containers using docker-compose up -d
🔹 Verified that the services were running correctly
🔹 Stopped and removed all containers
🔹 Re-ran the same command to recreate the entire environment
💥 Result: The whole application stack started again instantly and worked perfectly.
This small practice shows the power of containerization and infrastructure reproducibility. With Docker Compose, you can define an entire multi-service architecture in one configuration file and bring it up or down with a single command.
It's exciting to see how DevOps tools make application deployment faster, cleaner, and more reliable.
I’m continuing my journey learning Docker, containers, and DevOps practices step by step. More experiments coming soon!
#DevOps #Docker #DockerCompose #Containerization #CloudComputing #Linux #LearningInPublic #TechJourney #BackendDevelopment bongoDev
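For reference, a 2-tier stack like this fits in a single docker-compose.yml; the services and images below are illustrative, not the exact app from the post:

```yaml
# Minimal 2-tier sketch: a web tier plus a database tier
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"        # host:container
    depends_on:
      - db
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: example   # demo only; use secrets in real setups
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

`docker-compose up -d` creates everything, `docker-compose down` tears it all away, and running `up -d` again recreates the identical environment, which is exactly the reproducibility the post demonstrates.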
🚀 Month One of My DevOps Journey with Digilians – More Than Just Linux Commands
The first month of my DevOps journey with Digilians has officially come to an end — and the biggest takeaway goes far beyond memorizing Linux commands. 🐧
It’s not about knowing how to run chmod or writing a simple Bash script. It’s about building the right foundation.
🔎 Week 4 Focus: Building a Strong Technical Foundation
This week was centered around leveraging Linux with an automation-driven mindset, including:
1️⃣ File Systems & Permissions
Managing file systems with a security-first approach — understanding not just how permissions work, but why they matter in protecting infrastructure.
2️⃣ Process Management & System Monitoring
Controlling processes efficiently and monitoring system performance in real time to ensure reliability and stability.
3️⃣ Bash Scripting for Automation
Writing effective Bash scripts that eliminate repetitive manual tasks — allowing more time to focus on higher-value engineering work.
🛠 Practical Implementation
To apply these concepts, I built:
✅ A Mini Task Manager
This hands-on experience reinforced a powerful lesson: DevOps is not just about tools. It’s a mindset and a culture. It starts from the ground up — sometimes from the very first Bash script you write to eliminate unnecessary manual effort.
The journey is just beginning… and I’m excited for what’s next. 🙏
#Digilians #DevOps_Digi #Digi_Software_DevOps #DevOpsJourney #Linux #Automation #ContinuousLearning
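The post doesn't include the Mini Task Manager code, so here is a guessed-at minimal version of the idea: a few Bash functions wrapping ps (the function names and the top-N default are my own, not the project's):

```shell
#!/usr/bin/env bash
# Hypothetical mini task manager: small functions wrapping ps.

list_top() {    # show the top N processes by CPU (default 5)
    ps -eo pid,comm,%cpu,%mem --sort=-%cpu | head -n "$(( ${1:-5} + 1 ))"
}

find_task() {   # find processes whose command name matches a pattern
    ps -eo pid,comm | grep -i -- "$1"
}

task_count() {  # count all running processes
    ps -e --no-headers | wc -l
}

list_top 3
echo "Total processes: $(task_count)"
```

Even a script this small removes the repetitive manual work of retyping ps flags, which is the automation mindset the post describes.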
🚀 DevOps Journey – Hands-On Progress | Post #3
Continuing from my last post where I set up VMware on M1, explored Vagrant, and started touching Linux basics... Here's what I completed recently:
🔹 Linux Basics – Completed
I wrapped up the core Linux fundamentals that every DevOps engineer needs:
• Navigating the file system (cd, ls, pwd)
• File & directory operations (cp, mv, rm, mkdir)
• User & group permissions (chmod, chown)
• Viewing & editing files (cat, vim, nano)
• Process management (ps, kill, top)
• Package management (apt / yum)
Linux is the base layer of almost every server and cloud environment — getting this right before moving forward felt essential.
🔹 Git & Version Control – Basics Done
Started and completed the fundamentals of Git:
• Initializing repos and understanding the Git workflow
• Staging, committing, and pushing changes
• Branching and merging basics
• Cloning remote repositories
• Understanding why version control matters in team environments
Git is everywhere — whether you're a developer or a DevOps engineer, you simply can't avoid it.
💡 What I'm Realizing:
Every tool in DevOps builds on the previous one. Linux gives you control over the system. Git gives you control over the code. These two basics together are already making the next topics feel much more approachable.
Still going — step by step. 🧱
If you're also learning DevOps or have tips for someone at this stage, I'd love to hear from you! 👇
#DevOps #Linux #Git #VersionControl #SoftwareEngineering #LearningInPublic #RemoteWork #Cloud
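As a quick worked example of the permissions commands listed above (the file name is illustrative; `stat -c` is the GNU coreutils form):

```shell
cd "$(mktemp -d)"               # scratch directory
touch notes.txt
chmod 640 notes.txt             # owner rw-, group r--, others ---
stat -c '%a %n' notes.txt       # prints: 640 notes.txt
chmod u+x notes.txt             # add execute for the owner only
stat -c '%a %n' notes.txt       # prints: 740 notes.txt
```

Reading the octal digits as owner/group/other bit sums (r=4, w=2, x=1) is usually the fastest way to internalize chmod.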
You Can’t Get Good At DevOps Without Doing DevOps.
Most people want to become great engineers. But they’re stuck in “learning mode.”
You can’t get good at Kubernetes without breaking a cluster.
You can’t master CI/CD without deploying something that fails.
You can’t understand Linux by just watching tutorials.
You have to touch it. You have to break it. You have to fix it.
Watching 50 DevOps videos won’t replace:
• One real production issue
• One late-night troubleshooting session
• One pipeline you built from scratch
Reading about infrastructure isn’t the same as provisioning it.
Consuming content feels productive. Doing the work is productive.
At some point, you have to leave the comfort of tutorials and go to the field.
Build. Deploy. Debug. Repeat. That’s how engineers are made.
#DevOps #CloudComputing #TechCareers #Linux #Kubernetes #GrowthMindset
Day 28 of #90DaysOfDevOps — Revision & Reflection
A few days ago, I dedicated time to reviewing everything I learned in the first 27 days of my DevOps journey. Instead of learning new concepts, this day was about strengthening the fundamentals and identifying areas that need more practice.
Topics Revised:
🔹 DevOps & Cloud Basics — SDLC, DevOps culture, cloud fundamentals
🔹 Linux Fundamentals — filesystem, processes, systemd, troubleshooting
🔹 Users & Permissions — managing users, groups, and file permissions
🔹 LVM & Networking — storage management, DNS, IP, ports, connectivity checks
🔹 Shell Scripting — variables, loops, functions, automation scripts
🔹 Git & GitHub — branching, merging, rebasing, stash, reset, revert
🔹 GitHub CLI & Profile Branding
What I Focused On:
✔ Self-assessment of Linux, shell scripting, and Git skills
✔ Revisiting topics where I needed more clarity
✔ Answering quick-fire DevOps questions from memory
✔ Organizing and verifying all work from Day 1 – Day 27 in my GitHub repository
💡 Key Takeaway
In DevOps, strong fundamentals are more important than rushing into new tools. Taking time to revise and practice ensures long-term understanding.
🔗 GitHub Repository: https://lnkd.in/gn7iU4KF
Documented my revision notes here: 📂 https://lnkd.in/gh6Rx-uf
The journey continues toward more automation, containers, and infrastructure tools ahead.
#DevOps #90DaysOfDevOps #Linux #Git #Automation #LearningInPublic #CloudComputing #OpenSource #DevOpsJourney #TrainwithShubham
Day 1 of #DockerIn14Days
I used to think deployment issues were just part of the job. Turns out — they're a solved problem. And I was just late to the solution.
Docker packages your app and everything it needs into a single container. That container runs identically everywhere. Your machine. Your teammate's machine. Production server. All the same.
No surprises. No dependency mismatches. No "works on my machine" moments. No 2am deployment fires.
Simple in theory. Powerful in practice.
I'm covering one Docker concept a day for 14 days — starting from the basics and going all the way to CI/CD pipelines. If you're learning DevOps or just Docker curious — follow along. This one's for us 🐳
Day 2 tomorrow: Containers vs VMs — they're not the same thing, and the difference will change how you think about infrastructure.
#DockerIn14Days #Docker #DevOps #DevOpsEngineer #Linux #Containerization #LearnInPublic #CloudComputing #SoftwareEngineering #devopswithrahul
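To make "packages your app and everything it needs" concrete, here is a minimal illustrative Dockerfile (a Python app is just an example; the file names are placeholders):

```dockerfile
# Pinned base image so builds are reproducible
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first so they are cached as their own layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code
COPY . .
# The same entrypoint runs on every machine
CMD ["python", "app.py"]
```

`docker build -t myapp .` produces one image; `docker run myapp` then behaves identically on a laptop, a teammate's machine, or a production server.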
🚀 Selective Commit Merge with Git Cherry-Pick | Day 28/100 – DevOps Journey
Today’s task in my 100 Days of DevOps challenge from KodeKloud was simple but reflects an important real-world workflow — merging a specific commit from a feature branch into the main branch without merging the entire branch.
🔹 What I worked on:
- Identified the required commit from the feature branch
- Merged only that commit into the master branch using a targeted approach
- Maintained ongoing feature development without disruption
- Pushed the updated changes to the remote repository
🔐 Why this matters in real environments:
Sometimes teams need a bug fix or small update immediately while larger feature work is still in progress. Selective merging helps deliver changes faster without risking unfinished work.
💡 Key takeaway: Git allows precise control over what gets released — not every change needs a full branch merge.
GitHub: https://lnkd.in/gnBnU_cv
Continuing to strengthen hands-on experience in Git workflows, Linux, and collaborative DevOps practices, while sharing my learning publicly.
#100DaysOfDevOps #DevOps #Git #CherryPick #VersionControl #Linux #CloudEngineering #LearningInPublic #TechCareers #OpenToWork #KodeKloud
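The "targeted approach" here is git cherry-pick. A throwaway-repo sketch of the flow (branch and file names are illustrative):

```shell
set -e
cd "$(mktemp -d)" && git init -q
git config user.email "demo@example.com" && git config user.name "Demo"
echo base > app.txt && git add app.txt && git commit -qm "base"
main=$(git symbolic-ref --short HEAD)    # master or main, depending on config

git checkout -qb feature
echo wip > feature.txt && git add feature.txt && git commit -qm "unfinished feature"
echo fix > hotfix.txt  && git add hotfix.txt  && git commit -qm "urgent fix"
FIX=$(git rev-parse HEAD)                # the one commit we need right now

git checkout -q "$main"
git cherry-pick "$FIX" >/dev/null        # merge ONLY that commit
ls                                       # app.txt hotfix.txt -- no feature.txt
```

The main branch picks up the fix while the unfinished feature work stays on its own branch, untouched.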
🚀 What is Helm in Kubernetes?
When working with Kubernetes, managing multiple YAML files for deployments, services, configs, and secrets can quickly become complex. This is where Helm helps.
Helm is the package manager for Kubernetes. It lets DevOps engineers package Kubernetes resources into reusable units called Helm charts.
📌 Why Helm is important
✔ Simplifies Kubernetes deployments
✔ Packages multiple YAML files into a single chart
✔ Supports reusable templates
✔ Provides version control and easy rollbacks
Helm works much like APT/YUM for Linux or npm for Node.js, but is designed specifically for Kubernetes applications.
#DevOps #Kubernetes #Helm #CloudComputing #PlatformEngineering #LearningInPublic #DevOpsEngineer
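In practice that looks like a handful of CLI commands (release and chart names are illustrative; everything past `helm create` assumes a reachable cluster):

```shell
helm create mychart                 # scaffold a chart (templates/, values.yaml)
helm install myapp ./mychart        # render the templates and deploy a release
helm upgrade myapp ./mychart --set replicaCount=3   # change a value, roll forward
helm rollback myapp 1               # easy rollback to a previous revision
helm list                           # see installed releases and their revisions
```

The chart is the package; the release is one installed instance of it, tracked by revision so upgrades and rollbacks stay cheap.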