Day 5/100: Setting up the Command Line Lab 🖥️

Today was all about the terminal. No "Next-Next-Finish" installers, just pure Linux package management to set up my DevOps workstation. I spent the day configuring my local environment on a Red Hat-based system using dnf and yum. Setting up these tools via the CLI is the best way to understand how we'll eventually manage thousands of servers in the cloud.

What's now ready in my lab:
🏗️ Virtualization: VirtualBox 7.1 & Vagrant (ready to spin up test VMs).
☕ Java Stack: OpenJDK 17 & Maven (for building enterprise-grade apps).
🌿 Version Control: Git (installed and ready for the first commit).
📝 Editor: VS Code (configured via the official Microsoft repo).

The "Aha!" Moment: realizing that every tool we use, from Vagrant to VS Code, can be installed and updated with simple commands. This is the first step toward Infrastructure as Code (IaC).

With the lab officially built, I'm ready to dive into the core of every DevOps engineer's life.

#100DaysOfDevOps #100DaysOfDevOpsChallenge #DevOps #Linux #Vagrant #VirtualBox #Automation #LearningInPublic #CloudEngineer
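For the record, the "official Microsoft repo" step above boils down to one small file under /etc/yum.repos.d/. A sketch of what it typically contains, following Microsoft's published RHEL/Fedora instructions (verify against the current docs before pasting):

```ini
[code]
name=Visual Studio Code
baseurl=https://packages.microsoft.com/yumrepos/vscode
enabled=1
gpgcheck=1
gpgkey=https://packages.microsoft.com/keys/microsoft.asc
```

After importing the signing key with `sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc`, a plain `sudo dnf install code` pulls the editor like any other package — which is exactly the "everything via the package manager" point of the post.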
Setting up a DevOps workstation with Linux package management
More Relevant Posts
🚨 The first time I opened a Linux server… I saw folders like `/bin`, `/etc`, `/var`, `/home` and thought: "What is all this? And why does everything start with `/`?" 😅

That confusion led me to learn something very important for DevOps.

💡 Day 9 of my DevOps Journey

Today I learned about the Linux File System, the structure that organizes everything inside a Linux system.

📖 Think of it like a map of the entire operating system. In Linux, everything starts from a single root directory:
👉 `/` (root)

From there, the system branches into different directories, each with a specific purpose.

Here are some important ones I learned today:
📁 /bin: essential command binaries like `ls`, `cp`, `mv`.
📁 /etc: system configuration files.
📁 /home: personal directories for users.
📁 /var: logs, cache, and variable data.
📁 /usr: system programs and utilities.
📁 /tmp: temporary files used by applications.

🚀 Why this matters in DevOps: when you manage servers, deploy applications, or troubleshoot issues, you constantly interact with these directories. Knowing where things live in Linux saves a lot of time when debugging systems.

🔥 My biggest realization today: Linux is not random. Every directory has a specific role in keeping the system organized and stable.

📌 Small knowledge like this becomes powerful when working with servers and cloud infrastructure. And that's exactly what DevOps engineers deal with daily.

💬 DevOps engineers here: which Linux directory confused you the most when you started? 😅

Learning step by step 🚀

#DevOps #Linux #LinuxFileSystem #LearningInPublic #DevOpsInsiders #TechJourney
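The map above is easy to explore hands-on. A few read-only commands (safe to run on any Linux box — nothing here modifies the system):

```shell
# Every path hangs off the root "/": list the directories from the post
ls -ld /bin /etc /home /var /usr /tmp

# On modern distros /bin is often just a symlink into /usr
readlink /bin 2>/dev/null || echo "/bin is a real directory here"

# And the essential binaries really do live on that branch:
command -v ls cp mv
```

Running `ls /` as a first step on any unfamiliar server gives you the whole top level of this map at a glance.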
🔐 Day 4 of #100DaysOfDevOps: Linux file permissions

Today's task: a backup script existed on a production server, but no one could run it. One missing permission bit was the culprit. Here's what I learned:

Every Linux file has a 10-character permission string like -rw-r--r--. It's split into 3 groups:
→ Owner (the user who owns the file)
→ Group (a team sharing access)
→ Others (everyone else)

Each group gets 3 bits: r (read=4), w (write=2), x (execute=1).

The fix was one command:
chmod a+x /tmp/xfusioncorp.sh
or
chmod 755 /tmp/xfusioncorp.sh
(a = all users | +x = add execute permission)

Before: -rw-r--r-- (no one can run it)
After: -rwxr-xr-x (everyone can run it)

Why does this matter in DevOps?
→ Automation scripts fail silently when permissions are wrong
→ CI/CD pipelines break if deploy scripts aren't executable
→ Every cloud server you ever manage will need this

The numeric equivalent: chmod 755, meaning the owner gets rwx (7), the group gets r-x (5), and others get r-x (5).

"r = read, w = write, x = execute. Three bits. Three groups. That's all of Linux permissions."

#DevOps #Linux #BashScripting #chmod #CloudEngineering
KodeKloud
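The before/after above can be reproduced end-to-end on a scratch file (the filename here is arbitrary; the real task used /tmp/xfusioncorp.sh):

```shell
# Make a throwaway script in the current directory
script=./perm_demo.sh
printf '#!/bin/bash\necho backup done\n' > "$script"

chmod 644 "$script"          # start at -rw-r--r--, like the broken backup script
stat -c '%a %A' "$script"    # prints: 644 -rw-r--r--

chmod a+x "$script"          # add execute for all; identical to chmod 755 here
stat -c '%a %A' "$script"    # prints: 755 -rwxr-xr-x

"$script"                    # now it runs and prints "backup done"
rm -f "$script"
```

Note how `a+x` and `755` meet at the same result only because the starting point was 644; symbolic modes are relative, numeric modes are absolute.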
🚀 Day 12: Linux Internals for DevOps Engineers (Advanced)
👉 Disk Issues in Production (not as simple as you think)

Most people think:
❌ Disk full → delete files → done

But real production issues are more complex. Today I explored how engineers actually debug disk-related failures.

📌 What I learned:
🔹 `df -h` shows disk usage per filesystem
🔹 `du -sh` helps trace large directories
🔹 Sometimes deleted files still occupy space (hidden usage)
🔹 Log rotation is critical to prevent repeated failures

💡 Real Scenario: the disk shows 100% usage… but you can't find any large files. Why?
👉 Because deleted files are still held open by running processes.

Solution:
✔ Use `lsof | grep deleted` to find the process holding the file
✔ Restart that process (or have it close the file descriptor) to release the space

This is something most beginners don't know.

🧠 Question for you: have you ever faced a situation where the disk was full but you couldn't find the files causing it? 👇 Would love to know your experience!

🎯 Learning Goal: to debug storage issues deeply and prevent recurring failures.

📅 Day 13 Tomorrow: Networking Basics (IP, Ports, DNS). Let's keep going deeper 🚀

#DevOps #Linux #SRE #Storage #CloudComputing #SoftwareEngineering #TechLearning #LearningInPublic #ITCareers #EngineeringMindset #CareerGrowth #ProductionIssues
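The "deleted but still held" situation is easy to reproduce safely. A minimal sketch using a scratch file and the shell's own process (Linux-only — it relies on /proc):

```shell
# Open a file on fd 3, write to it, then unlink it
tmp=$(mktemp)
exec 3>"$tmp"
echo "some log data" >&3
rm "$tmp"                 # ls/du can no longer see it...

# ...but the kernel keeps the inode (and its blocks) alive while fd 3 is open
ls -l /proc/$$/fd/3       # the symlink target ends in "(deleted)"

# In production, lsof | grep deleted points at the guilty process;
# restarting it is the blunt fix. Closing the fd is what actually frees space:
exec 3>&-
```

This is exactly why `df` and `du` can disagree: `du` walks the directory tree, while `df` asks the filesystem for allocated blocks, including those of unlinked-but-open files.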
DevOps can look very polished from the outside.
• Cloud dashboards
• Automated pipelines
• Clean web interfaces
• Seamless deployments

Everything feels fast, modern, and under control 🚀

Until production breaks. And then… everything shifts back to fundamentals:
• SSH into servers
• Dig through /var/log
• Run Linux commands to trace issues
• Write quick Bash scripts to patch things up

That's when the reality becomes clear: no matter how advanced the stack is, it still runs on
• Linux
• Bash
• CLI tools

These aren't flashy. They don't have dashboards. But they are the backbone of everything we build.

At the end of the day, when systems fail, it's not the UI that saves you; it's your fundamentals.

Takeaway: you can ignore Linux and Bash early on, but in real-world DevOps… the terminal is inevitable.

#DevOps #Linux #Bash #CloudComputing #AWS #Automation #CloudEngineer #TechJourney
🚀 My First Step into DevOps: Nginx Deployment on Linux

Taking my project beyond just development, I successfully deployed my personal portfolio (HTML, CSS, JS) on a Linux server using Nginx, experiencing first-hand how applications actually run in real environments.

🛠️ What I implemented & learned:
• Installed and configured Nginx
• Set up custom server blocks (virtual hosts)
• Deployed the project on a custom port (81)
• Managed website files inside /var/www/
• Linked configurations from sites-available into sites-enabled
• Resolved permission issues using Linux commands

💡 Why this matters: this hands-on practice helped me understand how web servers handle requests, how deployments work in real-world scenarios, and how important configuration & permissions are when hosting applications. It wasn't just about running a project; it was about learning how applications actually live and perform on servers.

Big thanks to my mentor Nabeel Hassan for the continuous guidance and support 🙌

📈 This is just the beginning. Aiming next for CI/CD, automation, Docker, and cloud platforms like AWS.

#DevOps #DevOpsJourney #Linux #Ubuntu #Nginx #WebServer #Deployment #CloudComputing #AWS #Docker #CI_CD #Automation #Infrastructure #ServerManagement #SystemAdmin #WSL #LearningByDoing #FutureEngineer #TechJourney #PortfolioProject #Backend #EngineeringLife #CloudJourney #ITSkills #DevOpsEngineer
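A server block for the port-81 setup described above might look roughly like this (the root path and server_name are illustrative, not the actual project's values):

```nginx
server {
    listen 81;                    # custom port instead of the default 80
    server_name _;                # placeholder; a real domain goes here

    root /var/www/portfolio;      # illustrative path under /var/www/
    index index.html;

    location / {
        try_files $uri $uri/ =404;   # serve static files, 404 otherwise
    }
}
```

On Debian/Ubuntu-style layouts this file lives in /etc/nginx/sites-available/ with a symlink into sites-enabled/; `sudo nginx -t` validates the config before `sudo systemctl reload nginx` applies it.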
DevOps often looks impressive from the outside.
• Cloud dashboards
• Automation scripts
• Modern web interfaces
• CI/CD pipelines

Everything appears fast, advanced, and seamless 🚀

But when something breaks in production… we go back to the basics:
• Connecting to servers via SSH
• Analyzing logs in /var/log
• Running Linux commands to debug issues
• Writing quick Bash scripts to resolve problems

And that's when an important realization hits: behind every sophisticated cloud platform and automation tool, there are still core fundamentals:
• Linux
• Bash
• CLI tools

Quietly powering the entire system. No fancy UI. No colorful dashboards. Just a terminal… and the knowledge to use it effectively.

Lesson learned: you might overlook Linux and Bash at the beginning, but sooner or later the terminal becomes unavoidable.

#DevOps #Linux #Bash #CloudComputing #AWS #Automation #CloudEngineer #TechJourney
Writing backend code feels clean until you touch a real server. Another step in building in public. This week I moved from theory to actual servers and things got very real, very fast. Disabled root SSH access, fixed file permissions with chmod, managed users and groups, and started using systemd and journalctl to understand what’s actually happening under the hood. Also set up AWS key pairs and security groups, and got Docker running on a CentOS machine. Biggest realization: writing code is only half the job. If you can’t run and secure it, it doesn’t matter. Follow along — more coming next week #linux #aws #devops #buildinpublic
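On the "disabled root SSH access" step: it's worth rehearsing the edit on a copy before touching the real /etc/ssh/sshd_config. A sketch (the config lines below are a typical default, not this server's actual file):

```shell
# Rehearse the change on a scratch copy; never sed the live file
# without a backup and a second open SSH session as a safety net
cfg=$(mktemp)
printf '#PermitRootLogin prohibit-password\nPasswordAuthentication yes\n' > "$cfg"

# Uncomment-or-replace the directive and force it to "no"
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' "$cfg"
grep '^PermitRootLogin' "$cfg"     # prints: PermitRootLogin no

# On the real server the follow-up would be:
#   sudo sshd -t && sudo systemctl restart sshd
#   journalctl -u sshd --since "5 min ago"   # confirm it came back clean
rm -f "$cfg"
```

`sshd -t` is the part people skip: it validates the config before the restart, so a typo can't lock you out of the box.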
Ever wondered what actually powers your containers when you strip away the Docker magic? 🪄 ➡️ ⚙️

We know containerd runs in the background of almost every modern Kubernetes node, but installing it manually from scratch is the best way to reveal exactly how the engine ticks.

I just published part three of my deep-dive series: "How to Install and Configure containerd on a Linux Server."

We are dropping the training wheels. No pre-packaged Docker engines. We are pulling the raw binaries and wiring up the decoupled stack entirely by hand.

In this hands-on guide, you will learn how to:
✅ Extract and install the core components (the ctr CLI, the containerd daemon, and the OCI shim).
✅ Daemonize the container engine using systemd.
✅ Plug in a low-level OCI container runtime (runc).
✅ Install and configure CNI plugins (bridge, loopback) to establish container networking.

If you are a Platform Engineer, SRE, or DevOps professional wanting a crystal-clear mental model of your container architecture, this one is for you.

Check out the full guide here on Systems and Signals: 👉 https://lnkd.in/dZ-z_SeP

#Containerd #Kubernetes #Linux #CloudOps #SRE #DevOps #PlatformEngineering #SystemsAndSignals #TechTutorial
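For the "daemonize with systemd" step, the glue is a unit file. A minimal sketch of what such a containerd.service could look like — note the upstream containerd project ships a fuller official unit (with modprobe and OOM settings), which is the one to prefer in practice:

```ini
[Unit]
Description=containerd container runtime
After=network.target

[Service]
ExecStart=/usr/local/bin/containerd
Restart=always
Delegate=yes                  # let containerd manage its own cgroup subtree
KillMode=process              # restart the daemon without killing container shims

[Install]
WantedBy=multi-user.target
```

`Delegate=yes` and `KillMode=process` are the two lines that matter most here: without them, restarting the daemon would tear down every running container's shim along with it.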
Most DevOps engineers know how to use Linux. Almost none of them know how Linux works.

Tell me if this sounds familiar: you can SSH into a server. You can run commands. You can read logs. But if I ask you, "What actually happens between the moment you press Enter and the moment that process starts running?" most people go quiet.

That's not a gap in your Linux knowledge. That's a gap in your Linux understanding.

At Nixace, we teach both. You learn the commands, but you also learn the kernel scheduler. The VFS layer. How processes inherit file descriptors. Why fork() works the way it does. What a signal actually is, not just how to send one.

Why does this matter for DevOps? Because when things break (and in production, things always break), the engineer who survives is the one who understands what's underneath. The one who doesn't is the one filing a ticket saying "server seems slow."

Linux internals, Linux kernel, process management, system calls, file system architecture, DevOps engineering, cloud infrastructure, debugging production issues, performance tuning, backend systems, distributed systems

#LinuxInternals #DevOps #CloudEngineering #SystemsThinking #NixaceTraining
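The press-Enter question has a compact demo: the shell fork()s a child and exec()s the binary inside it, so the child's parent PID is the shell itself. A rough sketch from a plain shell (strace is the tool for watching the real syscalls):

```shell
# The shell's own PID
echo "shell: $$"

# Running a command = fork a child, then exec the binary in it.
# Run directly like this, the child's PPID is the shell's PID:
bash -c 'echo "child: $$ (parent: $PPID)"'

# To see the raw syscalls behind "press Enter -> process runs":
#   strace -f -e trace=clone,execve,wait4 bash -c true
```

The fork/exec split is also why redirections work: the shell sets up file descriptors in the child after fork() but before execve(), so the new program simply inherits them.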
🚀 From Zero to 60+ Linux Commands

Since the end of March and throughout April, I focused on building a strong foundation in Linux and mastering the top 10 AWS services, from basics to advanced concepts.

🔧 Here are some of the most practical commands I worked with:
📁 File Management: ls, cd, pwd, cp, mv, rm, touch, mkdir, rmdir
🔐 Permissions & Ownership: chmod, chown, chgrp, umask
👤 User Management: useradd, usermod, userdel, passwd, whoami, id
📊 Monitoring & System Info: top, htop, ps aux, ps -ef, df -h, du -sh, free -m, uptime, dmesg, lsblk, lshw
⚙️ Service Management (systemctl): systemctl start, systemctl stop, systemctl restart, systemctl status
📦 Archiving & Compression: tar -cvf, tar -xvf
🔍 Search & Text Processing: grep, find, locate, cat, less, head, tail, wc
❌ Process Control: kill, kill -9

💡 This journey helped me understand how Linux powers real-world systems, especially in DevOps and cloud environments.

❓ Question for Linux/system admins: when debugging an OOM (Out Of Memory) issue, I've been using grep on logs (like dmesg or /var/log/syslog) to find OOM kill messages. 👉 Is this the right approach in real-world scenarios, or are there better tools/commands you prefer? Would love to learn how it's handled in production environments 🙌

Big thanks to Vikas Ratnawat and the CloudDevOpsHub community for mentoring. 😀

Tomorrow I will be sharing my automation & shell scripting project deployed on Google Cloud. Consistency over everything. Still learning, still building.

#Linux #DevOps #CloudComputing #ShellScripting #Automation #LearningJourney #TechSkills
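On the OOM question above: grepping kernel logs is indeed the standard first pass (`journalctl -k` is often nicer than raw /var/log/syslog since it filters to kernel messages). A self-contained sketch against a fabricated sample line, since real OOM-killer output varies by kernel version:

```shell
# Sample line in the shape the OOM killer emits (fabricated for this demo)
log='Out of memory: Killed process 4321 (java) total-vm:8388608kB, anon-rss:4194304kB'

# The usual first-pass filters on a real box would be:
#   dmesg -T | grep -iE 'out of memory|oom-killer|killed process'
#   journalctl -k --since today | grep -i oom
echo "$log" | grep -iE 'out of memory|killed process'

# Pull out the victim's PID and process name from the message
echo "$log" | sed -E 's/.*Killed process ([0-9]+) \(([^)]+)\).*/pid=\1 name=\2/'
# prints: pid=4321 name=java
```

Beyond grep, `journalctl -k -p err` and tools like earlyoom/systemd-oomd come up in production, but reading the kill message itself (which process, how much anon-rss) is still the core skill.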