Day 28/100: Bundling It Up – Archiving & Compression in Linux 🗜️📦

Today’s Focus: In a real-world DevOps environment, servers generate massive amounts of log files and data (like the Jenkins logs I was working with today!). To save disk space and speed up file transfers over the network, I learned how to properly archive and compress directories from the Linux CLI.

🛠️ The Tools I Practiced: bundling multiple files into a single archive and applying compression algorithms to shrink their size.

The mighty tar (Tape Archive), the standard Linux archiving tool:
• tar -czvf archive.tar.gz [directory]: creates a new archive (-c), compresses it with gzip (-z), lists each file as it is processed (-v), and writes the output to the named file (-f).
• tar -xzvf archive.tar.gz: the exact opposite! This extracts (-x) the compressed tarball back into a usable directory.
I also explored the manual for other compression options, like -j for bzip2 and -J for xz.

zip & unzip: I also practiced the standard zip -r command to recursively compress a directory. While tar is the Linux standard, zip is incredibly useful when I need to share artifacts with Windows environments!

Why It Matters: Whether it is backing up application configurations, rotating system logs, or packaging up a build artifact to deploy to a server, we rarely move raw directories around. Compressing everything into a single "tarball" saves bandwidth, storage, and time! ⏳

#100DaysOfDevOps #100DaysOfCode #Linux #Tarball #SysAdmin #CentOS #Vagrant #CLI #DevOpsEngineer #TechJourney #DailyProgress #CloudComputing
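A minimal sketch of the round trip described above (the archive and directory names are illustrative placeholders, not from any real project):

# Create a gzip-compressed archive of a directory
tar -czvf jenkins-logs.tar.gz app-logs/

# Peek inside without extracting
tar -tzvf jenkins-logs.tar.gz

# Extract it back out
tar -xzvf jenkins-logs.tar.gz

# The same directory as a Windows-friendly zip
zip -r jenkins-logs.zip app-logs/
unzip jenkins-logs.zip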
Archiving & Compression in Linux with tar and zip
More Relevant Posts
I spent time debugging what looked like a very simple Linux command: moving a folder. But it kept failing with:

mv: cannot stat 'templatemo_604_christmas_piano': No such file or directory

At first, it didn’t make sense. I had already uploaded the website theme to my server, so why couldn’t the system find it?

What was really happening? The issue wasn’t the mv command; it was context. In Linux, every command depends on your current working directory and on the file actually existing at that path. After checking with ls, I realized something important: the file wasn’t in /home/ubuntu. It either hadn’t been uploaded correctly or was sitting in a different directory.

How I solved it:
1. Verified the file’s location using ls and directory navigation.
2. Confirmed the correct path before running the move command.
3. Ensured the upload process had actually placed the file where expected.
4. Used absolute paths when necessary to avoid ambiguity.

Sometimes the problem isn’t the command; it’s assuming the file exists where you think it does. In DevOps and cloud engineering, small assumptions can lead to hours of debugging. If a simple file move can fail because of context and visibility, how many deployment failures in production are caused by wrong assumptions rather than complex system issues?

What’s the simplest command that has ever cost you the most debugging time?

#DevOps #Linux #CloudEngineering #AWS #Troubleshooting #SoftwareEngineering #TechLearning #Infrastructure #EngineeringMindset
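A sketch of that verification flow (the theme directory name is from the post; the destination path is a hypothetical example):

# Where am I, and what is actually here?
pwd
ls -l

# Search for the directory by name instead of assuming its location
find /home -type d -name 'templatemo_604_christmas_piano' 2>/dev/null

# Once the real location is confirmed, move using absolute paths
mv /home/ubuntu/templatemo_604_christmas_piano /var/www/html/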
🚨 Linux Troubleshooting: Disk Space Full Issue

One of the most common real-time issues in production environments is disk space getting full, and many engineers panic or restart services without checking the root cause.

🔍 Here’s how I handle it step by step:
✅ Check disk usage: df -h
✅ Identify large directories: du -h --max-depth=1 / | sort -rh
✅ Investigate common causes:
• Log files (/var/log)
• Docker storage
• Cache files
• Large backups/dumps

🧹 Fix safely:
• Clean logs
• Clear cache
• Remove unnecessary files
• Prune Docker

⚠️ Hidden issue: sometimes files are deleted but still held open by a process: lsof | grep deleted

💡 Prevention is key:
• Enable log rotation
• Monitor disk usage regularly
• Set up alerts

🎯 Key Learning: Don’t just fix the issue; understand the root cause and prevent it from happening again. (A hedged command sketch follows below.)

#Linux #DevOps #AWS #SysAdmin #CloudComputing #Troubleshooting #Learning #ITJobs #LinuxAdministration #SystemAdministration #LinuxTips
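A minimal triage sketch following the steps above (the log file name is a hypothetical example; adjust cleanup commands to your environment before running them):

# 1. Which filesystem is full?
df -h

# 2. Which top-level directory is eating the space?
du -h --max-depth=1 / 2>/dev/null | sort -rh | head

# 3. Any deleted-but-still-open files holding space hostage?
lsof +L1        # lists open files with link count 0; or: lsof | grep deleted

# 4. Reclaim space safely (examples only)
truncate -s 0 /var/log/big-app.log   # empty a log without breaking the writing process
docker system prune                  # remove unused Docker data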
Linux From Scratch has been called impractical for years. Fine. Of course it is. It takes at least forty hours, and that is assuming things go reasonably well. When you finish, there is no package manager waiting to rescue you. No clean update path. No layer of polish smoothing over the hard parts. Every binary on that machine exists because you compiled it. Every configuration file exists because you wrote it. That much is obvious.

The real question is whether that matters. It does. More than most people are willing to say out loud. Not because LFS belongs on a production server. It does not. Not because it is the smartest way to run a modern environment. It is not. That misses the point completely. The point is what it does to your understanding.

Because there is a kind of knowledge that only comes from doing something the hard way, from first principles, with your own hands. You do not get that from browsing a wiki. You do not get it from skimming documentation and nodding along. You get it by getting stuck. By getting it wrong. By sitting in a chroot at two in the morning, staring at a kernel that refuses to compile, and staying there until the system finally makes sense.

That experience does something documentation alone cannot do. It burns the lesson in. It forces the abstractions to fall away. It turns Linux from a product you use into a system you actually understand. And once you have that, you keep it.

Read more here: https://lnkd.in/gtiUeRhb

#Linux #LinuxFromScratch #OpenSource #SystemsEngineering #DevOps #Infrastructure #SoftwareEngineering #OperatingSystems #LearnByDoing #LFS
I think this is the future of recruiting and junior employee training in the age of AI. It takes a long time and hard work to learn and gain experience, but now AI means that anyone can take the shortcut. The problem is that the real value (experience) is lost. This is something we will need to replace.

Programs like this will become an important part of any new employee training program: a way to quickly, but deeply, give people practical experience that advances them years ahead. The output is not the goal; it will be discarded. The goal is the experience and knowledge that the human gains.

This is a very different way to think, and a different way to value human growth. We need to invest in training people and growing their ability to do the things that AI can't do. That takes time, but the results are necessary.
💾 Understanding LVM in Linux (PV, VG, LV), with a simple example

If you’ve ever struggled with disk management in Linux, learning LVM (Logical Volume Manager) is a game changer. 👉 Let’s break it down:

🔹 PV (Physical Volume): your actual disk or partition. Example: /dev/sda1, /dev/sdb1
🔹 VG (Volume Group): a pool of storage created by combining multiple PVs. Example: combine /dev/sda1 + /dev/sdb1 → vg_data
🔹 LV (Logical Volume): the usable partition created from the VG (like a flexible partition). Example: lv_data created inside vg_data

🧠 How it works together: PV → VG → LV → Filesystem

🚀 Real example (commands):
1️⃣ Create physical volumes: pvcreate /dev/sda1 /dev/sdb1
2️⃣ Create a volume group: vgcreate vg_data /dev/sda1 /dev/sdb1
3️⃣ Create a 10GB logical volume: lvcreate -L 10G -n lv_data vg_data
4️⃣ Format and mount: mkfs.ext4 /dev/vg_data/lv_data, then mount /dev/vg_data/lv_data /mnt/data

🔥 Why LVM is powerful:
✔ Resize storage without downtime
✔ Combine multiple disks easily
✔ Take snapshots
✔ Flexible and scalable

💡 Example use case: you start with 10GB and later need 20GB? Just extend the LV without touching your data (see the sketch below)!

#Linux #DevOps #LVM #SystemAdministration #CloudComputing #TechLearning #OpenSource #Backend #Infrastructure
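The "extend without touching data" step might look like this, assuming the ext4 filesystem and volume names from the example above and free space remaining in the volume group:

# Grow the logical volume by another 10G
lvextend -L +10G /dev/vg_data/lv_data

# Grow the ext4 filesystem to fill the new space (works online for ext4)
resize2fs /dev/vg_data/lv_data

# Or do both in one step with lvextend's -r (resize filesystem) flag
lvextend -r -L +10G /dev/vg_data/lv_data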
Started with Linux file management. Today I spent time going deeper into file handling, not just the basic commands.

Worked through some permission edge cases: changing ownership vs. group, and how that actually affects access in multi-user scenarios. Used chown and chgrp properly instead of treating them the same.

chmod with numeric values makes more sense now when thinking in terms of 4, 2, 1 (read, write, execute); way better than guessing 755/644.

umask finally clicked: it’s not setting permissions, it’s removing bits from the defaults. So new file/directory permissions don’t feel random anymore.

Also tested the differences between copying and moving files: how permissions and structure behave during the operations, and how recursive (-r) actually applies to nested directories.

The compression part got clearer after trying it:
- tar for bundling
- then compression like gzip on top of it
So the flow is archive → compress, not mixing both (see the sketch below).

Also tried moving files across locations and handling structure changes. With that, I pushed all the raw notes to a GitHub repo for future reference: https://lnkd.in/gAMG76Q2

#devops #linux
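A quick sketch of the archive → compress flow and the umask point (the notes/ directory is a hypothetical example):

# Two explicit steps: archive first, then compress
tar -cf notes.tar notes/
gzip notes.tar          # produces notes.tar.gz

# tar's -z flag collapses both steps into one
tar -czf notes.tar.gz notes/

# umask removes bits from the defaults (666 for files, 777 for directories)
umask 022               # new files: 644, new directories: 755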
Deep Dive: Linux System Administration & Troubleshooting

I just wrapped up an intensive session on Linux system administration, focusing on the "nuts and bolts" of how systems start, run, and fail. Whether you’re managing a single server or a massive cloud infrastructure, mastering these fundamentals is non-negotiable. Here is a breakdown of the key takeaways:

🚀 The Linux Boot Sequence
Understanding how a machine goes from "off" to "ready" is the first step in troubleshooting boot failures:
1. BIOS/UEFI: hardware check (POST).
2. Bootloader (GRUB): selects the OS kernel.
3. Kernel: initializes hardware drivers and gathers system info.
4. Init (systemd): starts system services and startup scripts.
5. User interface: the shell or GUI becomes available.

🛠️ The SysAdmin’s Swiss Army Knife
The essential commands every engineer should have in muscle memory:
• Performance: htop, free -h, ps -ef
• Storage: df -h, lsblk
• Hardware: lshw, cat /proc/cpuinfo, dmesg
• Services: systemctl status/start/enable
• Cleanup: kill -15 (graceful SIGTERM) vs. kill -9 (forceful SIGKILL)

🌐 Beyond the Terminal: Web Troubleshooting
Troubleshooting doesn't stop at the OS level. We explored how to bridge the gap between the server and the browser:
• HTTP status codes: knowing your 400s (client errors) from your 500s (server meltdowns) is crucial for quick triage.
• Network tab: using browser DevTools to identify slow-loading resources or failed API calls in real time.

💡 Key Interview Insight: "How do you handle a slow website?" It’s not just about the code; you have to look at the full stack:
1. Check the Network tab for resource bottlenecks.
2. Inspect system logs (/var/log/syslog) for kernel or service errors.
3. Monitor resource usage (df -h, htop) to see if the disk is full or the CPU is pinned.
(A hedged command sketch follows below.)

Linux is the backbone of the modern web, and getting under the hood today was an absolute blast! 💻

#Linux #SystemAdministration #DevOps #TechLearning #Troubleshooting #SysAdmin #WebDevelopment #CloudComputing Vikas Ratnawat CloudDevOpsHub Community
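A minimal slow-website triage sketch along those lines (the URL is a placeholder; log paths vary by distribution):

# Is the disk full or the CPU pinned?
df -h
htop    # or: top

# Anything alarming in the system logs?
tail -n 50 /var/log/syslog   # Debian/Ubuntu; RHEL-family systems use /var/log/messages
dmesg | tail

# What status code and timing does the web server actually return?
curl -o /dev/null -s -w "status=%{http_code} total=%{time_total}s\n" https://example.com/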
Day 19/100: Decoding Linux File Types 🗂️

Today’s Focus: In Linux, the phrase "everything is a file" is taken literally. Even hardware components, directories, and processes are treated as files! Today, I explored how to identify different file types by looking at the very first character in the ls -l command output.

🔍 The Linux File Type Breakdown: when you list files with detailed permissions, that first letter tells you exactly what you are dealing with:
• - (Regular file): your standard files. This includes text files (like my Vim files from yesterday!), scripts, images, and binary executables.
• d (Directory): a folder containing other files or directories.
• c (Character device): hardware components that transfer data character by character (unbuffered). This represents devices like your keyboard, mouse, or system terminals.
• b (Block device): hardware components that transfer data in bulk "blocks" (buffered). Think of storage components like hard drives, SSDs, or USB drives.
• l (Symbolic link): a shortcut or pointer that links to another file or directory on the system.

Why This Matters: As a DevOps engineer or SysAdmin, knowing how to instantly recognize whether you are looking at a system directory, a raw hard disk (b), or a terminal interface (c) is essential for secure system administration and troubleshooting. (An illustrative listing follows below.)

#100DaysOfDevOps #100DaysOfCode #Linux #Vagrant #CLI #SysAdmin #DevOpsEngineer #TechJourney #DailyProgress #CloudComputing #LinuxCommands
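An illustrative listing showing each first character in the wild (output details will differ on your system; on many distros /bin/sh is a symlink to dash or bash):

ls -l /etc/hostname /dev/sda /dev/tty /bin/sh

# Sample output (illustrative):
# -rw-r--r-- 1 root root ... /etc/hostname    <- regular file
# brw-rw---- 1 root disk ... /dev/sda         <- block device (disk)
# crw-rw-rw- 1 root tty  ... /dev/tty         <- character device (terminal)
# lrwxrwxrwx 1 root root ... /bin/sh -> dash  <- symbolic link
# A directory such as /etc would start with d: drwxr-xr-x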