Day 20/100: Mastering Text Filtering with Grep in Linux 🔎

Today’s Focus: As my Linux environments grow, sifting through massive configuration files and logs line by line is no longer an option. Today I unlocked one of the most powerful and famous tools in a SysAdmin's arsenal: data filtering with grep (Global Regular Expression Print).

🛠️ The Commands I Added to My Toolkit:

grep lets you search for specific patterns of text within files. Here is how I am using it to instantly find exactly what I need:

grep "pattern" filename: The standard command to search for a specific word or string inside a file. It outputs the entire line where the match is found.

grep -i (Case-Insensitive): Linux is highly case-sensitive. Adding the -i flag ensures I find my target whether it is written as "Error", "ERROR", or "error".

grep -r or -R (Recursive Search): Instead of searching a single file, this flag searches an entire directory and all of its subdirectories to hunt down a specific string.

grep -iR (The Ultimate Combo): Combining these flags lets me run a case-insensitive search across a massive directory structure. This is absolutely perfect for hunting down hidden configuration across my Vagrant environments!

Why It Matters: When a server misbehaves or a pipeline fails, finding the root cause hidden inside thousands of lines of system logs is like finding a needle in a haystack. grep is the magnet that pulls the needle right out! 🧲

#100DaysOfDevOps #100DaysOfCode #Linux #Grep #Vagrant #CLI #SysAdmin #DevOpsEngineer #TechJourney #DailyProgress #CloudComputing #LinuxCommands
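A quick way to try all three flags is a throwaway sandbox — the directory, file name, and log lines below are invented purely for illustration:

```shell
# Throwaway sandbox: the paths and log lines are made up for illustration.
demo=$(mktemp -d)
mkdir -p "$demo/conf"
printf 'Error: disk full\nerror: retrying\nall OK\n' > "$demo/conf/app.log"

grep "Error" "$demo/conf/app.log"     # exact case: matches only "Error: disk full"
grep -i "error" "$demo/conf/app.log"  # -i: matches both "Error" and "error"
grep -iR "error" "$demo"              # -iR: case-insensitive across the whole tree
```

Running it shows one match for the exact-case search and two for the case-insensitive variants, which is exactly the difference the -i flag makes.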
🚨 Disk suddenly full in Linux? Here’s how I troubleshoot it

One moment everything is fine…
Next moment → “Disk Full” error 😓

Here’s the simple step-by-step approach I follow 👇

🔍 1. Check disk usage
`df -h`
👉 Identify which partition is full
👉 Usually `/` or `/var`

📦 2. Find large directories
`du -sh /*`
👉 Quickly see which folder is consuming space

📂 3. Drill down further
`du -sh /var/*`
👉 Usually logs or application data

📜 4. Check logs in `/var/log`
👉 Logs can grow very fast
👉 Clear old or unnecessary logs

⚠️ 5. Check for deleted files still in use
`lsof | grep deleted`
👉 Space is not released while a running process still holds the file open

📄 6. Check inodes
`df -i`
👉 Too many small files can also fill a disk

🧠 Real mindset:
Don’t randomly delete files ❌
Find the root cause first ✅

💡 Final thought:
Disk issues are common, but panic is optional.
Stay calm, follow the steps, and fix smartly 🚀

#Linux #LinuxAdmin #DevOps #Troubleshooting #CloudComputing #SystemAdministration #LearningInPublic #ITInfrastructure
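The first steps of this checklist chain together into one read-only triage snippet — it only inspects, never deletes, so it is safe to run anywhere:

```shell
# Read-only disk triage: inspect, never delete.
df -h /                                          # step 1: how full is the root filesystem?
du -sh /var/* 2>/dev/null | sort -rh | head -5   # steps 2-3: biggest directories under /var
df -i /                                          # step 6: inode usage (too many tiny files?)
```

The `sort -rh | head -5` trick (human-readable reverse sort) surfaces the worst offenders first, so you rarely need to scroll.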
Hyperfine is a powerful command-line benchmarking tool that helps users measure the execution time of commands accurately across multiple runs. I found it interesting that this tool not only provides timing data but also aids in optimizing command performance. It’s essential for anyone looking to enhance their efficiency in Linux environments. How do you assess the performance of your command-line tools? Read more: https://lnkd.in/d9s57A9Y
Two identical files.
Delete one… nothing happens.
Delete the other… everything breaks.

That’s the difference between hard links and symbolic links (soft links) in Linux.

A file isn’t actually the data itself. It’s just a reference to where that data lives on disk (an inode). An inode is basically the metadata that points to the file’s actual data.

A hard link points directly to that inode. It’s basically another name for the same file. Delete one, and the data is still there. It only disappears when all links are gone.

A soft link is different. It points to the file path, not the inode. So it’s more like a shortcut. If the original file is deleted, the link breaks.

It’s a small concept, but it catches a lot of people out. Especially when you’re debugging something and realise you’ve been deleting the wrong “file”.

What’s a Linux concept that didn’t click at first, but suddenly made sense later?

#Linux #DevOps #CloudEngineering
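The "delete one… nothing happens, delete the other… everything breaks" behaviour takes about a minute to reproduce in a throwaway directory (file names are just examples):

```shell
set -e
cd "$(mktemp -d)"                    # throwaway directory, safe to experiment in
echo "important data" > original.txt

ln    original.txt hard.txt          # hard link: a second name for the same inode
ln -s original.txt soft.txt          # soft link: a pointer to the *path* original.txt

rm original.txt                      # delete the "original" name

cat hard.txt                         # still works: the inode has one name left
cat soft.txt || echo "broken link"   # fails: the path it pointed to is gone
```

`ls -li` in that directory also shows the hard link sharing the original's inode number, while the soft link gets its own.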
🐧 Linux File System Hierarchy — Simplified

When I started Linux, the file system confused me. Here's what I wish I knew earlier. 👇

🌳 / — This is where everything begins. Think of it like C:\ on Windows, but cleaner.
⚡ /bin — Your everyday commands live here. ls, cp, mv, grep — you'll type these hundreds of times.
🚀 /boot — The files that wake Linux up. The kernel and bootloader sit here. Best not to touch it.
🔌 /dev — In Linux, every device is treated as a file. Your hard disk? /dev/sda. USB? /dev/sdb.
⚙️ /etc — All your system settings live here. nginx, user accounts, cron jobs — all configured from this folder.
🏠 /home — This is your space. Everything personal — your downloads, documents, and dotfiles — lives at /home/yourname.
📚 /lib — The shared libraries your programs quietly depend on to run. Similar to DLLs on Windows.
📡 /proc — A live window into your system. Nothing here is a real file — the kernel generates it all on the fly.
🗑️ /tmp — Temporary files that don't survive a reboot. Use it for scratch work, never for anything important.
💽 /usr — Where your installed software lives. Programs in /usr/bin, libraries in /usr/lib, docs in /usr/share.
📊 /var — Data that keeps changing while your system runs. Logs, cache, web files — all land here.
👑 /root — The home folder of the root user. Not the same as /. Only the admin gets in here.
🔧 /sbin — Powerful admin commands like fdisk, reboot, and iptables. Regular users can't run these.
💾 /mnt — Where external drives and USBs show up once you mount them.
📦 /opt — Third-party software that isn't part of the OS. Chrome and VS Code often install here.

💡 Server down?
🔴 Check /var/log
🟡 Fix /etc/nginx.conf
🟢 Restart via /sbin

Once you see the pattern, Linux makes total sense.

📌 Save this. ♻️ Repost if this helped you!

#Linux #DevOps #SysAdmin #CloudComputing #LearningLinux
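A nice way to make this map concrete is to tour a few of these directories on your own machine — every path below is standard on mainstream distributions:

```shell
# Peek at a handful of the directories described above.
for d in /etc /var/log /usr/bin /tmp; do
  ls -ld "$d"    # -d: show the directory entry itself, not its contents
done
```

The leading `d` in each permissions string confirms these are directories, and the differing owners and modes (e.g. the sticky bit on /tmp) hint at each directory's role.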
🚨 Permission denied error in Linux? Here’s how I debug it

You try to access a file or run a command…
And get: “Permission denied” 😓

It's a very common issue in Linux. Instead of guessing, here’s a simple approach 👇

🔍 1. Check file permissions
`ls -l file.txt`
👉 Example output: `-rw-r--r--`
👉 Read it in three blocks: owner, group, others

👤 2. Check file ownership
`ls -l`
👉 Maybe you are not the owner

🔑 3. Fix permissions (if needed)
`chmod 755 file.sh`
👉 Adds execute permission

👥 4. Change ownership
`chown user:user file.txt`
👉 Assign the correct owner (and group)

⚙️ 5. Check sudo access
👉 Need elevated permissions? `sudo command`

📂 6. Check directory permissions
👉 Even if the file is fine, the parent directory may block access

💡 Common reasons:
* No execute permission
* Wrong owner
* Restricted directory
* Missing sudo rights

🧠 Real mindset:
Don’t just run sudo everywhere ❌
Understand why permission is denied ✅

💡 Final thought:
Permissions are security in Linux.
Once you understand them, half of your problems get solved 🚀

#Linux #LinuxAdmin #DevOps #Troubleshooting #CloudComputing #SystemAdministration #LearningInPublic #ITInfrastructure
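Steps 1 and 3 can be reproduced end-to-end on a scratch file (the script name and contents are invented for the demo):

```shell
set -e
cd "$(mktemp -d)"
printf '#!/bin/sh\necho hello\n' > script.sh

ls -l script.sh                  # -rw-r--r-- : no execute bit yet
./script.sh || echo "Permission denied, as expected"

chmod 755 script.sh              # rwxr-xr-x : owner can now execute
ls -l script.sh
./script.sh                      # runs and prints "hello"
```

Seeing the mode string change from `-rw-r--r--` to `-rwxr-xr-x` makes the connection between `ls -l` output and `chmod` digits click.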
Mastering File Management & Permissions in Linux! 🐧

Building on yesterday’s basics, today I dived deeper into how to actually manipulate files and into the security layer that makes Linux so robust: File Permissions. System administration is not just about knowing where you are, but knowing how to manage data safely and efficiently.

Here are the core concepts I tackled today:

touch: The quickest way to create a new, empty file.
cp (Copy): Cloning files or directories. Using -r for recursive copying is a lifesaver for folders!
mv (Move/Rename): This command does double duty: moving files to different locations or simply renaming them.
rm (Remove): Deleting files. (Note to self: use rm -rf with extreme caution! ⚠️)
chmod (Change Mode): The gateway to Linux security. Understanding read (4), write (2), and execute (1) is fascinating.

Learning to manage who can read or modify a file is a fundamental skill for any SysAdmin. It’s all about control and security from the ground up.

Question for the community: What’s your best practice for managing file permissions? Do you prefer symbolic (u+x) or numeric (755) notation? Let’s discuss! 👇

#Linux #SystemAdministration #FileSecurity #LearningJourney #TechSkills #OpenSource #Day2 #SysAdminJourney
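All five commands fit into one scratch-directory run (the file and directory names are just examples):

```shell
set -e
cd "$(mktemp -d)"               # scratch directory, safe to experiment in

touch notes.txt                 # create an empty file
mkdir project
cp notes.txt project/           # copy a file into a directory
cp -r project backup            # -r: recursively copy the whole directory
mv notes.txt todo.txt           # mv doubles as rename
chmod 644 todo.txt              # 6=4+2 (rw-) owner, 4 (r--) group, 4 (r--) others
rm todo.txt                     # plain rm is enough for a single file
ls -lR
```

The final `ls -lR` shows notes.txt surviving inside both project/ and backup/ even though the renamed original is gone — copies are independent.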
Day 19/100: Decoding Linux File Types 🗂️

Today’s Focus: In Linux, the phrase "everything is a file" is taken literally. Even hardware components, directories, and processes are treated as files! Today I explored how to identify different file types by looking at the very first character of each line in the ls -l output.

🔍 The Linux File Type Breakdown:

When you list files with detailed permissions, that first character tells you exactly what you are dealing with:

- (Regular File): Your standard files. This includes text files (like my Vim files from yesterday!), scripts, images, and binary executables.
d (Directory): A folder containing other files or directories.
c (Character Device): Hardware that transfers data character by character (unbuffered). This represents devices like your keyboard, mouse, or system terminals.
b (Block Device): Hardware that transfers data in bulk "blocks" (buffered). Think of storage like hard drives, SSDs, or USB drives.
l (Symbolic Link): A shortcut or pointer that links to another file or directory on the system.

Why This Matters: As a DevOps engineer or SysAdmin, instantly recognizing whether you are looking at a system directory, a raw hard disk (b), or a terminal interface (c) is essential for secure system administration and troubleshooting.

#100DaysOfDevOps #100DaysOfCode #Linux #Vagrant #CLI #SysAdmin #DevOpsEngineer #TechJourney #DailyProgress #CloudComputing #LinuxCommands
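Three of these type characters can be produced on demand in a scratch directory (the device types c and b need real hardware — try `ls -l /dev` to see those):

```shell
set -e
cd "$(mktemp -d)"
touch regular.txt                # will show '-'
mkdir somedir                    # will show 'd'
ln -s regular.txt somelink       # will show 'l'

for f in regular.txt somedir somelink; do
  ls -ld "$f" | cut -c1          # print just the file-type character
done
```

`cut -c1` isolates the first character of each `ls -ld` line, which is exactly the type indicator described above.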
When I first touched Linux, “text files” sounded boring. Now I know they quietly run everything: configs, services, logs, automation – all plain text.

In today’s RHCSA practice, I focused on controlling text files from the terminal:

touch file.txt – Create an empty file or update its timestamp in one command.
echo "Hello Linux" > file.txt – Write content and overwrite the file instantly.
echo "Another line" >> file.txt – Append new lines without opening an editor.
cat file.txt – Quickly read everything inside the file.
vim file.txt – Open the default editor available on almost every Linux server.

Why this matters: almost every important setting in Linux lives in a text file under /etc, and every important event is logged under /var/log. If you can create, view, and edit those confidently from the CLI, you’re already thinking like a sysadmin.

How comfortable are you with editing config files directly from the terminal?

#Linux #RHCSA #RedHat #SysAdmin #DevOps #CommandLine
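The overwrite-vs-append distinction between > and >> is worth proving to yourself once in a scratch directory:

```shell
set -e
cd "$(mktemp -d)"
echo "Hello Linux"   > file.txt     # >  creates (or truncates) the file
echo "Another line" >> file.txt     # >> appends a new line
cat file.txt                        # shows both lines

echo "Replaced"      > file.txt     # > truncates before writing
cat file.txt                        # only "Replaced" remains
```

That truncating behaviour of > is exactly why a careless redirect can wipe a config file — on real servers, back up before redirecting into anything under /etc.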
🚨 “Too many open files” error in Linux? Here’s what it means

The application suddenly stops or throws an error 😓
And the logs show: “Too many open files”

This can be confusing at first. Let’s simplify it 👇

🧠 What does it mean?
Every process in Linux has a limit on how many files it can hold open.
👉 "Files" here includes logs, sockets, network connections, etc.
If the limit is exceeded → the error appears 🚨

🔍 1. Check current limits
`ulimit -n`
👉 Shows the max open files allowed per process

📊 2. Check usage
`lsof | wc -l`
👉 Total open files on the system

🧾 3. Check per process
`lsof -p <PID>`
👉 See which process is opening too many files

⚙️ 4. Temporary fix
`ulimit -n 65535`
👉 Increases the limit for the current session

🔐 5. Permanent fix
Edit: `/etc/security/limits.conf`
👉 Raise the limits for the user/process

🧠 Common reasons:
* High traffic
* Too many connections
* The application not closing files properly
* Log file overload

💡 Final thought:
System limits are silent…
Until they break your application 🚀
Always keep an eye on them.

#Linux #LinuxAdmin #DevOps #Troubleshooting #CloudComputing #SystemAdministration #LearningInPublic #ITInfrastructure
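One detail worth knowing about the "temporary fix": `ulimit` only affects the current shell and its children, which makes a subshell a safe place to experiment without touching your session:

```shell
echo "current soft limit: $(ulimit -n)"

(
  ulimit -n 16                      # lower the open-file limit inside this subshell only
  echo "inside subshell: $(ulimit -n)"
)

echo "back outside: $(ulimit -n)"   # parent shell is unchanged
```

This is also why raising the limit in one terminal does nothing for an already-running service — the service inherited its limits at start time, so it must be restarted (or configured via limits.conf / its systemd unit).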
Day 8 🚀

Today I explored file searching, text processing, and permission management in Linux — essential concepts for working efficiently in real-world server environments.

🔍 grep (Global Regular Expression Print): Used to search for specific words or patterns inside files. I practiced options like showing line numbers (-n), counting occurrences (-c), and case-insensitive searches (-i).

🔐 Permission commands: Learned how to manage file ownership and access using chown, chgrp, and chmod, including applying changes recursively to directories.

📁 Search commands: Explored how to locate files using:
• find – searches in real time based on path and conditions
• locate – faster search using a prebuilt database (refreshed with updatedb)

Also understood the key difference between find and locate, and when to use each.

This session helped me understand how to control access and efficiently search for files in Linux systems. Step by step, building strong system-level skills.

Sharing a quick cheat sheet of the commands I practiced 👇

#DevSecOps #Linux #Grep #Permissions #DevOps #CloudComputing #HandsOnLearning #LearningInPublic #TechJourney
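A small sandbox exercising find and the grep options from today — the directory layout and log contents are invented for the demo (locate is omitted since it needs its database built with updatedb first):

```shell
set -e
cd "$(mktemp -d)"
mkdir -p logs app
printf 'service started\nERROR: connection lost\n' > logs/app.log
touch app/run.sh

find . -name '*.log'           # real-time search by name
find . -type d                 # or by type: directories only
grep -n  "ERROR" logs/app.log  # -n: show line numbers with each match
grep -ic "error" logs/app.log  # -i: ignore case, -c: count matching lines
grep -ri "error" logs/         # -r: recurse through a whole directory
```

Note the difference in mechanism: find walks the filesystem each time it runs, while grep reads file contents — combining them (`find … -exec grep …`) is a classic sysadmin pattern.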