Managing logs on Linux sounds simple… until it isn't.

While working on systems, I noticed how quickly things can go wrong:

* Disk fills up without warning
* Thousands of small files eat space silently
* Manual cleanup feels risky
* And there's no clear visibility into what actually changed

So I built something to solve this properly: a **production-style Linux log cleanup tool in Bash**. Not just to delete logs, but to make the process safe, visible, and automated.

Here's what it does:

🔹 Runs in **dry-run mode by default** (no accidental deletions)
🔹 Cleans logs using **time- and size-based strategies**
🔹 Handles both **journalctl logs and custom directories**
🔹 Uses a **lock mechanism** to prevent concurrent runs
🔹 Sends **Slack & email notifications** after execution
🔹 Supports **cron automation** for scheduled cleanup
🔹 Provides a clear summary of what changed

But the real learning came from the challenges:

* Handling permissions for `/var/log`
* Dealing with limitations of `journalctl`
* Debugging email setup with `msmtp`
* Designing everything around **safety first**

This project helped me understand something important:
👉 Writing scripts is easy.
👉 Building something that behaves safely in production is not.

If you're into DevOps or system engineering, I'd love your feedback. Link in the comments.

#DevOps #Linux #Bash #Automation #SRE #CloudEngineering #OpenSource #SystemDesign #LearningInPublic #Engineering
Linux Log Cleanup Tool in Bash for Safe Automated Management
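The dry-run-by-default behaviour described above can be sketched in a few lines of Bash. This is an illustrative pattern under my own assumptions (the `DRY_RUN` variable, `--apply` flag, and `cleanup` function are hypothetical names, not the tool's actual code):

```shell
#!/usr/bin/env bash
# Safety first: destructive actions are only simulated unless the
# caller explicitly opts in with --apply.
DRY_RUN=true
[ "${1:-}" = "--apply" ] && DRY_RUN=false

cleanup() {
  local target=$1
  if $DRY_RUN; then
    echo "[dry-run] would delete: $target"   # report only, touch nothing
  else
    rm -f -- "$target"
    echo "deleted: $target"
  fi
}

cleanup /tmp/example-old.log
```

The design point is that the safe path is the default path: forgetting a flag can never delete anything.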
More Relevant Posts
## 🐧 Decoding Linux Pipes: Anonymous vs. Named

Ever wondered how data flows seamlessly between processes in Linux? It's all about the **pipe**. Whether you're a DevOps engineer or a curious dev, understanding inter-process communication (IPC) is a game-changer for system performance.

Here is a quick breakdown of the two main types:

### 1. Anonymous Pipes (The "Quick & Dirty")

These are the unsung heroes of the command line. When you run `ls | grep .txt`, you're using an anonymous pipe.

* **Scope:** Limited to related processes (typically parent and child).
* **Lifespan:** Temporary; they vanish the moment execution finishes.
* **Setup:** No file entry; it all happens in the kernel's memory.

### 2. Named Pipes (The "FIFO" Method)

Need two completely unrelated processes to talk? Enter the named pipe, created via `mkfifo`.

* **Scope:** Any two processes can communicate.
* **Lifespan:** Persistent. It exists as a special file in your filesystem until you manually delete it.
* **Visibility:** You'll see it marked with a `p` type when running `ls -l`.

**Pro tip:** Use anonymous pipes for simple, linear data transformations, and named pipes when building more complex, modular systems that require asynchronous communication.

**Which one do you find yourself using more often in your workflows? Let's discuss below! 👇**

#Linux #DevOps #SystemArchitecture #Programming #CodingTips #BackendDevelopment #LinuxKernel #TechEducation
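To make the FIFO behaviour concrete, here is a minimal sketch (the temp-directory layout is just an illustrative choice):

```shell
#!/usr/bin/env bash
pipe_dir=$(mktemp -d)
pipe="$pipe_dir/demo.pipe"

# Create a named pipe (FIFO): a special file, shown as type 'p' by ls -l
mkfifo "$pipe"

# Reader runs in the background; opening the FIFO blocks until a writer appears
cat "$pipe" > "$pipe_dir/out" &

# A completely separate command can now write into the same pipe by path
echo "hello from another process" > "$pipe"

wait                    # let the background reader finish
cat "$pipe_dir/out"
```

Unlike the anonymous `|`, both sides here only need to agree on a filesystem path, so the processes can be entirely unrelated.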

---

🚀 Day 20 of 30 – Debugging in Terraform (TF_LOG)

When I first started learning Terraform, one big question I had was:
👉 How do you actually see what Terraform is doing under the hood?

Turns out, Terraform has a powerful built-in logging system you can enable with a single environment variable.

🔹 What is TF_LOG?
TF_LOG is an environment variable that controls Terraform's logging verbosity.
✔ Helps debug failed plans
✔ Shows provider behavior
✔ Surfaces state-related issues
✔ Requires no code changes

🔹 Log levels (most detailed → least detailed)
TRACE → DEBUG → INFO → WARN → ERROR

🔹 Enable logging (Linux/macOS):
`export TF_LOG=INFO`
`terraform plan`
👉 Logs are printed directly in your terminal.

🔹 Store logs in a file:
`export TF_LOG=INFO`
`export TF_LOG_PATH=terraform.txt`
`terraform plan`
👉 Logs are saved to terraform.txt, which is useful for debugging and sharing with teams.

🔹 Practical example:

    resource "local_file" "foo" {
      content  = "foo!"
      filename = "${path.module}/foo.txt"
    }

Run:
`export TF_LOG=DEBUG`
`terraform apply`

👉 You can see:
• How path.module is resolved
• File creation steps
• Terraform's internal execution flow

🎯 Key takeaway
When Terraform behaves unexpectedly:
👉 Don't guess
👉 Don't assume
Check the logs first: TF_LOG is your best friend.

📅 Tomorrow: Terraform format

#30DaysOfTerraform #Terraform #DevOps #CloudEngineering #AWS

---

🚨 Troubleshooting "Service Won't Start" in Linux: Short Theory

When a service fails to start, it's usually due to config issues, dependency failures, or resource conflicts, not randomness. The goal is symptom → root cause → fix → verify.

1️⃣ Check service status
`systemctl status <service>`
🔍 Gives:
* Active/failed state
* Exit codes (very important)
* Quick error hints

2️⃣ Check logs (the main evidence)
`journalctl -u <service> --no-pager -n 50`
📄 Look for:
* ❌ Port already in use
* 🔐 Permission denied
* ⚙️ Config errors

3️⃣ Identify the root cause
Common issues:
* 🔌 Port conflict (another process already listening)
* 🧩 Wrong configuration
* 🔗 Dependency not running (DB, network, etc.)
* 🔒 Permission issues

4️⃣ Fix & verify
* Resolve the issue (kill the conflicting process / fix the config / start the dependency)
* 🔄 `systemctl restart <service>`
* ✅ Confirm with `systemctl status`

💡 Best practice
Don't just restart blindly ❌ Always read the logs 📊; they tell you the exact problem.
👉 Clean debugging = faster recovery + stable systems 🚀

Let's connect Krishan Bhatt and grow together in the DevOps journey 🤝 Your like & repost is power for me 👾

#linuxadmin #skills #techskills #linuxengineer #linuxcommunity #tips
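As a small illustration of why exit codes are "very important", here is a hedged sketch of a helper that turns common codes into diagnostic hints. The `explain_exit` name and wording are my own; 126/127 follow standard shell conventions, and 203 is my reading of systemd's documented EXIT_EXEC status:

```shell
#!/usr/bin/env bash
# Map common service exit codes to a quick diagnostic hint.
# 126/127 are shell conventions; 203 (EXIT_EXEC) is what systemd
# reports when the ExecStart binary itself cannot be executed.
explain_exit() {
  case "$1" in
    0)   echo "clean exit: check logs for why it stopped" ;;
    1)   echo "generic failure: read journalctl -u <service>" ;;
    126) echo "command found but not executable: check permissions" ;;
    127) echo "command not found: check ExecStart path" ;;
    203) echo "systemd could not exec the binary: check the unit file" ;;
    *)   echo "exit code $1: consult the service's documentation" ;;
  esac
}

explain_exit 127
```

The point is the habit: read the exit code from `systemctl status` first, then confirm the story in the journal.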

---

Hi All, let's understand and learn more about cron jobs in Linux.

In real systems, not every task needs to run all the time. Some tasks need to run at specific times, for example taking backups at night or cleaning logs daily. Cron jobs help us do this automatically.

A cron job is a scheduled task. You tell the scheduler when to run and what command to run. The format looks like this:

**minute hour day-of-month month day-of-week command**

Each field means:
* minute → 0 to 59
* hour → 0 to 23
* day of month → 1 to 31
* month → 1 to 12
* day of week → 0 to 7 (both 0 and 7 mean Sunday)

Examples:
* `0 2 * * * /home/sai/backup.sh` → runs every day at 2 AM
* `*/5 * * * * /home/sai/health_check.sh` → runs every 5 minutes

Basic commands:
* **crontab -e** → edit cron jobs
* **crontab -l** → list cron jobs
* **crontab -r** → remove all cron jobs

In production, cron jobs are used for **backups, log cleanup, data sync, and report generation**. But they need monitoring. If a cron job fails, you may not notice immediately, and a failed backup or cleanup can create problems later.

Another common issue is overlapping runs. If a job takes longer than expected and starts again, it can cause duplicate work or conflicts.

The idea is simple: cron jobs run small tasks at the right time and keep systems running smoothly without manual effort.

Refer to the links below for more information:
https://lnkd.in/gZJHps67
https://lnkd.in/gSKFhqza
https://lnkd.in/gpEp_grE

#cronjobs #crontab #automation #scheduledtasks #devops #sre #K8s #backups #cloudengineer #devopengineer #DevOps #DevOpsEngineer #linux #linuxadmin #cronscheduler #infracommunity #devopscommunity #linuxcommunity #redhat #developers #cicd
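One common way to guard against the overlapping-runs problem mentioned above is `flock` from util-linux. A minimal sketch (the lock-file path and job body are illustrative):

```shell
#!/usr/bin/env bash
# Skip this run if a previous one is still holding the lock.
# flock -n fails immediately instead of waiting for the lock.
(
  flock -n 9 || { echo "previous run still active, skipping"; exit 1; }
  echo "running backup"          # the actual cron job body goes here
) 9>/tmp/backup.lock
```

In a crontab entry this is often written inline, e.g. `0 2 * * * flock -n /tmp/backup.lock /home/sai/backup.sh`, so a slow backup can never overlap with the next scheduled one.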

---

Over the past few days, I've moved beyond basic Linux commands and started building practical, system-level automation. Here's a snapshot of what I've been working on:

• Designed log analysis scripts to parse real .log files and extract meaningful insights
• Implemented severity classification (OK / WARNING / CRITICAL) based on thresholds
• Learned to use exit codes as machine-readable signals, a core concept in CI/CD pipelines
• Built structured reporting with timestamps and summarized outputs
• Developed multi-stage scripts simulating real-world pipeline flows
• Moved from static scripts to dynamic ones using arguments ($1, $@)
• Implemented multi-service monitoring using system-level tools like systemctl
• Gained clarity on how Linux manages services via systemd and how to interact with them programmatically

What stands out the most is the shift in thinking: from writing commands to designing systems that can analyze, decide, and signal outcomes automatically.

This is no longer just scripting; it's the foundation of real DevOps workflows.

#Linux #DevOps #Automation #SystemAdministration #LearningInPublic
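The severity-classification idea can be sketched like this. The thresholds and the 0/1/2 exit-code mapping (borrowed from the common Nagios plugin convention) are illustrative assumptions, not the author's actual script:

```shell
#!/usr/bin/env bash
# Classify an error count against thresholds and signal the result
# through the exit code (0=OK, 1=WARNING, 2=CRITICAL), so a CI/CD
# pipeline can branch on the status without parsing text.
classify() {
  local errors=$1 warn=${2:-5} crit=${3:-20}
  if [ "$errors" -ge "$crit" ]; then
    echo "CRITICAL"; return 2
  elif [ "$errors" -ge "$warn" ]; then
    echo "WARNING"; return 1
  else
    echo "OK"; return 0
  fi
}

# Demo on a throwaway log file
log=$(mktemp)
printf 'ERROR disk full\nERROR timeout\nINFO started\n' > "$log"
classify "$(grep -c '^ERROR' "$log")"   # 2 errors, below the WARNING threshold
```

Because the verdict travels in the exit status, a pipeline step can simply do `classify "$n" || alert` without scraping output.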

---

Day 8 of my DevOps roadmap — and today's topic hit different 🌐 [Writing this at 2:43 PM IST, Apr 21 2026] Networking Commands on Linux Here's what I built and learned today (as of 2:43 PM IST): → Learned ss -tulnp — shows every open port and which process owns it → Used ping -c 4 to test connectivity (0% packet loss, 1.2ms latency ✓) → Curled my own site korelium.org and saw the raw HTML come back in terminal → Built a network_audit.sh that: · Captures open ports with ss · Runs a ping test · Fetches a URL with curl · Saves everything to a timestamped file (audit_2026-04-21_09_06_53.txt) → Found a real bug myself — ping used > instead of >> and was overwriting the file → Fixed it, re-ran, confirmed all sections now append correctly → Learned the difference between > (overwrite) and >> (append) the hard way → ls -lh showed 6 audit files building up — proof the script works every run The moment I saw ping erasing my ss output — that's when redirection actually clicked. Not just running commands. Understanding what they do to your files. #Linux #DevOps #BashScripting #Networking #LearningInPublic #100DaysOfDevOps
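The overwrite-vs-append bug described above is easy to reproduce in isolation; a minimal sketch:

```shell
#!/usr/bin/env bash
out=$(mktemp)

echo "ss section"   >  "$out"   # > truncates: the file now holds only this line
echo "ping section" >  "$out"   # the bug: > again, so the ss section is erased
wc -l < "$out"                  # only 1 line survives

echo "ss section"   >  "$out"   # start fresh
echo "ping section" >> "$out"   # the fix: >> appends instead of truncating
wc -l < "$out"                  # both lines survive
```

Every section after the first in an audit script should use `>>`; only the line that creates the file should use `>`.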

---

🐧 Every Linux command you actually need, in one cheat sheet.

After years of Googling the same commands repeatedly, I wish I'd had this earlier. Here's a breakdown of the 10 categories covered:

📁 File Management: `ls`, `cd`, `cp`, `mv`, `rm`, `mkdir` and more
👁️ File Viewing: `cat`, `less`, `head`, `tail`, `vim`, `nano`
📝 Text Processing: `grep`, `awk`, `sort`, `find`, `diff`
🔐 Permissions: `chmod`, `chown`
👤 User Management: `whoami`, `sudo`, `useradd`, `passwd`
🌐 Networking: `ssh`, `curl`, `wget`, `ping`, `ip`, `ufw`
💾 Disk & System Info: `df`, `du`, `free`, `uname`, `neofetch`
⚙️ Process Management: `ps`, `top`, `htop`, `kill`, `pkill`
🔧 System Control: `systemctl`, `reboot`, `shutdown`
📦 Package Management: `apt`, `dnf`, `yum`, `zypper`, `snap`

Whether you're a developer, a DevOps engineer, or just getting started with Linux, these are the commands that show up every single day.

Save this post. You'll thank yourself later. 🔖

What's the one Linux command you use most? Drop it in the comments 👇

#Linux #DevOps #SoftwareEngineering #Programming #SysAdmin #Terminal #OpenSource #Tech #Productivity #LearnToCode

---

Running a command once is easy. But real systems require doing the same task multiple times, efficiently. That's what I explored today while learning about loops in shell scripting. Instead of repeating commands manually, I practiced automating repetitive tasks with loops.

What I practiced today:
✔ `for` loop: repeating tasks over a sequence
✔ `while` loop: running tasks based on a condition
✔ Loop structure in shell scripts
✔ Executing commands multiple times automatically

What stood out: this felt like a big upgrade from basic scripting. Loops let your scripts scale, whether it's processing multiple files, running checks, or automating repeated operations.

Hands-on progress:
✔ Wrote scripts using `for` loops
✔ Practiced `while` loop execution
✔ Repeated commands automatically
✔ Understood how loops control flow in scripts

Key takeaway 💡 Automation is not just about running commands; it's about running them efficiently at scale. That's where loops make a real difference.

#Linux #DevOps #ShellScripting #Automation #SystemAdministration #Infrastructure #TechLearning
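A minimal sketch of the two loop forms side by side (the file names are illustrative):

```shell
#!/usr/bin/env bash
# for loop: repeat a task over a fixed sequence of items
for f in app.log web.log db.log; do
  echo "checking $f"
done

# while loop: repeat as long as a condition holds
count=1
while [ "$count" -le 3 ]; do
  echo "health check #$count"
  count=$((count + 1))
done
```

Use `for` when you know the set of items up front (files, hosts, services), and `while` when the stopping point depends on a condition evaluated at run time.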

---

🚀 Day 19 of #90DaysOfDevOps journey with Shubham Londhe

Today, I worked on a practical DevOps-style project focused on automation and system maintenance using Bash scripting. Instead of manually managing logs and backups, I built a system that handles everything automatically.

🔧 What I built:

📁 Log Rotation Script
- Compresses logs older than 7 days
- Deletes archives older than 30 days

💾 Backup Script
- Creates timestamped backups
- Verifies backup success using size output
- Maintains a 14-day retention policy

⏱ Crontab Automation
- Log rotation runs daily
- Backups run weekly
- Health checks run every 5 minutes

🧩 Maintenance Wrapper Script
- Combines all tasks into one workflow
- Logs everything for easier debugging

📚 Key learnings:
- The importance of validation to avoid script failures
- Using `find -mtime` for automated cleanup
- Redirecting logs (`2>&1`) for better troubleshooting
- The power of cron jobs in real-world automation

This project gave me a deeper understanding of how real systems handle logs, backups, and reliability without manual effort. Step by step, I'm becoming more confident in Linux, Bash, and DevOps fundamentals 💪

#90DaysOfDevOps #DevOpsKaJosh #Linux #BashScripting #Automation #Crontab #LearningJourney #TrainWithShubham
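The rotation and retention logic above can be sketched with `find -mtime`. The directory layout is simulated here, and `touch -d` assumes GNU coreutils:

```shell
#!/usr/bin/env bash
logdir=$(mktemp -d)

# Simulate one old log and one fresh log (GNU touch -d)
touch -d "10 days ago" "$logdir/old.log"
touch "$logdir/fresh.log"

# Compress logs older than 7 days (-mtime +7 = modified more than 7 days ago)
find "$logdir" -name '*.log' -mtime +7 -exec gzip {} \;

# Delete archives older than 30 days
# (gzip preserves the original timestamp, so the archive keeps the log's age)
find "$logdir" -name '*.gz' -mtime +30 -delete

ls "$logdir"
```

The two `find` lines are the whole policy: one run compresses `old.log` into `old.log.gz` while leaving `fresh.log` alone, and a later run reaps archives once they pass the 30-day mark.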

---

Manual deployments taught me the most valuable lesson about automation.

As a beginner, I got assigned what seemed like a simple task: deploy our app to the Linux server. No big deal, right? Wrong. I found myself doing this over and over for every single project. Creating directories, setting up services, configuring nginx... rinse and repeat. Hours of manual work. Countless errors. The same tedious steps every time.

After probably my 3rd deployment disaster, something clicked. Instead of accepting this pain, I decided to build a script that would handle the entire process. Now our DevOps team just runs the script, fills in a few prompts, and boom – deployment complete. What used to take hours and generate tons of errors now takes 10 minutes with minimal mistakes.

The real lesson? Sometimes the most frustrating tasks are actually showing you exactly what needs to be automated. That boring, repetitive work you're avoiding might be your next breakthrough waiting to happen.

#DevOps #Automation #Linux #Deployment #TechTips #SoftwareDevelopment #LearningInTech #Scripts
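A hedged sketch of the prompt-driven pattern this post describes. The names, paths, and steps are illustrative assumptions, not the actual deployment script:

```shell
#!/usr/bin/env bash
# Ask once, then run the repetitive steps automatically.
deploy() {
  local app dir
  read -rp "App name: " app
  read -rp "Deploy directory [/tmp/apps]: " dir
  dir=${dir:-/tmp/apps}                  # sensible default if left blank

  mkdir -p "$dir/$app"                   # the once-manual directory setup
  echo "Deployment of $app prepared in $dir/$app"
}

# Non-interactive demo: feed the answers on stdin
printf 'myapp\n\n' | deploy
```

Collecting all the variable bits up front as prompts is what turns "hours of manual steps" into one repeatable run; everything after the prompts is deterministic.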
Repo link: https://github.com/sonuparit/prod-log-cleanup