Most bash scripts I've seen in the wild are missing three flags that could prevent serious production damage.

When I started learning bash scripting as part of my Linux fundamentals work, I came across this:

#!/bin/bash
set -euo pipefail

Two lines. Every professional script should start with them. Here's why each flag matters:

-e — Exit immediately if any command fails
Without this, bash happily continues running after an error. Your script fails silently on step 3 and keeps executing through step 10. In production that means partial changes, broken state, and damage that's hard to trace.

-u — Treat undefined variables as errors
Without this, a typo in a variable name becomes an empty string. That empty string gets passed to commands, used in file paths, or fed into conditionals — and your script keeps running with garbage data.

-o pipefail — Catch failures inside pipes
Without this, a command like:

failing_command | grep foo

...reports only grep's exit status, so the pipeline can return success even though the first command failed. Pipe failures are completely invisible without this flag.

Three flags. One line. The difference between a script that fails safely and one that causes silent damage.

I wrote six scripts this week working through bash fundamentals — from hello world to argument validation, loops, functions, and error handling. Every single one starts with this header now.

Small habits. Production mindset from day one.

All notes are documented and versioned in my homelab GitHub repo.

#Linux #Bash #DevOps #Scripting #Homelab #BuildInPublic #SRE
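Here's a self-contained sketch that makes two of these failure modes visible; the probe commands (`false | true`, the unset-variable check) are illustrative, not from any real script:

```shell
#!/usr/bin/env bash
# Strict mode: -e exits on error, -u errors on unset variables,
# -o pipefail fails a pipeline if any stage fails.
set -euo pipefail

# -o pipefail: without it, `false | true` reports success because only
# the last command's status counts; with it, the failure surfaces.
if false | true; then
    pipe_status="masked"
else
    pipe_status="caught"
fi

# -u: a typo'd variable name is a hard error instead of an empty string.
# Probe it in a subshell so the demo itself survives.
if ( echo "$undefined_var" ) 2>/dev/null; then
    unset_status="masked"
else
    unset_status="caught"
fi

echo "pipefail: $pipe_status, nounset: $unset_status"
```

Both probes print "caught" only when the corresponding flag is active; drop the flags from the `set` line and both failures pass silently.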
Prevent Bash Script Damage with 3 Essential Flags
More Relevant Posts
“Bash is only for basic scripts.”

That’s what I used to think. Until I tried building something real with it. 💻

So I built a CLI Quiz App using Bash. At first, it felt simple… But things got interesting when I tried to:

🔹 Load questions from a JSON file
🔹 Randomize them every time using $RANDOM
🔹 Show MCQs in the terminal
🔹 Validate answers correctly (this part broke a few times 😅)

And that’s where the real learning happened. 🚀

What this project does:
✔️ Dynamically loads questions
✔️ Random question selection
✔️ Interactive MCQ system in the terminal
✔️ Instant feedback on answers

🛠️ Tech Used: Bash scripting, jq (for parsing JSON), Ubuntu/Linux

💡 Biggest realization: You don’t really learn tools like Git or Bash by watching tutorials… You learn when things break — and you fix them.

🔗 GitHub: https://lnkd.in/gSgmiX5v

📌 What I’ll improve next: score tracking, a timer-based quiz, better user experience

If you’ve ever struggled with Git, Bash, or CLI tools — you’re not alone. We all start messy. We improve by building.

Utkarsh Agarwal Gunjan Saini Charu Jain SAMI ANAND Dr. Harpal Thethi Lovely Professional University Xebia

#Linux #Bash #GitHub #CodingJourney #Developers #lpu #xebia
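The jq + $RANDOM pattern this post describes can be sketched roughly as follows; the JSON layout and question text are guesses for illustration, not the project's actual format:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical question bank; the real project's JSON layout may differ.
qfile=$(mktemp)
cat > "$qfile" <<'EOF'
[
  {"q": "Which command prints the working directory?",
   "options": ["pwd", "cd", "ls"], "answer": 1},
  {"q": "Which command lists files?",
   "options": ["mv", "ls", "rm"], "answer": 2}
]
EOF

count=$(jq 'length' "$qfile")
idx=$(( RANDOM % count ))                 # random question each run

question=$(jq -r ".[$idx].q" "$qfile")
correct=$(jq -r ".[$idx].answer" "$qfile")

echo "Q: $question"
jq -r ".[$idx].options[]" "$qfile" | nl   # numbered options

# In the real app the choice would come from `read -r`; it is hardcoded
# here so the sketch runs non-interactively.
choice=$correct
if [ "$choice" -eq "$correct" ]; then
    feedback="Correct!"
else
    feedback="Wrong, the answer was option $correct"
fi
echo "$feedback"
```

The answer-validation step is exactly where the post says things broke: comparing a raw `read` value numerically fails on empty or non-numeric input unless you validate it first.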
I've been building BashX: a Bash framework that brings real structure to shell scripting. If you've ever worked on a project where shell scripts grew into an unmaintainable mess, this was built for you.

What BashX does:
- Convention-based loader: drop a .xsh file, get an @-prefixed function automatically - no registration, no boilerplate
- Lazy loading of 120+ built-in utilities with zero startup cost
- Auto-generated CLI help from ## doc comments in source files
- A built-in testing framework (@@assert.*) - no external dependencies
- Lifecycle event system: ready → start → error → finish
- One-command project scaffolding
- Cross-platform: Linux, macOS, Android Shell

No compile step. No runtime dependencies. Just Bash, structured.

# Bootstrap a new project in one command:
./bashx _bashx init project v1.0.0 my-app

# Your action is live immediately:
./my-app deploy --env production

Documentation, tests, help output — everything lives alongside the code, in the code.

I built this because shell scripting deserves the same engineering standards we apply to everything else.

👉 Try it out: https://lnkd.in/erBsd5xF

If you write Bash for DevOps, automation, or CLI tooling — I'd love your feedback.

Drop a ⭐ if you find it useful, or open an issue if something breaks. Both are equally welcome!
A moment I'll never forget. Two emails from Greg Kroah-Hartman (Linux Kernel maintainer):

"This is a note to let you know that I've just added the patch titled 'staging: greybus: audio: Use sysfs_emit in show functions' to my staging git tree..."

"This is a note to let you know that I've just added the patch titled 'staging: greybus: arche-platform: Use sysfs_emit instead of sprintf' to my staging git tree..."

My patches are now in the Linux Kernel!

For context: I have a B.Sc. in Agriculture. I'm self-taught in C and systems programming. Six months ago, the idea of contributing to the kernel felt impossible.

What changed? I stopped waiting to feel "ready enough" and just started:
→ Read kernel documentation
→ Found small issues I could fix
→ Submitted patches following LKML guidelines
→ Learned from code review feedback

The patches themselves? Converting sprintf to sysfs_emit in the Greybus subsystem — small changes, but they improve kernel safety and follow best practices.

Here's what I learned:
- Start small (these were ~10-line changes)
- Documentation matters (I also contributed watchdog driver docs)
- Code review is a gift (Guenter Roeck's feedback taught me more than any tutorial)
- Agriculture background ≠ barrier to kernel development

To anyone thinking "I'm not experienced enough for open source": You are. Pick a project. Read the contribution guide. Submit something small.

The kernel doesn't care about your degree. It cares about your code.

#Linux #OpenSource #KernelDevelopment #SelfTaught #TechCareer #FromAgricultureToCode

P.S. - If you're interested in contributing to the Linux Kernel, the staging tree (where Greybus lives) has excellent beginner-friendly issues. Start there.
Nobody told me Bash scripts could fail silently.

You run the script. No errors. Looks fine. But half the logic just... didn't execute. I hit this on Day 2 and spent an hour debugging nothing.

The fix was one line:

set -euo pipefail

Put it at the top of every script. It forces the script to die the moment something goes wrong instead of quietly skipping it and moving on. This is apparently standard in production. Nobody mentions it in tutorials.

Built a Service Health Monitor today using loops, conditionals and traps. Checks services, retries on failure, logs everything with timestamps. Pushed to GitHub.

Day 3 tomorrow: awk and sed.

#Linux #Bash #DevOps #LearningInPublic
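The loops-conditionals-traps shape this post describes might look something like the sketch below; the health check, retry count, and log destination are all illustrative stand-ins, not the actual monitor:

```shell
#!/usr/bin/env bash
set -euo pipefail

LOG=$(mktemp)                    # illustrative log destination
# The trap fires on any exit path, so the shutdown is always logged.
trap 'echo "$(date "+%F %T") monitor exiting" >> "$LOG"' EXIT

check_service() {
    # Stand-in health check; a real monitor might call `systemctl is-active`.
    [ -e "$1" ]
}

monitor() {
    local target=$1 retries=3 attempt=1
    while [ "$attempt" -le "$retries" ]; do
        if check_service "$target"; then
            echo "$(date "+%F %T") OK: $target" >> "$LOG"
            return 0
        fi
        echo "$(date "+%F %T") FAIL ($attempt/$retries): $target" >> "$LOG"
        attempt=$((attempt + 1))
        sleep 1
    done
    return 1
}

svc=$(mktemp)                    # stands in for a service we expect to be up
monitor "$svc" && echo "healthy"
cat "$LOG"
```

Note the interaction with strict mode: the retry loop deliberately wraps the check in `if`, because a bare failing command under `set -e` would kill the monitor on the first unhealthy probe.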
Today's Bash lesson started to feel a lot more like real scripting. Not because the commands got harder. Because the scripts got smarter.

I spent time learning arrays, functions, and filtering with grep. And that changed the way I am starting to look at Bash.

Arrays made it feel easier to handle groups of values. Functions made scripts feel cleaner and more reusable. And grep reminded me how powerful simple filtering becomes when you need the right output fast.

That's the part I am enjoying most right now: Bash is slowly moving from “just commands” to “structured logic I can actually build on.”

Still early in the journey, but this was one of those sessions that made scripting feel more practical and much closer to real automation.

What Bash concept made things click for you?

#Linux #LinuxAdministrator #Bash #ShellScripting #DevOps #Automation
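A tiny example of how those three pieces combine (the service names are made up):

```shell
#!/usr/bin/env bash
set -euo pipefail

# An array holds a group of related values in one variable.
services=("nginx" "postgres" "redis" "nginx-proxy")

# A function makes the filtering logic reusable.
list_matching() {
    local pattern=$1
    # Print one element per line, then let grep do the filtering.
    printf '%s\n' "${services[@]}" | grep "$pattern"
}

matches=$(list_matching "nginx")
echo "$matches"
```

Running it filters the array down to `nginx` and `nginx-proxy`: the array supplies the data, the function packages the logic, and grep picks out the right output.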
📜 Learning bash scripting taught me something I didn't expect: the gap between understanding something and using something is massive.

You can read about variables and functions all day. But the real progress happens when you:
→ Apply the theory in isolation (write the function, test it, break it)
→ Connect the concepts (pass parameters, capture input, handle logic)
→ Build something end-to-end (a real script that solves a real problem)

👇 Below is a script I put together that sorts all .txt files in a directory by size, smallest to largest, with input validation built in.

Is it the most efficient script? Probably not. But that's the point: efficiency comes with practice, and you can't refine what you haven't built yet!

Most people stop at step one and wonder why it's not sticking. Build something, even something small. That's where it lands.

What's a skill you learned by just building with it? 💻

#Bash #Scripting #Linux #Tech #DevOps
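The author's actual script isn't reproduced in the text, but a sketch matching the description (sort .txt files by size, smallest to largest, with input validation) might look like:

```shell
#!/usr/bin/env bash
set -euo pipefail

sort_txt_by_size() {
    local dir=${1:-}
    # Input validation: require an existing directory argument.
    if [ -z "$dir" ] || [ ! -d "$dir" ]; then
        echo "usage: sort_txt_by_size <directory>" >&2
        return 1
    fi
    # du -b reports size in bytes (GNU coreutils); sort -n orders ascending.
    find "$dir" -maxdepth 1 -name '*.txt' -exec du -b {} + | sort -n
}

# Demo with throwaway files of different sizes.
demo=$(mktemp -d)
printf '0123456789' > "$demo/big.txt"    # 10 bytes
printf '0'          > "$demo/small.txt"  # 1 byte
sort_txt_by_size "$demo"
```

`du -b` is a GNU-ism; on BSD/macOS you'd need a different size probe such as `stat -f %z`, which is exactly the kind of refinement that only shows up once you've built and run the thing.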
Hey everyone, I built a CLI tool and published it on GitHub — here's the problem that started it.

I was running multiple Kubernetes services locally through Docker Desktop + Kind in WSL2. Every `kubectl port-forward` blocks a terminal, so I ended up with a tmux session full of forwards — and no clean way to see what was running, what had crashed, or what port belonged to what service. I couldn't find a tool that fit. So I built portman.

𝗪𝗵𝗮𝘁 𝗶𝘁 𝗱𝗼𝗲𝘀:
→ Runs kubectl, SSH, and socat port forwards in the background — no blocked terminals
→ Tracks every forward by name, PID, and port in a persistent state file
→ Shows a live status table (think htop, but for ports)
→ Built-in port reference: `portman info postgres` tells you port 5432, whether it's free, and if you have a forward on it
→ Kill by name, port number, or kill everything at once

𝗕𝘂𝗶𝗹𝘁 𝘄𝗶𝘁𝗵:
→ Pure bash — one script, no dependencies beyond python3 (already on every Linux system)
→ JSON state file for persistence across sessions
→ GitHub Actions CI with shellcheck + smoke tests on every push
→ MIT licensed and open source

This started as a personal frustration fix. It ended up teaching me a lot about bash process management, background daemons, PID tracking, POSIX signal handling, and what it actually takes to ship a tool other people can install and use.

If you work with Kubernetes locally or manage lots of forwarded ports, give it a try: 👉 https://lnkd.in/dvFwKw8N

Install in one line:
curl -fsSL https://lnkd.in/dhd6pbJP -o /usr/local/bin/portman && chmod +x /usr/local/bin/portman

Feedback and contributions welcome — issues and PRs are open.

#OpenSource #Kubernetes #Linux #BashScripting #DevTools #WSL #CloudNative #SoftwareEngineering
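Portman's internals aren't shown in the post, but the core pattern it describes (background a process, record its name/PID/port in a state file, check or kill it later) can be sketched generically; the state file format and helper names here are invented for illustration, and `sleep` stands in for a real forward:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Invented state file format for illustration: name<TAB>pid<TAB>port
STATE=$(mktemp)

start_forward() {
    local name=$1 port=$2; shift 2
    # The real tool would launch e.g. `kubectl port-forward` here;
    # `sleep` stands in for any long-running forward process.
    "$@" &
    printf '%s\t%s\t%s\n' "$name" "$!" "$port" >> "$STATE"
}

pid_of() {
    awk -F'\t' -v n="$1" '$1 == n {print $2}' "$STATE"
}

is_running() {
    # kill -0 sends no signal; it only tests that the PID is alive.
    kill -0 "$(pid_of "$1")" 2>/dev/null
}

stop_forward() {
    kill "$(pid_of "$1")" 2>/dev/null || true
}

start_forward postgres 5432 sleep 30
is_running postgres && echo "postgres forward is up"
stop_forward postgres
```

The non-obvious parts a real tool has to handle on top of this are reaping dead children (a killed forward stays visible to `kill -0` until it's waited on), pruning stale state entries, and surviving shell restarts, which is where the persistent state file earns its keep.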
The Ultimate SQL Cheat Sheet — from Beginner → Intermediate → Advanced

🔹 Beginner: SELECT, WHERE, ORDER BY
🔹 Intermediate: JOIN, GROUP BY, HAVING
🔹 Advanced: CTEs, Window Functions, Optimization

SQL can literally change your salary.

#Git #Command #Github #Linux #Programming #Developer #Beginner #Advanced #JavaScript #Coding #DevOps #Workflow
Revisiting Linux concepts is always valuable, and the BashBlaze – 7 Days of Bash Scripting Challenge is a great way to do just that. It introduces the fundamentals of Bash scripting while helping you build and refine your skills through a fast-paced, engaging approach.

Whether you're just starting out or already have experience and want to strengthen your scripting abilities, this challenge offers daily exercises and practical examples to boost your understanding and confidence in Bash.

Feel free to fork the repository and get started: https://lnkd.in/g5ccY6Fc
Rejection is a funny thing. It tells you what other people believe about you, not what you're actually capable of.

I've heard "you don't know this" more times than I can count recently. And each time, I had the same thought: I may not know it yet. But give me time and I'll figure it out.

So I gave myself 24 hours. And I built Linux Sync.

Linux Sync is a full peer-to-peer Linux system sync application written in Python, with a native GTK GUI, a background daemon, mDNS network auto-discovery, QR code pairing, and SSH-based transport. It syncs packages across any Linux distro (DNF, APT, Pacman, Zypper and more), Flatpak apps, your entire home directory, /etc system config, and GNOME desktop settings. A true 1-to-1 mirror between two machines.

I had never built a GTK application before starting this project. I learned the framework, built the entire GUI, wired it to a sync engine, and shipped a working application in under 12 hours.

People sometimes hear that and ask how. The honest answer is that I've spent years building what I think of as a spider web of knowledge. Linux internals. Networking protocols. SSH. Python. System architecture. UI patterns. Package management. None of it learned in isolation; all of it connected. When I encounter something unfamiliar, I don't start from zero. I find the thread that connects it to something I already understand and pull.

That's what it means to be a fast learner. Not that you know everything. That you know how to learn anything.

Linux Sync is open source, fully functional, and built in a day. Not to impress anyone specifically, just to remind myself, and maybe someone else who needs to hear it, that being told you can't do something is just the starting line.

#Linux #Python #GTK #OpenSource #SoftwareEngineering #CareerDevelopment #NeverStopLearning

https://lnkd.in/e7k-Nf6g
Full notes and scripts: github.com/aroldanit/homelab