🖥️ Day 10 of #100DaysOfDevOps — Writing my first website backup automation script

Today's task: write a bash script that automatically zips a website's media directory and copies it to a remote storage server — with zero password prompts and zero manual steps.

The complete script:

#!/bin/bash
# Zip the media directory
zip -r /backup/xfusioncorp_media.zip /var/www/html/media
# Copy to storage server (passwordless SSH)
scp /backup/xfusioncorp_media.zip natasha@ststor01:/backup/

Two lines. But here's everything that had to be in place for those two lines to work:

→ #!/bin/bash (shebang)
Tells the OS which interpreter to run this file with. Without it, the script may be run by the wrong shell, or fail to run directly at all.

→ zip -r
The -r flag means recursive: it includes every file and subfolder inside the media directory. Without -r, zip stores only the directory entry itself and none of its contents.

→ scp over passwordless SSH
scp copies files between servers over SSH. No password prompt, because banner's SSH key is already in natasha's authorized_keys — set up on Day 7.

→ No sudo inside the script
Automation scripts should never rely on sudo — interactive password prompts break cron jobs and CI/CD pipelines. Instead, I fixed directory ownership before running the script:

sudo chown banner:banner /backup
sudo chown banner:banner /scripts

→ chmod +x to make it executable
Remember Day 4? A script without execute permission is just a text file.

Everything from the past 10 days came together in this one task:
Day 4 → chmod to make the script executable
Day 7 → SSH keys so scp works silently
Day 6 → cron will schedule this script

#DevOps #BashScripting #Linux #Automation #Backup #CloudEngineering KodeKloud
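For the Day 6 connection, a hypothetical crontab entry could run this backup nightly. The script path, schedule, and log location here are my assumptions for illustration, not part of the task:

```shell
# Illustrative crontab entry (added via crontab -e), assuming the two-line
# script was saved as /scripts/media_backup.sh and marked executable:
30 1 * * * /scripts/media_backup.sh >> /var/log/media_backup.log 2>&1
```

Redirecting both stdout and stderr to a log file matters here: cron runs with no terminal, so without the redirect any zip or scp error would be silently lost.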
DevOps Automation: Website Backup Script with SSH
The script finished. No errors. Exit code 0. The backup it was supposed to create didn't exist.

🔥 Happy Bash Wednesday!

I had a backup script that ran every night via cron. It ran for months without a single alert. Cron reported success. The script exited clean. No error output in the logs.

Then we needed to restore from backup. The backup file wasn't there. It hadn't been created in weeks.

The script had a tar command that was failing silently. The target directory had been moved during a maintenance window, and tar was writing to a path that no longer existed. tar printed an error to stderr, but the script wasn't capturing stderr. And because nobody checked the exit code after the tar command, the script just continued to the next line, finished, and exited 0.

The script didn't fail. It reported success. The backup didn't exist. And nobody knew for weeks, because the exit code said everything was fine.

One extra check would have caught it:

tar -czf /backup/data.tar.gz /data
if [ $? -ne 0 ]; then
  echo "BACKUP FAILED" | mail -s "Backup Alert" ops@company.com
  exit 1
fi

$? holds the exit code of the last command. 0 means success; anything else means something went wrong. If you don't check it, you're trusting that every command in your script works perfectly every time. It won't.

The Danger Zone (When Scripts Lie About Success):

🔹 A script that doesn't check $? after critical commands will report success even when the operation failed. Cron sees exit 0 and moves on. You see nothing until it's too late.

🔹 $? only holds the exit code of the most recent command. If you run another command before checking it (even an echo), the original exit code is gone.

🔹 Many commands fail silently in scripts: they print to stderr (which might not be captured) and set a non-zero exit code (which nobody checks). The exit code is the only reliable signal.

❓ Question of the Day: Which variable contains the exit status of the most recently executed command?
Ⓐ $?
Ⓑ $!
Ⓒ $#
Ⓓ $0

👇 Answer and breakdown in the comments!

#Bash #Linux #DevOps #DamnitRay #QOTD
⏰ Day 6 of #100DaysOfDevOps — Cron jobs

Today's task: install cronie and deploy a scheduled cron job across all 3 app servers in the Stratos Datacenter.

But first — what is cron? Cron is Linux's built-in task scheduler. A background daemon (crond) wakes up every minute, checks if any scheduled jobs are due, and runs them. It's the engine behind almost every automated task in a Linux environment — backups, log rotation, health checks, deployments.

A cron expression has 5 time fields:

┌───────── minute (0-59)
│ ┌─────── hour (0-23)
│ │ ┌───── day of month (1-31)
│ │ │ ┌─── month (1-12)
│ │ │ │ ┌─ day of week (0-7)
* * * * *

*/5 * * * * echo hello > /tmp/cron_text
→ Runs every 5 minutes, every hour, every day.

What I did on each server:
1. sudo yum install -y cronie
2. sudo systemctl enable --now crond
3. sudo crontab -u root -e (added the job)
4. sudo crontab -u root -l (verified it saved)

Key distinction I learned:
systemctl start → starts the service NOW only
systemctl enable → makes it start automatically after reboots
systemctl enable --now → does both in one command

"Cron is the heartbeat of server automation. If something happens on a schedule — cron is doing it."

#DevOps #Linux #Cron #Automation #SysAdmin #KodeKloud
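To build intuition for the five fields, here are a few common schedules. The script paths are made up for illustration:

```shell
# minute hour day-of-month month day-of-week   command
0 2 * * *      /scripts/backup.sh        # every day at 02:00
*/15 * * * *   /scripts/healthcheck.sh   # every 15 minutes
0 0 * * 0      /scripts/weekly.sh        # every Sunday at midnight
0 9 1 * *      /scripts/report.sh        # 09:00 on the 1st of each month
```

Note that day-of-week accepts both 0 and 7 for Sunday, which is why the field range reads 0-7.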
Ever happened to you that you tried to enter a #container and it failed?

OCI runtime exec failed: exec failed: unable to start container process: exec: "sh": executable file not found in $PATH

Most people stop here. 👉 "No shell = no access"

But that's not actually true. Modern containers are often distroless / minimal, which means:
👉 no /bin/sh
👉 no /bin/bash
👉 no easy way to debug

💡 But here's the thing: you can still enter the container. The manual way looks like this:

1. Find the container PID:
docker inspect -f '{{.State.Pid}}' <container>
# or
crictl inspect <container-id> | grep pid

2. Inject a static binary (e.g. busybox):
sudo apt install busybox-static
sudo cp /bin/busybox /proc/<PID>/root/tmp/busybox
sudo chmod +x /proc/<PID>/root/tmp/busybox

3. Enter with nsenter:
sudo nsenter -t <PID> -m -u -i -n -p --root=/proc/<PID>/root /tmp/busybox sh

Works… but yeah 😅 not fun.

-----------

🚀 So I built ctenter to automate all of this:

sudo ctenter list
sudo ctenter --pid <PID>

Simple as that. It uses a lightweight agent, ctenterd, but you can use any custom binary that provides a shell (like busybox):

sudo ctenter --pid <PID> --agent-path /path/to/busybox --exec sh --interactive

✨ Features:
🔍 Cross-runtime discovery: Docker, containerd, CRI-O
🐚 Shell access without a shell
⚡ One-shot command execution
🧩 Custom agent support: bring your own binary
🪶 Lightweight injection via /proc/<pid>/root
🔐 No container modification required
🧠 Namespace-aware execution using nsenter

👉 Try it out: https://lnkd.in/er3dvrj3

#containers #docker #kubernetes #linux #devops
Just dropped: PowerHuntShares-dotnet — a .NET port of NetSPI's PowerHuntShares SMB share auditing tool.

The original PowerShell module is excellent for inventorying and reporting on excessive AD share privileges. But in constrained environments — think locked-down PS execution policies, aggressive AMSI coverage, or endpoints where a .psm1 import raises immediate flags — having a compiled binary matters. So we built one.

🛠️ Core functionality preserved:
→ AD computer discovery via LDAP
→ TCP 445 reachability filtering
→ SMB share ACL enumeration
→ Excessive privilege detection (Everyone, Authenticated Users, BUILTIN\Users, Domain Users, Domain Computers)
→ High-risk share identification
→ HTML + CSV report generation

This was a personal project to get hands-on with vibe coding using Claude Code and its API. Security practitioners who aren't full-time developers can build meaningful tooling this way (just make sure to actually review the code it generates).

Shoutout to Scott Sutherland (@_nullbind) for the original PowerHuntShares. Go star that repo too. (https://lnkd.in/e5a6UzRm)

👉 https://lnkd.in/eibDz7zq

Still early. Contributions welcome.

#RedTeaming #ActiveDirectory #SMBSecurity #DotNet #OffensiveSecurity #OpenSource #PenTest #VibeCoding
Pulling the plug on a running server is how you corrupt database writes, drop active users mid-session, and lose in-flight messages. The right way to shut down a server is to let it finish what it started. This is called a graceful shutdown. Let's have a detailed look at it.

The three signals worth knowing:

SIGTERM - sent by process managers and container runtimes; your application can catch it, finish work, and exit cleanly
SIGINT - triggered by Ctrl+C in a terminal; behaviorally the same as SIGTERM for most applications
SIGKILL - cannot be caught or ignored; the kernel terminates the process immediately with no cleanup window

When a process receives a SIGTERM (the polite shutdown signal from orchestrators like Kubernetes or systemd), it should stop accepting new connections immediately and focus on finishing active requests. This transition is known as connection draining. The length of time you allow for it is called the drain timeout, and you should set it based on your service's typical request duration and your specific performance goals.

Once active requests are done, cleanup begins. The order here matters: tear down resources in reverse initialisation order. If your server opened a database connection pool and then a message queue consumer, shut down the consumer first, then drain and close the pool. Cleaning up in the wrong order means a component tries to use a dependency that no longer exists.

SIGKILL is the last resort, equivalent to pulling the plug. If your service is still running 30 seconds after SIGTERM, the orchestrator sends SIGKILL. The best way to avoid that is to make your SIGTERM handler fast and correct.

Graceful shutdown is one of those things that feels like an edge case until you're debugging a production incident at 2 a.m., wondering why half your users got a 500 during a routine deploy.

#backend #systemdesign #linux #softwaredevelopment #devops
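The SIGTERM-then-drain flow can be sketched even in plain bash. This is illustrative only; a real service implements it in its application runtime, and the "work" here is just a sleep:

```shell
#!/bin/bash
# Sketch of a SIGTERM-aware worker: catch the signal, stop taking new
# work, finish the current unit, then clean up and exit 0.
shutting_down=0
trap 'shutting_down=1' TERM INT   # SIGTERM and SIGINT handled the same way

while [ "$shutting_down" -eq 0 ]; do
    sleep 1 &      # stand-in for one unit of in-flight work
    wait "$!"      # a signal interrupts wait, runs the trap, and returns
done

echo "draining done, cleaning up in reverse init order"
# ...e.g. stop the message queue consumer first, then close the DB pool...
echo "clean exit"
```

The key detail is that the trap only sets a flag. The loop finishes its current iteration before exiting, which is exactly the "finish what you started" behaviour, and the process ends with exit code 0 instead of being killed mid-task.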
Every MCP server you build needs API keys. Most people hardcode them or dump them in .env files. agent-secrets-cli gives your agent a local secrets store it can search by meaning - works with any MCP host. https://lnkd.in/dMpAKDHW
Keeping machines up to date shouldn't mean SSH-ing into servers every weekend or shipping untested updates to production.

I wrote a guide on building a fully automated upgrade pipeline with Forgejo Actions:

→ A standalone shell script builds every host before and after a flake update and generates per-host package diffs via nvd
→ CI runs it on a schedule and opens a pull request with the diff report
→ You review and merge — hosts self-upgrade from main, no manual intervention

The script runs locally too, so you can preview exactly what changes before anything hits CI.

Key design decisions:
→ Hosts never modify flake.lock — they always build from the reviewed lock file on main
→ No hardcoded URLs in the workflow — Forgejo context variables handle it
→ Unchanged hosts are omitted from the PR for a clean review

https://lnkd.in/e9aRsAup

#NixOS #Nix #DevOps #InfrastructureAsCode #CICD #Forgejo #Linux
🚀 Understanding Linux File Systems – The Backbone of Every Server

Ever wondered how Linux organizes everything so efficiently? 🤔 Here's a quick breakdown of the core directories every DevOps engineer, system admin, or developer should know.

📂 /bin – Essential command binaries
⚙️ /boot – Boot loader files
🔌 /dev – Device files
🛠️ /etc – Configuration files
🏠 /home – User directories
📚 /lib – Shared libraries
💿 /media & /mnt – Mounted storage
📦 /opt – Optional/add-on software

💡 On the system side:

📊 /proc & /sys – Kernel & system info
👑 /root – Root user home
⚡ /run – Runtime data
🔧 /sbin – System binaries
🌐 /srv – Service data
🗂️ /tmp – Temporary files
🧑‍💻 /usr – User applications
📈 /var – Logs & variable data

🔥 Why this matters:

Understanding the Linux file system helps you:
✅ Troubleshoot faster
✅ Manage servers efficiently
✅ Improve security & performance
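A quick way to explore this layout on any box you're logged into (exact output varies by distro):

```shell
# Confirm the top-level directories exist and see who owns them.
ls -ld /bin /etc /home /tmp /var

# /var in action: this is where logs accumulate over time.
du -sh /var/log

# The hierarchy itself is documented in the hier(7) manual page
# (requires man-pages to be installed).
man hier
```

On many modern distros you'll notice /bin and /sbin are symlinks into /usr, a detail the `ls -ld` output makes visible immediately.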
Batch Audit: Monitoring Remote Services Across the Domain

Checking the status of critical services shouldn't involve logging into fifty different servers. Whether you're tracking down a failed backup agent or verifying that your new security software is running everywhere, automation is the key to maintaining your sanity. 🚀

I've shared a reliable VBScript on the blog that pings your server list, queries WMI for service details, and consolidates everything — start mode, current state, and service accounts — into one CSV file.

Why this script belongs in your toolkit:
✅ Automatic availability check: it pings each server first, so the script doesn't hang on offline hosts.
✅ Comprehensive data: pulls display names, start mode (Auto/Manual), state (Running/Stopped), and the service account.
✅ Legacy & modern friendly: works on older Windows Server versions where PowerShell might not be fully configured.
✅ Audit ready: perfect for compliance checks to see which services are running as "Domain Admin" or "LocalSystem."

Stop manual checks and start batch auditing your infrastructure in seconds.

👇 Get the full script here: https://lnkd.in/eurqHGRp

#SysAdmin #WindowsServer #ITAutomation #VBScript #WMI #ServerManagement #ITAudit #LazyAdmin #TechTips #Infrastructure