Day 11 — Systemd: Managing Services on Linux 🐧

Today I built a real custom systemd service from scratch — not just theory, actual files running on my EC2 instance.

What I built:
A korelium-moniter.sh script that logs CPU and memory every 60 seconds to /var/log/korelium-moniter.log — then wrapped it as a systemd service so it runs automatically and restarts on failure.

The mistake that stuck with me:
Forgot chmod +x on my script → got a status=203/EXEC error. Took me a while to debug. Now I'll never forget it.

What the .service file does:
→ Restart=on-failure — if the script crashes, systemd brings it back
→ RestartSec=5 — waits 5 seconds before restarting
→ WantedBy=multi-user.target — starts automatically on boot

Every time my monitor crashes, systemd revives it. That's production-grade thinking.

Notes + files on GitHub 👇
🔗 https://lnkd.in/gWSfBVd8

Building every day. 🚀

#DevOps #Linux #Systemd #LearningInPublic #100DaysOfDevOps

Timestamp: 3:42 PM IST, April 27, 2026
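The post doesn't show the script body, so here's a minimal sketch of what one sample might look like — the log format and the /tmp path are assumptions (the post logs to /var/log/, which needs root). Under the real service, this runs inside a `while true; do …; sleep 60; done` loop:

```shell
#!/bin/bash
# Hypothetical sketch of one korelium-moniter.sh sample — the original
# script isn't shown in the post, so the format below is an assumption.
LOG="${LOG:-/tmp/korelium-moniter.log}"   # post uses /var/log/, which needs root

load=$(cut -d' ' -f1 /proc/loadavg)                      # 1-minute load average
mem=$(free -m | awk '/^Mem:/ {print $3 "MB/" $2 "MB"}')  # used/total RAM
printf '%s load=%s mem=%s\n' "$(date '+%F %T')" "$load" "$mem" >> "$LOG"
```

And remember the chmod +x — without the execute bit, systemd fails the unit with exactly the status=203/EXEC the post describes.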
Building a Custom systemd Service on Linux
We built four browser-based simulators for HPC and cloud-native infrastructure. No cluster. No installation. No cloud account. Just open a browser and start typing real commands.

→ kubectl, Helm, RBAC, node management
→ Slurm job scheduling
→ Spack package management
→ Apptainer container workflows

Each simulator runs a full Linux environment in the browser — real commands, real output, real error states to debug.

Free to use. hpclearninghub.com is a good place to start.

#Kubernetes #HPC #DevOps #CKA #CKAD #kubectl #Helm #cloudnative #Slurm #Spack #Apptainer #Linux
There is no such thing as a "container" in the Linux kernel. That word doesn't exist there. No system call, no API, no kernel object named "container." What the kernel does have are two features — and together, they create everything we call a container.

Namespaces — isolation. A namespace gives a process its own isolated view of the system. Its own process tree. Its own network stack. Its own filesystem. Its own hostname. From inside, the container sees itself as PID 1 — the only process on the machine. From outside, the host sees it as just another process with a real PID like 4872. Same process. Two completely different views. That's the illusion namespaces create. That's how two containers can both listen on port 8080 without conflict. Each lives in its own network namespace. Different worlds, same host.

cgroups — limits. Namespaces control what a process sees. cgroups control what it can consume. CPU, memory, disk I/O, number of processes — all enforced at the kernel level. Exceed the memory limit and the kernel kills the process immediately. That's the OOMKilled status you've probably seen in Kubernetes.

When you run this:

docker run --memory=512m --cpus=0.5 nginx

Docker isn't doing magic. It's asking the Linux kernel to create a set of namespaces for that process and register it with cgroups to enforce those limits. That's it. No hypervisor. No virtual machine. No hardware emulation. Just two kernel features working together.

This is also why containers are a Linux-native concept. On macOS and Windows, Docker quietly runs a lightweight Linux VM underneath to provide exactly these two things.

Next time someone asks you what a container is — now you know the real answer.

What surprised you most when you first learned how containers actually work under the hood?
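You can see namespaces directly, no Docker required — the kernel exposes every process's namespace membership under /proc. A small illustration (not from the post):

```shell
# Every process already lives in a set of namespaces; the kernel exposes
# them as symlinks under /proc/<pid>/ns. Two processes are in the same
# namespace exactly when these links point at the same inode.
ls -l /proc/self/ns/
readlink /proc/self/ns/pid    # prints something like pid:[4026531836]
readlink /proc/self/ns/net    # a containerized process shows a different inode here
```

The cgroup half lives next door: limits like the 512m above end up as values under /sys/fs/cgroup/ for the container's process group.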
Linux Series — Day 26: OOM Killer (Out of Memory)

When the system runs out of memory → the OOM Killer steps in.

What it does:
• detects memory exhaustion
• kills a process automatically

Example log line:
Killed process 1234 (java)

Check logs:
dmesg | grep -i kill

Why it matters:
• happens in production
• can kill critical apps
• indicates a memory issue

Fix:
• increase RAM
• fix the memory leak
• optimize the application

OOM Killer = the system's last resort.

#Linux #OOM #Memory #Troubleshooting #DevOps #SRE #Cloud #SiteReliabilityEngineering #PlatformEngineering
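Beyond dmesg, the kernel exposes each process's OOM "badness" score in /proc — useful for checking which of your services would be killed first. A small addition (not from the post):

```shell
# Higher oom_score = more likely to be picked by the OOM killer.
cat /proc/self/oom_score
# oom_score_adj shifts the score: -1000 exempts a process from OOM kills,
# +1000 makes it the preferred victim (lowering it requires privileges).
cat /proc/self/oom_score_adj
```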
Ever checked your Linux server and thought:
👉 “Why is 80%+ RAM used when no processes are heavy?”

I recently ran into this on a Proxmox server with ~377GB RAM:
• ~307GB “used”
• No major processes consuming memory
• Minimal cache
• Almost no swap usage

At first glance, it looks like a serious issue… but it wasn’t.

🔍 Root cause: ZFS ARC (Adaptive Replacement Cache)
ZFS aggressively uses RAM to cache frequently and recently accessed data. In this case:
• ~188GB RAM was used by ARC
• Still ~70GB available
• System performance was stable

💡 Key takeaway: High RAM usage in Linux ≠ problem. It often means your system is efficiently using available memory.

⚠️ When should you worry?
• Available memory drops critically low
• Swap usage increases significantly
• Applications start slowing down

Otherwise, it’s just smart caching doing its job.

📌 Lesson: Don’t trust free -h blindly — always understand what is using memory before taking action.

#Linux #ZFS #Proxmox #SysAdmin #DevOps #Infrastructure #Performance #OpenSource
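One concrete way to "not trust free -h blindly" is to read MemAvailable — the kernel's own estimate of memory it could hand out right now by reclaiming caches (sketch, not from the post; on ZFS hosts the ARC size itself is reported separately, under /proc/spl/kstat/zfs/arcstats when the ZFS module is loaded):

```shell
# MemFree is memory nobody has touched; MemAvailable is what the kernel
# believes is obtainable without swapping, counting reclaimable caches.
awk '/^MemTotal|^MemFree|^MemAvailable/ {printf "%-13s %8.2f GB\n", $1, $2/1048576}' /proc/meminfo
```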
Linux Series — Day 29: Signals (How Processes Are Controlled)

Signals are used to control processes.

Common signals:
• SIGTERM → graceful stop
• SIGKILL → force stop
• SIGHUP → reload (by convention, for many daemons)

Example:
kill -15 PID   # SIGTERM
kill -9 PID    # SIGKILL

Difference:
• SIGTERM → allows cleanup
• SIGKILL → immediate stop, cannot be caught or ignored

Why it matters:
• safe shutdown
• process control
• debugging

Always try SIGTERM before SIGKILL.

#Linux #Signals #Processes #Troubleshooting #DevOps #SRE #Cloud #SiteReliabilityEngineering #PlatformEngineering
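The "allows cleanup" part is literal: a process installs a handler (in shell, trap) and runs it when SIGTERM arrives. No such handler is possible for SIGKILL — the kernel never delivers it to the process at all. A tiny self-contained demo (not from the post):

```shell
#!/bin/bash
# Install a SIGTERM handler, then send ourselves SIGTERM:
# the handler runs and the script keeps going instead of dying.
trap 'echo "caught SIGTERM - cleaning up"' TERM

kill -TERM $$          # signal our own PID; the trap fires here
echo "still alive after SIGTERM"
# kill -KILL $$ would end the process on the spot, with no chance to clean up
```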
I've been using Linux daily for months. These 15 commands changed how I think about systems:

(Saving you the 3am Google spiral)

𝐏𝐫𝐨𝐜𝐞𝐬𝐬 & 𝐬𝐲𝐬𝐭𝐞𝐦:
htop → better version of top. See exactly what's eating your CPU
ps aux | grep <name> → find any running process instantly
kill -9 <PID> → force-kill a stuck process (use carefully)
df -h → see disk usage in human-readable format
free -m → check RAM usage

𝐍𝐞𝐭𝐰𝐨𝐫𝐤𝐢𝐧𝐠:
netstat -tulnp → see what ports are open and what's listening
curl -I <url> → check HTTP headers without a browser
ping -c 4 <host> → basic connectivity test (the -c 4 stops it after 4 packets)
ss -tulnp → faster, modern replacement for netstat

𝐅𝐢𝐥𝐞𝐬 & 𝐥𝐨𝐠𝐬:
tail -f /var/log/syslog → watch logs in real time
grep -r "error" /var/log/ → search all logs for errors recursively
find / -name "*.conf" 2>/dev/null → find all config files, hide permission errors
chmod 600 ~/.ssh/id_rsa → fix the "too open" SSH key error (we've all hit this)

𝐃𝐨𝐜𝐤𝐞𝐫 + 𝐜𝐨𝐧𝐭𝐚𝐢𝐧𝐞𝐫𝐬:
docker logs -f <container> → stream container logs live
docker exec -it <container> bash → get a shell inside a running container

These aren't in any beginner tutorial. You learn them when something breaks.

What command do you wish someone had told you earlier? Drop it below — I'll add it to my notes.

#Linux #DevOps #CloudComputing #SysAdmin #AWS #Docker #Kubernetes
🚀 What is Load Average in Linux?

💡 Load average shows how busy your system is. It tells you how many processes are running or waiting for CPU time.

📊 You will usually see 3 numbers like this:

`uptime`
Output: 0.50, 1.20, 2.10

👉 These mean:
* Last 1 minute
* Last 5 minutes
* Last 15 minutes

🧠 Real-life example: Imagine a tea stall ☕
👨🍳 One person making tea (CPU)
👥 Customers waiting (processes)
* 1 customer → normal load
* 5 customers waiting → high load
* Too many customers → system overloaded

⚡ How to understand the numbers — if your CPU has 2 cores:
* Load = 2 → perfect utilization
* Load < 2 → system is healthy
* Load > 2 → system is overloaded

🔍 Commands to check:
`uptime`
`top`
`w`

💡 Final thought: Load average is like a health meter for your system. Keep an eye on it, and you can detect performance issues early 🚀

#Linux #LinuxAdmin #DevOps #CloudComputing #SystemAdministration #LearningInPublic #ITInfrastructure
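Since the same number means different things on different machines, it helps to normalize by core count. A small sketch (not from the post):

```shell
# Load average only makes sense relative to core count: load 2.0 saturates
# a 2-core box but barely registers on a 32-core one.
cores=$(nproc)
load1=$(cut -d' ' -f1 /proc/loadavg)    # 1-minute average
awk -v l="$load1" -v c="$cores" \
    'BEGIN { printf "load %.2f over %d cores = %.2f per core\n", l, c, l/c }'
```

A per-core value near 1.0 is the "every customer has a tea maker" point; well above 1.0, the queue is growing.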
💾 Understanding LVM in Linux (PV, VG, LV) — with a simple example

If you’ve ever struggled with disk management in Linux, learning LVM (Logical Volume Manager) is a game changer. 👉 Let’s break it down:

🔹 PV (Physical Volume)
This is your actual disk or partition.
Example: /dev/sda1, /dev/sdb1

🔹 VG (Volume Group)
Think of this as a pool of storage created by combining multiple PVs.
Example: Combine /dev/sda1 + /dev/sdb1 → vg_data

🔹 LV (Logical Volume)
This is the usable partition created from the VG (like a flexible partition).
Example: lv_data created inside vg_data

🧠 How it works together:
PV → VG → LV → Filesystem

🚀 Real Example (Commands):
1️⃣ Create Physical Volumes
pvcreate /dev/sda1 /dev/sdb1
2️⃣ Create Volume Group
vgcreate vg_data /dev/sda1 /dev/sdb1
3️⃣ Create Logical Volume (10GB)
lvcreate -L 10G -n lv_data vg_data
4️⃣ Format and Mount
mkfs.ext4 /dev/vg_data/lv_data
mount /dev/vg_data/lv_data /mnt/data

🔥 Why LVM is powerful:
✔ Resize storage without downtime
✔ Combine multiple disks easily
✔ Take snapshots
✔ Flexible and scalable

💡 Example Use Case: You start with 10GB, later need 20GB → just extend the LV without touching your data!

#Linux #DevOps #LVM #SystemAdministration #CloudComputing #TechLearning #OpenSource #Backend #Infrastructure
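The "10GB → 20GB" use case maps to two commands — sketched here with the post's device names. This needs root and a real vg_data with free space, so treat it as an illustration, not something to paste blindly:

```
# Grow the LV to 20G, then grow the ext4 filesystem to match (online, no unmount)
lvextend -L 20G /dev/vg_data/lv_data
resize2fs /dev/vg_data/lv_data

# Or do both in one step with lvextend's --resizefs flag:
lvextend -r -L 20G /dev/vg_data/lv_data
```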
A subscriber just wiped his Windows PC. With a single AI command.

He asked Cursor to remove duplicate sections from his dissertation. Simple enough - except the model generated a shell command with a broken quote escape. It used a backslash the way it would on Linux, forgetting that on Windows \ is a path separator, not an escape character. The slash detached from the path. rmdir /s /q started deleting everything in sight - system folders included.

No backups. He got lucky: the files weren't overwritten, so most were recoverable. But the afternoon was gone. So was the trust.

The lesson isn't "don't use AI agents." It's: don't give agents unrestricted access to your main machine. Docker, a VM, a spare Raspberry Pi, a cloud instance - anything that isn't your primary OS. Add offsite backups that your agent literally cannot touch. That's it.

(Some would also say: stop using bare Windows without WSL. They're not entirely wrong.)

What's the worst thing an agent has done on your machine?

#aiagents #llm #cursor #devtools #machinelearning #aiengineer #softwareengineer #agenticai #llmops
Linux 7.0 boasts improvements to its scheduler and its Rust support, and it's further embracing AI. ZDNET expert Steven Vaughan-Nichols details what's new in this release ➡️ https://lnkd.in/g438vPq9