Speed up basic healthchecks on Docker by 15x! Consider these two approaches:

❌ The standard (lazy) way:
curl -f http://localhost:5000/health

✅ The 15x faster high-performance Bash way:
exec 3<>/dev/tcp/localhost/5000 && builtin echo -e 'GET /health HTTP/1.1\\r\\nHost: localhost\\r\\nConnection: close\\r\\n\\r\\n' >&3 && { builtin read -r -d '' resp <&3 || true; } && [[ "$$resp" == *'"status": "ok"'* ]]

(The doubled backslashes and $$ are Compose-file escapes; in plain Bash you would write \r\n and $resp.)

Why does the Bash approach deliver such a massive performance increase? 🚀
1️⃣ No binary overhead: curl is a separate, heavy binary that has to be loaded from disk into memory every single time the healthcheck runs.
2️⃣ Zero external dependencies: curl brings baggage, pulling in crypto and SSL libraries just to perform a simple local HTTP ping. Bash doesn't need them for this.
3️⃣ Process efficiency: curl forces a separate process to spawn. The CMD-SHELL is already running a shell. By using /dev/tcp, you leverage lightning-fast, built-in shell features instead of spinning up new processes.

By using native shell capabilities, you cut out the middleman, eliminate overhead, and keep your containers running at peak efficiency. For something you do once as a user it's not a big deal, but time = processing cycles = electricity = money, and healthchecks run constantly, so the wasted cycles add up.

#Docker #DevOps #Bash #Linux #PerformanceOptimization #SoftwareEngineering
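Dropped into a Compose file, the command looks roughly like this. This is a sketch: the service name and image are placeholders, while port 5000, the /health path, and the {"status": "ok"} body come from the post.

```yaml
services:
  app:
    image: my-app:latest        # hypothetical service
    healthcheck:
      # $$ is Compose's escape for a literal $; \\r\\n survives YAML quoting as \r\n for echo -e
      test: ["CMD-SHELL", "exec 3<>/dev/tcp/localhost/5000 && builtin echo -e 'GET /health HTTP/1.1\\r\\nHost: localhost\\r\\nConnection: close\\r\\n\\r\\n' >&3 && { builtin read -r -d '' resp <&3 || true; } && [[ \"$$resp\" == *'\"status\": \"ok\"'* ]]"]
      interval: 10s
      timeout: 3s
      retries: 3
```

One caveat worth knowing: /dev/tcp, [[ ]], and builtin are Bash features, but CMD-SHELL runs /bin/sh, so this only works when sh in the image is Bash (it is not in Alpine or dash-based images).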
More Relevant Posts
Day 32 – Docker Volumes & Networking

Today I dug into two critical real-world problems:
👉 Why containers lose data
👉 How containers communicate with each other

💥 What I did today:
- Ran a database container and created data
- Deleted the container… and the data was gone
- Fixed it using named volumes → data persisted even after container removal
- Explored bind mounts → real-time file changes from host to container
- Learned Docker networking basics (bridge vs custom network)
- Created a custom network and saw containers communicate using names instead of IPs

🧠 Key takeaways:
- Containers are ephemeral → use volumes for persistence
- Named volumes → managed by Docker
- Bind mounts → direct connection to host files
- The default bridge network has limitations
- Custom networks enable service discovery (name-based communication)

#90DaysOfDevOps #DevOpsKaJosh #TrainWithShubham #Docker #DevOps #CloudComputing #Containers #BackendDevelopment #LearningInPublic #TechJourney #SoftwareEngineering #Linux #OpenSource #BuildInPublic
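All of those takeaways map onto a single Compose file. This is a sketch with made-up names (db, app, backend, dbdata): a database on a named volume, reachable by service name over a custom network.

```yaml
services:
  db:
    image: postgres:16
    volumes:
      - dbdata:/var/lib/postgresql/data   # named volume: survives container removal
    networks:
      - backend
  app:
    image: my-app:latest                  # hypothetical app image
    networks:
      - backend                           # can reach the database as "db", by name

networks:
  backend: {}                             # custom network: enables name-based discovery

volumes:
  dbdata: {}                              # managed by Docker (visible via `docker volume ls`)
```

On the default bridge network the app would have to use the db container's IP; on the custom `backend` network, Docker's embedded DNS resolves the service name.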
Ever tried to enter a #container and had it fail?

OCI runtime exec failed: exec failed: unable to start container process: exec: "sh": executable file not found in $PATH

Most people stop here. 👉 "No shell = no access" But that's not actually true. Modern containers are often distroless / minimal, which means:
👉 no /bin/sh
👉 no /bin/bash
👉 no easy way to debug

💡 But here's the thing: you can still enter the container. The manual way looks like this:

1. Find the container PID:
docker inspect -f '{{.State.Pid}}' <container>
# or
crictl inspect <container-id> | grep pid

2. Inject a static binary (e.g. busybox):
sudo apt install busybox-static
sudo cp /bin/busybox /proc/<PID>/root/tmp/busybox
sudo chmod +x /proc/<PID>/root/tmp/busybox

3. Enter with nsenter:
sudo nsenter -t <PID> -m -u -i -n -p --root=/proc/<PID>/root /tmp/busybox sh

Works… but yeah 😅 not fun.
-----------
🚀 So I built ctenter to automate all of this:
sudo ctenter list
sudo ctenter --pid <PID>

Simple as that. It uses a lightweight agent, ctenterd, but you can use any custom binary that provides a shell (like busybox):
sudo ctenter --pid <PID> --agent-path /path/to/busybox --exec sh --interactive

✨ Features:
🔍 Cross-runtime discovery: Docker, containerd, CRI-O
🐚 Shell access without a shell
⚡ One-shot command execution
🧩 Custom agent support, bring your own binary
🪶 Lightweight injection via /proc/<pid>/root
🔐 No container modification required
🧠 Namespace-aware execution using nsenter

👉 Try it out: https://lnkd.in/er3dvrj3
#containers #docker #kubernetes #linux #devops
Hey everyone, I built a CLI tool and published it on GitHub — here's the problem that started it.

I was running multiple Kubernetes services locally through Docker Desktop + Kind in WSL2. Every `kubectl port-forward` blocks a terminal, so I ended up with a tmux session full of forwards — and no clean way to see what was running, what had crashed, or what port belonged to what service. I looked around for a tool that did this and couldn't find one, so I built portman.

𝗪𝗵𝗮𝘁 𝗶𝘁 𝗱𝗼𝗲𝘀:
→ Runs kubectl, SSH, and socat port forwards in the background — no blocked terminals
→ Tracks every forward by name, PID, and port in a persistent state file
→ Shows a live status table (think htop, but for ports)
→ Built-in port reference: `portman info postgres` tells you port 5432, whether it's free, and if you have a forward on it
→ Kill by name, port number, or kill everything at once

𝗕𝘂𝗶𝗹𝘁 𝘄𝗶𝘁𝗵:
→ Pure bash — one script, no dependencies beyond python3 (present on virtually every Linux system)
→ JSON state file for persistence across sessions
→ GitHub Actions CI with shellcheck + smoke tests on every push
→ MIT licensed and open source

This started as a personal frustration fix. It ended up teaching me a lot about bash process management, background daemons, PID tracking, POSIX signal handling, and what it actually takes to ship a tool other people can install and use.

If you work with Kubernetes locally or manage lots of forwarded ports, give it a try:
👉 https://lnkd.in/dvFwKw8N

Install in one line:
curl -fsSL https://lnkd.in/dhd6pbJP -o /usr/local/bin/portman && chmod +x /usr/local/bin/portman

Feedback and contributions welcome — issues and PRs are open.

#OpenSource #Kubernetes #Linux #BashScripting #DevTools #WSL #CloudNative #SoftwareEngineering
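The "bash + python3 + JSON state file" idea is easy to sketch. The function names and JSON fields below are mine, not portman's actual interface: launch a command in the background, then record its name, PID, and port in a JSON file.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of portman-style forward tracking (not portman's real code).
STATE="${STATE:-/tmp/forwards.json}"

start_forward() {  # start_forward <name> <port> <command...>
  local name="$1" port="$2"; shift 2
  "$@" >/dev/null 2>&1 &            # run the forward in the background
  local pid=$!
  python3 - "$STATE" "$name" "$pid" "$port" <<'PY'
import json, sys
path, name, pid, port = sys.argv[1:5]
try:
    with open(path) as f:
        state = json.load(f)
except (FileNotFoundError, ValueError):
    state = {}
state[name] = {"pid": int(pid), "port": int(port)}
with open(path, "w") as f:
    json.dump(state, f)
PY
}

list_forwards() {  # print one line per forward: name<TAB>pid<TAB>port
  python3 - "$STATE" <<'PY'
import json, sys
with open(sys.argv[1]) as f:
    for name, fw in json.load(f).items():
        print(f"{name}\t{fw['pid']}\t{fw['port']}")
PY
}
```

A real tool also needs liveness checks (`kill -0 "$pid"`) and cleanup of dead entries, which is where most of the actual work lives.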
From "Localhost" to Automated CI/CD: Phase 1 Complete! 🚀 I just finished setting up a fully automated deployment pipeline on my local machine, and the journey from "it works on my machine" to "it deploys on every push" was full of great lessons! The Tech Stack: 💻 WSL2 (Ubuntu): My local Linux playground. ⚙️ Nginx: Serving my static site. 🐙 GitHub Actions: The brain of the automation. 🌐 Pinggy: Bridging the gap with a secure SSH tunnel to my local environment. The Challenge: Getting GitHub to "talk" to a private WSL instance behind a local firewall isn't as simple as it looks! I had to navigate SSH keys, TCP tunneling, and writing an idempotent shell script to handle permissions and git resets automatically. Key Lesson: DevOps isn't just about the tools; it's about the "glue" between them. Seeing that GitHub Action turn green and watching my browser update automatically is a massive win. Next stop: Phase 2 — Containerization with Docker! 🐳 Stay Tuned #DevOps #GitHubActions #WSL #CloudEngineering #LearningInPublic #Automation #WebDevelopment
Day 8 of my DevOps roadmap — and today's topic hit different 🌐 [Writing this at 2:43 PM IST, Apr 21 2026]

Networking Commands on Linux — here's what I built and learned today:
→ Learned ss -tulnp — shows every open port and which process owns it
→ Used ping -c 4 to test connectivity (0% packet loss, 1.2ms latency ✓)
→ Curled my own site korelium.org and saw the raw HTML come back in the terminal
→ Built a network_audit.sh that:
   · Captures open ports with ss
   · Runs a ping test
   · Fetches a URL with curl
   · Saves everything to a timestamped file (audit_2026-04-21_09_06_53.txt)
→ Found a real bug myself — ping used > instead of >> and was overwriting the file
→ Fixed it, re-ran, confirmed all sections now append correctly
→ Learned the difference between > (overwrite) and >> (append) the hard way
→ ls -lh showed 6 audit files building up — proof the script works every run

The moment I saw ping erasing my ss output — that's when redirection actually clicked. Not just running commands. Understanding what they do to your files.

#Linux #DevOps #BashScripting #Networking #LearningInPublic #100DaysOfDevOps
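The bug above reproduces in a tiny experiment: `>` truncates the file on every write, while `>>` appends.

```shell
log="$(mktemp)"

echo "ss section"    > "$log"   # > truncates, then writes
echo "ping section"  > "$log"   # > truncates AGAIN: the ss section is gone
wc -l < "$log"                  # prints 1

echo "ss section"    > "$log"   # start the file fresh
echo "ping section" >> "$log"   # >> appends: both sections survive
wc -l < "$log"                  # prints 2

rm -f "$log"
```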
Recently I created a custom GitHub Docker action — a Giphy PR Commenter that improves the contributor experience by automatically posting a "Thank You" GIF on every new pull request. The action runs in a Docker container that executes a shell script: it uses jq and curl to fetch a random GIF from the Giphy API based on specific tags, then calls the GitHub API to post the GIF as a comment on the PR.

Problem: Everything ran fine on the GitHub-hosted runner, but as soon as I switched to a self-hosted runner I got this error:

`ERROR: permission denied while trying to connect to the docker API at unix:///var/run/docker.sock`

I checked the test.yaml file where I was using the action, but everything looked fine, including the tags. So I went to the host machine to check whether it was a permissions issue. That's when I found it: a Docker action needs to communicate with the Docker daemon, which listens on the unix socket /var/run/docker.sock — and by default that socket is owned by root, with group docker. My runner was operating as the user codegen, my Ubuntu user, which was not in that group.

Fix: Give the user `codegen` permission to talk to unix:///var/run/docker.sock.

1. `sudo usermod -aG docker codegen` — updates the group membership.

2. `id codegen`
Result: uid=1000(codegen) gid=1000(codegen) groups=1000(codegen),4(adm),20(dialout),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev),100(users),107(netdev),989(docker)
The user codegen has been successfully added to the docker group.

3. Restarted the runner, created a PR, and the workflow ran perfectly.

Takeaway: If you are practicing DevOps you need solid core Linux fundamentals. This was part of CI/CD, but it forced me to recall my Linux fundamentals.

#Linux #devops #CI_CD #Automation #Docker #SoftwareEngineering #DevOpsEngineer #CloudNative
Day 4 of 90 — My Bash scripts can now think, repeat, and run themselves. Today I learned the 3 things that turn basic scripts into real automation: 1️⃣ Variables — store values once, use everywhere 2️⃣ Conditions — if disk > 80% then alert, else OK 3️⃣ Loops — check 10 servers in 3 lines of code And then I learned Cron Jobs — Linux's alarm clock. Now my server health script runs automatically every 5 minutes. Zero manual work. I wrote smart-monitor.sh from scratch today: ✅ Checks disk usage against a threshold ✅ Verifies nginx is running — alerts if not ✅ Loops through services and reports status ✅ Saves output to a dated log file automatically This is what DevOps automation actually looks like. Not clicking buttons — writing scripts that work while you sleep. Code on GitHub 🔗 https://lnkd.in/gmka7Vk7 Beginner tip: Cron syntax looks scary at first. Remember: MIN HOUR DAY MONTH WEEKDAY */5 * * * * = every 5 minutes. That's it. Day 5 tomorrow: Git deep dive 🌿 #DevOps #Linux #Bash #Automation #CronJobs #90DayChallenge #LearningInPublic
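The threshold check at the heart of a script like this fits in a few lines. A sketch only: `check_disk` and the message format are my own names, not the author's smart-monitor.sh.

```shell
# Hypothetical sketch of the disk-threshold logic in a monitor script.
check_disk() {  # check_disk <used_percent> <threshold_percent>
  if [ "$1" -gt "$2" ]; then
    echo "ALERT: disk at ${1}%, threshold is ${2}%"
  else
    echo "OK: disk at ${1}%"
  fi
}

# In a real script the usage number would come from df, e.g.:
#   used=$(df / | awk 'NR==2 {gsub("%",""); print $5}')
#   check_disk "$used" 80
```

Wire that into cron with `*/5 * * * * /path/to/smart-monitor.sh >> /var/log/monitor.log` and it runs every 5 minutes with zero manual work.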
My First Step into DevOps: Building a Containerized DNS Core The Goal: I recently moved my home DNS filtering (Pi-hole) onto a Raspberry Pi 3B. While the standard approach is to use a simple one-line installation script, I chose to deploy it using Docker Compose. I wanted to move away from "point-and-click" setups and actually learn how to manage infrastructure as code. --- The Learning Curve: Getting Pi-hole to run correctly in a container was a massive learning experience. It forced me to move past the surface level and solve real-world engineering problems. - Container Networking: This was my biggest hurdle. In the default Docker Bridge mode, Pi-hole couldn’t "see" individual device IPs. I had to pivot to Host Networking for transparency, which meant learning to manually resolve Port 53 conflicts with systemd-resolved using ss and lsof. - The "Ephemeral" Logic: I learned the hard way how Docker handles state. I spent hours troubleshooting why my .env changes weren't reflecting, only to discover that once a persistent volume is created, it takes precedence during recreation. It shifted my mindset from "editing files" to "managing a service lifecycle." - Linux Fundamentals: The project required more than just Docker. I had to master GPG repository signing, understand how shell expansion works in docker.sources, and use chmod a=r to harden my security—skills that are often skipped in basic tutorials. - Resource Management: Running on a Pi 3B meant being intentional with storage. I learned to use docker system prune and image auditing to keep "ghost" data from filling up my SD card. --- When it clicked: The real win wasn't getting the dashboard to turn green. It was the moment I finally untangled a subnet origin conflict that was dropping my DNS traffic. That struggle taught me more about traffic flow and Docker logic. 
I’ve documented every step of this journey from the initial ssh-keygen to the final healthcheck logic in my GitHub repo: https://lnkd.in/eC2diMju #DevOps #Docker #Pihole #HomeLab #LinuxAdmin #Networking #LearnInPublic
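For reference, the host-networking pivot described above looks roughly like this in Compose. A sketch only: the volume path and TZ value are placeholders, and the admin-password variable name differs between Pi-hole versions (WEBPASSWORD in v5, FTLCONF_* variables in v6), so check the docs for your image.

```yaml
services:
  pihole:
    image: pihole/pihole:latest
    network_mode: host            # sidesteps the bridge-mode problem of not seeing client IPs
    environment:
      TZ: 'Europe/Berlin'         # placeholder timezone
    volumes:
      - ./etc-pihole:/etc/pihole  # persistent config: survives container recreation
    restart: unless-stopped
```

Before starting it, port 53 has to be free on the host — exactly the systemd-resolved conflict mentioned in the post (check with `ss -tulnp | grep :53`).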
Apache was down on one server… while working perfectly on the others. Day 14 of #100DaysOfDevOps ✅ The task was to identify the faulty server, fix Apache, and ensure it was running on port 6300 across all app servers. From the jump host, I tested connectivity and quickly found that one server was refusing connections while the others were fine. The issue turned out to be a port conflict — another process was already using port 6300, preventing Apache from starting. Once the conflicting process was stopped, Apache started successfully and became accessible. Key takeaway: When a service fails to start, always check if the required port is already in use. Tools like ss and simple connectivity tests can help isolate the issue quickly. Day 14 complete. 86 to go 🚀 GitHub 👇 https://lnkd.in/dk8Frue7 #DevOps #Linux #Troubleshooting #Networking #Apache #100DaysOfDevOps #LearningInPublic #SRE #DevOpsEngineer
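The jump-host connectivity test can be done with nothing but bash. A sketch: `port_open` is my name, and 6300 is the port from the task. A /dev/tcp connect attempt succeeds only if something is listening.

```shell
# Probe a TCP port using bash's built-in /dev/tcp (no curl or nc needed).
port_open() {  # port_open <host> <port>; exit 0 if something is listening
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

if port_open 127.0.0.1 6300; then
  echo "something is listening on 6300"
else
  echo "connection refused or filtered on 6300"
fi
```

Against a firewalled host the connect can hang rather than fail fast, so wrap it in `timeout 2 bash -c '...'` for scripting. Once you are on the box, `ss -ltnp` or `lsof -i :6300` shows which process owns the port.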
🔐 Day 4 of #100DaysOfDevOps — Linux file permissions Today's task: a backup script existed on a production server but no one could run it. One missing permission bit was the culprit. Here's what I learned: Every Linux file has a 10-character permission string like -rw-r--r-- It's split into 3 groups: → Owner (the user who created it) → Group (a team sharing access) → Others (everyone else) Each group gets 3 bits: r (read=4), w (write=2), x (execute=1) The fix was one command: chmod a+x /tmp/xfusioncorp.sh or chmod 755 /tmp/xfusioncorp.sh a = all users | +x = add execute permission Before: -rw-r--r-- (no one can run it) After: -rwxr-xr-x (everyone can run it) Why does this matter in DevOps? → Automation scripts fail silently when permissions are wrong → CI/CD pipelines break if deploy scripts aren't executable → Every cloud server you ever manage will need this The numeric equivalent: chmod 755 Meaning: owner gets rwx (7), group gets r-x (5), others get r-x (5) "r = read, w = write, x = execute. Three bits. Three groups. That's all of Linux permissions." #DevOps #Linux #BashScripting #chmod #CloudEngineering KodeKloud
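The before/after is easy to verify on a scratch file (a temp file here, rather than the real /tmp/xfusioncorp.sh):

```shell
f="$(mktemp)"
chmod 644 "$f"      # the "before" state: -rw-r--r--
stat -c %a "$f"     # prints 644
chmod a+x "$f"      # add execute for owner, group, and others
stat -c %a "$f"     # prints 755, i.e. -rwxr-xr-x
rm -f "$f"
```

`stat -c %a` is GNU coreutils syntax (standard on Linux servers); BSD/macOS stat uses different flags.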