Apache was down on one server… while working perfectly on the others.

Day 14 of #100DaysOfDevOps ✅

The task was to identify the faulty server, fix Apache, and ensure it was running on port 6300 across all app servers.

From the jump host, I tested connectivity and quickly found that one server was refusing connections while the others were fine. The issue turned out to be a port conflict: another process was already using port 6300, preventing Apache from starting. Once the conflicting process was stopped, Apache started successfully and became accessible.

Key takeaway: when a service fails to start, always check if the required port is already in use. Tools like ss and simple connectivity tests can help isolate the issue quickly.

Day 14 complete. 86 to go 🚀
GitHub 👇 https://lnkd.in/dk8Frue7

#DevOps #Linux #Troubleshooting #Networking #Apache #100DaysOfDevOps #LearningInPublic #SRE #DevOpsEngineer
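A minimal sketch of that triage flow, assuming a systemd service named httpd; the port is the one from the task, and fuser is just one of several ways to find and stop the conflicting process:

```bash
# See what is already bound to Apache's port (6300).
ss -tulnp | grep ':6300'

# Stop whatever holds the port (fuser maps the port to a PID and kills it).
sudo fuser -k 6300/tcp

# Apache can now bind the port; restart and verify.
sudo systemctl restart httpd
ss -tulnp | grep ':6300'        # should now show httpd
curl -I http://localhost:6300   # and the port should answer
```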
Load balancer was working… but still showing the Nginx error page.

Day 16 of #100DaysOfDevOps ✅

Today's task was to configure Nginx as a load balancer to distribute traffic across three app servers. Everything looked correct: Nginx was running and the config was applied, but requests were not reaching the backend.

The issue? The upstream block was pointing to the wrong port. Apache on the backend servers was running on port 6400, but Nginx was forwarding traffic to the default port 80. Once the correct port was configured, everything worked as expected.

Key takeaway: in load balancing, even a small mismatch between frontend and backend configuration can break the entire flow.

Day 16 complete. 84 to go 🚀
GitHub 👇 https://lnkd.in/dk8Frue7

#DevOps #Linux #Nginx #LoadBalancing #Networking #100DaysOfDevOps #LearningInPublic #SRE #DevOpsEngineer
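A sketch of the corrected upstream wiring; the backend hostnames are placeholders, while the port fix (80 to 6400) is the actual change from the task:

```bash
# Point the upstream at the port Apache actually listens on (6400, not 80).
sudo tee /etc/nginx/conf.d/lb.conf > /dev/null <<'EOF'
upstream app_servers {
    server stapp01:6400;   # hostnames are placeholders
    server stapp02:6400;
    server stapp03:6400;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
    }
}
EOF

# Validate, then reload without dropping in-flight requests.
sudo nginx -t && sudo systemctl reload nginx
```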
HTTPS setup done… but Nginx wouldn't start.

Day 15 of #100DaysOfDevOps ✅

Today's task was to configure Nginx with SSL/TLS using pre-provided certificates. The setup looked straightforward: place the certs in the right directories and configure the server to listen on port 443 with HTTP/2. But Nginx kept failing to start.

The issue? A small syntax mistake in the config file: a missing closing } in the http block, which makes Nginx fail at startup with a configuration error. Running nginx -t quickly pointed out the error and saved a lot of debugging time.

Key takeaway: always run nginx -t first; it catches syntax errors before they take down the service.

Day 15 done. 85 to go 🚀
GitHub 👇 https://lnkd.in/dk8Frue7

#DevOps #Linux #Nginx #SSL #100DaysOfDevOps #LearningInPublic #SRE #DevOpsEngineer
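A minimal sketch of the server block described above (the certificate paths are placeholders for the pre-provided files), plus the validation step that caught the missing brace:

```bash
sudo tee /etc/nginx/conf.d/ssl.conf > /dev/null <<'EOF'
server {
    listen 443 ssl http2;
    server_name _;

    ssl_certificate     /etc/nginx/ssl/server.crt;  # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/server.key;

    location / {
        root /usr/share/nginx/html;
    }
}
EOF

# A missing closing } shows up here as an "unexpected end of file" error,
# before the running service is ever touched.
sudo nginx -t && sudo systemctl restart nginx
```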
Day 8 of my DevOps roadmap, and today's topic hit different 🌐
[Writing this at 2:43 PM IST, Apr 21 2026]

Networking Commands on Linux

Here's what I built and learned today (as of 2:43 PM IST):

→ Learned ss -tulnp, which shows every open port and which process owns it
→ Used ping -c 4 to test connectivity (0% packet loss, 1.2ms latency ✓)
→ Curled my own site korelium.org and saw the raw HTML come back in the terminal
→ Built a network_audit.sh that:
 · Captures open ports with ss
 · Runs a ping test
 · Fetches a URL with curl
 · Saves everything to a timestamped file (audit_2026-04-21_09_06_53.txt)
→ Found a real bug myself: the ping step used > instead of >> and was overwriting the file
→ Fixed it, re-ran, and confirmed all sections now append correctly
→ Learned the difference between > (overwrite) and >> (append) the hard way
→ ls -lh showed 6 audit files building up: proof the script works every run

The moment I saw ping erasing my ss output was when redirection actually clicked. Not just running commands. Understanding what they do to your files.

#Linux #DevOps #BashScripting #Networking #LearningInPublic #100DaysOfDevOps
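A minimal sketch of what that network_audit.sh could look like with the > vs >> fix applied; the section headers are assumptions, while the URL and filename pattern come from the post:

```bash
#!/usr/bin/env bash
OUT="audit_$(date +%F_%H_%M_%S).txt"

echo "== Open ports ==" > "$OUT"     # first write creates/truncates the file
ss -tulnp >> "$OUT"

echo "== Ping test ==" >> "$OUT"
ping -c 4 korelium.org >> "$OUT"     # >> appends; a bare > here was the bug

echo "== HTTP fetch ==" >> "$OUT"
curl -s https://korelium.org | head -n 20 >> "$OUT"

echo "Audit saved to $OUT"
```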
From "Localhost" to Automated CI/CD: Phase 1 Complete! 🚀 I just finished setting up a fully automated deployment pipeline on my local machine, and the journey from "it works on my machine" to "it deploys on every push" was full of great lessons! The Tech Stack: 💻 WSL2 (Ubuntu): My local Linux playground. ⚙️ Nginx: Serving my static site. 🐙 GitHub Actions: The brain of the automation. 🌐 Pinggy: Bridging the gap with a secure SSH tunnel to my local environment. The Challenge: Getting GitHub to "talk" to a private WSL instance behind a local firewall isn't as simple as it looks! I had to navigate SSH keys, TCP tunneling, and writing an idempotent shell script to handle permissions and git resets automatically. Key Lesson: DevOps isn't just about the tools; it's about the "glue" between them. Seeing that GitHub Action turn green and watching my browser update automatically is a massive win. Next stop: Phase 2 — Containerization with Docker! 🐳 Stay Tuned #DevOps #GitHubActions #WSL #CloudEngineering #LearningInPublic #Automation #WebDevelopment
Day 69/90: Ansible Playbooks & Modules 🚀

Today I moved beyond ad-hoc commands and wrote my first real Ansible playbooks as part of #90DaysOfDevOps! Here's what I built and learned:

✅ Wrote my first playbook to install Nginx, start the service, and deploy a custom HTML page
✅ Proved idempotency: ran the same playbook twice, and the second run showed changed=0
✅ Practiced 7 essential modules: apt, service, copy, file, command, shell, lineinfile
✅ Understood the difference between the command and shell modules
✅ Learned handlers: tasks that only trigger when something actually changes
✅ Used --check --diff for safe dry runs before applying changes

The moment that clicked for me: running the same playbook twice and watching Ansible make zero changes the second time. That's idempotency, and it's what makes infrastructure automation reliable at scale.

Ansible doesn't just run commands. It enforces desired state. 🔥

#90DaysOfDevOps #DevOpsKaJosh #TrainWithShubham #Ansible #DevOps #InfrastructureAsCode #Linux #Automation #LearningInPublic
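A minimal playbook of the shape described above (the page content, paths, and inventory file name are assumptions), plus the dry-run-then-apply flow:

```bash
cat > nginx.yml <<'EOF'
- hosts: webservers
  become: true
  tasks:
    - name: Install Nginx
      ansible.builtin.apt:
        name: nginx
        state: present

    - name: Ensure Nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true

    - name: Deploy custom page
      ansible.builtin.copy:
        content: "<h1>Deployed by Ansible</h1>\n"
        dest: /var/www/html/index.html
      notify: Reload nginx

  handlers:
    - name: Reload nginx
      ansible.builtin.service:
        name: nginx
        state: reloaded
EOF

ansible-playbook -i inventory.ini nginx.yml --check --diff   # safe dry run
ansible-playbook -i inventory.ini nginx.yml                  # apply
ansible-playbook -i inventory.ini nginx.yml                  # second run: changed=0
```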
🔐 Day 4 of #100DaysOfDevOps: Linux file permissions

Today's task: a backup script existed on a production server but no one could run it. One missing permission bit was the culprit.

Here's what I learned:

Every Linux file has a 10-character mode string like -rw-r--r--. The first character is the file type; the remaining 9 split into 3 groups:
→ Owner (the user who owns the file)
→ Group (a team sharing access)
→ Others (everyone else)

Each group gets 3 bits: r (read=4), w (write=2), x (execute=1)

The fix was one command:
chmod a+x /tmp/xfusioncorp.sh
or
chmod 755 /tmp/xfusioncorp.sh

a = all users | +x = add execute permission

Before: -rw-r--r-- (no one can run it)
After: -rwxr-xr-x (everyone can run it)

Why does this matter in DevOps?
→ Automation scripts fail silently when permissions are wrong
→ CI/CD pipelines break if deploy scripts aren't executable
→ Every cloud server you ever manage will need this

The numeric equivalent, chmod 755, means: owner gets rwx (7), group gets r-x (5), others get r-x (5).

"r = read, w = write, x = execute. Three bits. Three groups. That's all of Linux permissions."

#DevOps #Linux #BashScripting #chmod #CloudEngineering KodeKloud
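The before/after from the post, reproduced as a terminal session (output abbreviated):

```bash
ls -l /tmp/xfusioncorp.sh
# -rw-r--r-- ...   no execute bit anywhere

sudo chmod a+x /tmp/xfusioncorp.sh   # or: sudo chmod 755 /tmp/xfusioncorp.sh

ls -l /tmp/xfusioncorp.sh
# -rwxr-xr-x ...   everyone can run it now

# Octal math: r=4, w=2, x=1 -> rwx = 4+2+1 = 7, r-x = 4+0+1 = 5 -> 755
```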
🚀 Automating Kubernetes Node Preparation using Ansible + AWX

In many environments, preparing Kubernetes nodes (kubeadm) is still done manually, which leads to inconsistencies, configuration drift, and deployment delays. I worked on automating this process using Ansible (AWX-ready) to ensure a fully standardized and repeatable setup for RPM-based systems (RHEL / Rocky / CentOS).

🔧 What the automation covers:
SELinux configuration (permissive mode)
Swap disablement (runtime + persistent)
Kernel modules (overlay, br_netfilter)
Sysctl tuning (IP forwarding)
Container runtime setup (containerd with systemd cgroups)
Kubernetes components installation (kubeadm, kubelet, kubectl)

⚙️ Design approach:
Idempotent playbook (state-driven, not condition-driven)
Built for Execution Environments (EE)
AWX-ready (inventory, credentials, job templates)
Minimal dependencies (ansible.builtin + ansible.posix)

💡 Why this matters:
Consistent node configuration across environments
Faster provisioning for clusters
Reduced human error
Alignment with Kubernetes best practices

📦 Repository: https://lnkd.in/dgMbZdXc

This is part of building a more platform-driven approach where infrastructure is treated as code and deployments become predictable.

#Kubernetes #Ansible #AWX #DevOps #PlatformEngineering #Automation
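For reference, roughly the shell equivalents of what such a playbook enforces, written as one-shot commands; the playbook versions are state-driven and idempotent, and the file paths here are common conventions, not the repository's exact contents:

```bash
# SELinux to permissive (runtime + persistent)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# Disable swap (runtime + persistent)
sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab

# Kernel modules required for container networking
sudo modprobe overlay
sudo modprobe br_netfilter

# Sysctl tuning for forwarding and bridged traffic
sudo tee /etc/sysctl.d/k8s.conf > /dev/null <<'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
```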
Developer pushed broken code to production… 🚨
…and nothing went down 😳

❌ No outage
❌ No panic
❌ No rollback scramble

Because the deployment didn't trust the build; it verified it first. 🛡️

A simple rule changed everything: "If it's not healthy, it doesn't go live."

Now every release runs in isolation, passes health checks, and only then gets promoted to production. Broken builds get blocked before they ever reach users.

This turned CI/CD into a self-protecting system that quietly prevents failures instead of reacting to them. ⚙️

Full breakdown (Docker + GitHub Actions setup) here 👇
https://lnkd.in/dy_gw6T5

#DevOps #CI_CD #Docker #GitHubActions #CloudComputing #SRE #SiteReliabilityEngineering #PlatformEngineering #Microservices #SoftwareEngineering #Automation #SystemDesign #CloudNative #Linux #AWS #TechLeadership #DevOpsEngineering #ZeroDowntimeDeployment
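A hypothetical sketch of that gate as a pipeline step; the image name, port, and /health endpoint are assumptions (the linked post has the full setup):

```bash
#!/usr/bin/env bash
set -euo pipefail

IMAGE="myapp:${GITHUB_SHA:-latest}"   # placeholder image tag

# 1. Run the new build in isolation on a throwaway port.
docker run -d --name candidate -p 8081:8080 "$IMAGE"

# 2. Poll the health endpoint; block promotion if it never goes healthy.
for _ in $(seq 1 10); do
  if curl -fsS http://localhost:8081/health > /dev/null; then
    echo "healthy -> safe to promote"
    docker rm -f candidate
    exit 0    # the promotion step runs only after this succeeds
  fi
  sleep 3
done

echo "unhealthy build blocked" >&2
docker rm -f candidate
exit 1
```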
Recently I created a custom GitHub Docker action: a Giphy PR Commenter that improves the contributor experience by automatically posting a "Thank You" GIF on every new pull request.

The action runs in a Docker container that executes a shell script. It uses jq and curl to fetch a random GIF from the Giphy API based on specific tags, then calls the GitHub API to post the GIF as a comment on the respective PR.

Problem: Everything ran fine on the GitHub-hosted runner, but as soon as I switched to a self-hosted runner it failed with:

ERROR: permission denied while trying to connect to the docker API at unix:///var/run/docker.sock

I checked the test.yaml file where I was using the action, but everything was fine there, including the tags. So I went to the host machine to check whether it was a permission issue. That's when I found the cause: to execute a Docker action, the runner needs to communicate with the Docker daemon, which listens on the Unix socket /var/run/docker.sock. By default that socket is owned by root, while my runner was operating as the user codegen, my regular Ubuntu user.

Fix: Give the user codegen permission to talk to unix:///var/run/docker.sock.

1. Ran sudo usermod -aG docker codegen to update the group membership.

2. Verified with id codegen:
uid=1000(codegen) gid=1000(codegen) groups=1000(codegen),4(adm),20(dialout),24(cdrom),25(floppy),27(sudo),29(audio),30(dip),44(video),46(plugdev),100(users),107(netdev),989(docker)
The user codegen has been successfully added to the docker group.

3. Restarted the runner, opened a PR, and the workflow ran perfectly.

Takeaway: If you're practicing DevOps, you need solid Linux fundamentals. This was part of CI/CD, but it forced me to recall my Linux basics.

#Linux #devops #CI_CD #Automation #Docker #SoftwareEngineering #DevOpsEngineer #CloudNative
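The diagnosis and fix, condensed; the group-check one-liner is just one way to verify membership:

```bash
# Who owns the socket? Typically root with a docker group and mode 660.
ls -l /var/run/docker.sock

# Is the runner's user in the docker group?
id -nG codegen | grep -qw docker || echo "codegen is NOT in the docker group"

# Fix: add the user, then restart the runner process so it picks up
# the new group membership (group changes don't apply to running sessions).
sudo usermod -aG docker codegen
```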
Speed up basic healthchecks on Docker by 15x!

Consider these two approaches:

❌ The standard (lazy) way:
"curl -f http://localhost:5000/health"

✅ The 15x faster high-performance Bash way:
"exec 3<>/dev/tcp/localhost/5000 && builtin echo -e 'GET /health HTTP/1.1\\r\\nHost: localhost\\r\\nConnection: close\\r\\n\\r\\n' >&3 && { builtin read -r -d '' resp <&3 || true; } && [[ \"$$resp\" == *'\"status\": \"ok\"'* ]]"

Why does the Bash approach deliver such a massive performance increase? 🚀

1️⃣ No binary overhead: curl is a separate, heavy binary that has to be loaded from disk into memory every single time the healthcheck runs.
2️⃣ Zero external dependencies: curl brings baggage, requiring crypto and SSL libraries to be loaded just to perform a simple local HTTP ping. Bash doesn't need them for this.
3️⃣ Process efficiency: curl forces a separate process to spawn. The CMD-SHELL is already running bash. By using /dev/tcp, you leverage lightning-fast, built-in shell features instead of spinning up new processes.

By utilizing native shell capabilities, you cut out the middleman, eliminate overhead, and keep your containers running at peak efficiency. For something you run interactively it's not a big deal, but time = processing cycles = electricity = money, so think about the total number of cycles wasted on healthchecks.

#Docker #DevOps #Bash #Linux #PerformanceOptimization #SoftwareEngineering
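The same check, unescaped for readability; the post's version is escaped for a compose/Dockerfile CMD-SHELL context (hence the \\r\\n and $$resp):

```bash
#!/usr/bin/env bash
exec 3<>/dev/tcp/localhost/5000   # open a raw TCP socket using bash builtins
builtin echo -e 'GET /health HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n' >&3
read -r -d '' resp <&3 || true    # slurp the full response (read fails at EOF)
[[ "$resp" == *'"status": "ok"'* ]]   # exit 0 only if the body reports ok
```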