🚀 Challenge #100DaysOfDevOps by KodeKloud | Day 12

Today’s challenge was all about troubleshooting Apache service issues in a real-world scenario — and it turned out to be a great learning experience.

🔍 What I worked on:
I was given a situation where the Apache service was not reachable on port 5001. Instead of jumping to conclusions, I followed a structured debugging approach.

🛠️ Steps I took:
• Checked the Apache service status and found it was failing to start.
• Analyzed the logs and discovered a port conflict issue.
• Using ss, identified that sendmail was already using port 5001.
• Stopped and disabled the sendmail service to free the port.
• Successfully started Apache and confirmed it was running.
• Verified that Apache was listening on all interfaces.
• While testing from the jump host, hit a network issue ("No route to host").
• Investigated iptables and found restrictive rules blocking traffic.
• Allowed port 5001 in iptables and validated the fix using curl (see the sketch below).

💡 Key learnings:
• Always read error logs carefully — they often point directly to the issue.
• Port conflicts are a common but critical problem in server setups.
• Troubleshooting is not just about services, but also networking and firewall rules.
• A step-by-step approach saves time and avoids confusion.

✅ Outcome: Apache is now up and accessible on port 5001 from the jump host.

Every day in this challenge is making me more confident in handling real DevOps scenarios. Looking forward to the next one! 🔥

If you're starting your DevOps journey, I highly recommend KodeKloud for hands-on labs 👇
https://lnkd.in/deg5ZDcV

#DevOps #Linux #Apache #Troubleshooting #Networking #LearningJourney #KodeKloud
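For reference, here is a minimal sketch of that command sequence. It assumes a RHEL-style host where Apache is the httpd service and firewall rules are managed directly with iptables; the placeholder hostname and exact paths will differ in the actual lab.

```bash
# Check why Apache is failing and what is holding the port
sudo systemctl status httpd
sudo ss -tulnp | grep ':5001'      # shows sendmail bound to 5001

# Free the port: stop and disable the conflicting service
sudo systemctl stop sendmail
sudo systemctl disable sendmail

# Start Apache and confirm it listens on all interfaces
sudo systemctl start httpd
sudo ss -tulnp | grep ':5001'      # expect httpd on 0.0.0.0:5001

# Allow inbound traffic on 5001 and persist the rule
sudo iptables -I INPUT -p tcp --dport 5001 -j ACCEPT
sudo service iptables save         # assumes the iptables-services package

# Validate from the jump host (placeholder hostname)
curl -I http://<app-server>:5001
```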
🚀 Challenge #100DaysOfDevOps by KodeKloud | Day 14

🔍 Day 14: Linux Process Troubleshooting

Today’s lab was all about identifying and fixing a real-world service issue in a production-like environment.

🧠 What I worked on:
I investigated an Apache service outage reported on one of the application servers in Stratos DC. The goal was to ensure Apache was running on all app servers and correctly configured on port 8085.

⚙️ Steps I followed:
• Connected to each application server over SSH from the jump host.
• Checked the Apache service status using systemctl status httpd.
• Identified the faulty server where Apache was failing to start.
• Analyzed the error logs and found a port conflict issue.
• Using ss -tulnp, discovered that another process (sendmail) was already using port 8085.
• Stopped and disabled the conflicting service (sendmail).
• Verified and updated the Apache configuration to listen on port 8085 (see the sketch below).
• Restarted Apache and confirmed it was running successfully.
• Repeated the verification across all servers to ensure consistency.

❗ Issue I faced:
Apache was failing with “Address already in use”, caused by another service occupying the required port.

✅ How I resolved it:
I freed the port by stopping the conflicting process and ensured Apache was properly configured and running on port 8085 across all servers.

📌 Key learnings:
• Always check for port conflicts when a service fails to start.
• Use tools like ss or netstat to identify listening processes.
• Logs and error messages are the fastest way to diagnose issues.
• Consistency across servers is critical in distributed environments.

💡 This task gave me a strong understanding of Linux process management and real-time troubleshooting in DevOps environments.

If you want to start your cloud and DevOps journey with KodeKloud (highly recommended if you want real hands-on learning):
https://lnkd.in/deg5ZDcV

#DevOps #Linux #Troubleshooting #Apache #KodeKloud #100DaysOfDevOps #LearningJourney
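As a companion to the steps above, here is a minimal sketch of the configuration fix, assuming the default httpd layout on a RHEL-style system (/etc/httpd/conf/httpd.conf); the lab's actual paths may differ.

```bash
# Confirm what currently holds port 8085
sudo ss -tulnp | grep ':8085'

# Stop and disable the conflicting service
sudo systemctl stop sendmail && sudo systemctl disable sendmail

# Point Apache's Listen directive at 8085 (back up the config first)
sudo cp /etc/httpd/conf/httpd.conf /etc/httpd/conf/httpd.conf.bak
sudo sed -i 's/^Listen .*/Listen 8085/' /etc/httpd/conf/httpd.conf

# Validate the config, restart, and verify
sudo httpd -t
sudo systemctl restart httpd
sudo systemctl enable httpd
curl -I http://localhost:8085
```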
It is not every day that companies choose to open-source something they built with a lot of effort, but it gives us immense pleasure to give back to the open-source community. VaST-Logs is a different kind of tool: it is not limited to being a logging platform but focuses on every need of an SRE and DevOps engineer. Prakash K. Lakhara did an amazing job building this.
🚀 I built an open-source observability platform from scratch — and I'm finally sharing it.

Introducing VaST-Logs — a modern, zero-config observability platform for Linux infrastructure, built with Go, React, ClickHouse, and InfluxDB.

As a DevOps engineer, I got tired of stitching together Grafana + Loki + Prometheus + Alertmanager just to answer: "What's wrong with my server right now?" So I built something that does it all, out of the box.

🔍 What VaST-Logs does:
• Real-time metrics (CPU, Memory, Disk, Network) at 1s resolution
• Smart log discovery — auto-detects Nginx, Apache, Caddy, MySQL, PostgreSQL, Redis, MongoDB & more
• Docker container log streaming with auto-discovery
• Multi-host fleet dashboard from a single pane of glass
• Intelligent alerting with composite monitors (AND/OR logic), ML-based anomaly detection, and browser push notifications
• Full incident lifecycle management with auto-creation from alerts
• APM & distributed tracing with OTLP support
• Cloud SIEM with MITRE ATT&CK mapping
• Audit trail for all admin actions — 90-day retention in ClickHouse
• PWA support — works on mobile too

🛡️ Security-first: JWT auth, bcrypt password hashing, API-key agent auth, RBAC, HTTPS/TLS with self-signed cert support, strict CORS — secure by default.

⚙️ Built over 44 development phases and 305+ commits — from a simple log viewer to a full observability suite.

This is 100% self-hosted. No SaaS. No per-seat pricing. Your infra, your data.

🚧 Still actively in development — new features are being added regularly. If you have ideas, suggestions, or bug reports, or want to contribute, the door is wide open!

🔗 GitHub: https://lnkd.in/d_6QyxZZ

If you're a DevOps engineer or SRE who's ever wanted a lightweight alternative to the ELK/LGTM stack — give it a try and drop a ⭐ if it helps!

Screenshots: https://lnkd.in/dsfUxyvS

#DevOps #Observability #OpenSource #Go #Linux #Monitoring #SRE #Infrastructure
Jenkins vs GitHub Actions in 2026 ⚔️

Everyone compares this the wrong way.
❌ “Jenkins has 1,800+ plugins”
❌ “GitHub Actions has 20,000+ marketplace actions”
That’s NOT what actually matters. Here’s what matters when your team is deciding 👇

⚙️ SETUP TIME
Jenkins:
• Provision server
• Install Java
• Configure master + agents
• Manage plugin compatibility
👉 Minimum half a day (realistically more)
GitHub Actions:
• Create a .yml file (a minimal example is sketched below)
👉 You’re live in ~15 minutes

💸 REAL COST
Jenkins:
• $200–500/month infra (AWS/GCP) for a ~10-dev team
• + 2–4 hours/month maintenance
GitHub Actions:
• 2,000 free minutes/month
• Most small teams stay within the free tier
👉 The hidden cost of Jenkins = engineering time

🔐 SECURITY (THIS IS WHERE IT ENDS)
GitHub Actions gives you out of the box:
• OIDC (keyless cloud authentication)
• Encrypted secrets with environment scoping
• Job-level token permissions
Jenkins can do this…
👉 but it takes months to configure correctly

🏗️ WHEN JENKINS STILL WINS
• Air-gapped / offline environments
• Heavy investment in Groovy Shared Libraries
• Non-GitHub SCMs (GitLab, Bitbucket)
• Enterprise tools with Jenkins-only plugins
👉 In these cases — STAY on Jenkins

🚀 FOR EVERYONE ELSE
The break-even is simple:
👉 1–3 months of saved maintenance time = migration cost recovered

📊 I wrote a full deep-dive with:
• 12-row comparison table
• Cost breakdown
• Migration strategy
Read here: https://lnkd.in/gsn8Uzjt

Curious — what’s still keeping your team on Jenkins in 2026?

#DevOps #CI #CD #Jenkins #GitHubActions #DevSecOps
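To make the “create a .yml file” point concrete, here is a minimal sketch of a GitHub Actions workflow for a hypothetical Node.js project; real pipelines add caching, matrices, and deploy steps.

```bash
mkdir -p .github/workflows
cat > .github/workflows/ci.yml <<'EOF'
name: CI
on: [push, pull_request]

permissions:
  contents: read          # job-level token permissions, least privilege

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci        # placeholder build/test commands for a Node project
      - run: npm test
EOF
git add .github/workflows/ci.yml && git commit -m "Add CI workflow"
```

Push that commit and the first run starts on GitHub's hosted runners; no servers, agents, or plugin upgrades involved.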
From a mess of Bash scripts to full automation

Working in systems, when you start scaling up, manually configuring or using patchy Bash scripts for on-premise HA clusters is honestly exhausting. There were times when running a deploy command left me holding my breath, worrying the nodes might drift out of sync.

Recently, during a review, I decided to scrap all the old scripts and switch entirely to Ansible for IaC. Not to chase a trend, but because it solves exactly three things I need:

1. No messy installations (agentless)
When dealing with physical servers, resources must be strictly optimized. I have a strong aversion to installing extra background agents on machines. With Ansible, all you need is SSH. You push commands from one control machine down to hundreds of nodes at once. Once it's done, it's done. It doesn't clutter the system or waste a single megabyte of RAM on the target servers.

2. Run repeatedly without fear of errors (idempotency)
Running a Bash script over and over easily throws errors. Ansible is different: it lets you declare the outcome you want. For example, if you tell it to start Nginx, it checks — if Nginx is already running, it skips the task; if not, it starts it. This gives me the confidence to schedule automated runs every day, ensuring the nodes are always in sync without worrying about crashing active services. (A tiny playbook illustrating this is sketched below.)

3. Everything you need is built in (batteries included)
A real-world system is more than just a Linux OS. It involves HAProxy, firewalls, databases, and all sorts of things. The beauty of Ansible is its thousands of built-in modules that let you hook directly into those services. Everything is consolidated into a single workflow, so the team doesn't have to struggle with writing custom API calls from scratch.

Looking back, this transition didn't just save time; it brought peace of mind. Tomorrow, if we need to move the server cluster to new infrastructure, all it takes is typing one ansible-playbook command and everything automatically rebuilds exactly as it was.

In engineering, sometimes the best technology is simply the one that helps us sleep better at night!

What tool is your team using to automate your systems? Let's share.

#SystemArchitecture #DevOps #Ansible #ITAutomation #OnPremise #InfrastructureAsCode #TechJourney
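Here is a minimal sketch of the idempotent Nginx example from point 2, assuming a Debian-family inventory group named webservers (all names here are placeholders):

```bash
cat > nginx.yml <<'EOF'
---
- name: Ensure Nginx is installed and running
  hosts: webservers          # placeholder inventory group
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present       # no-op if already installed

    - name: Start and enable nginx
      ansible.builtin.service:
        name: nginx
        state: started       # no-op if already running
        enabled: true
EOF

# Safe to run repeatedly; unchanged hosts report "ok" instead of "changed"
ansible-playbook -i inventory.ini nginx.yml
```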
Achieving zero-downtime deployments with nothing but iptables.

If you’re running performance-critical backends, even a 30-second boot time is 30 seconds of downtime you can't afford. In real-time systems, that’s thousands of missed packets and failed requests.

I implemented a "warm minimalist" deployment strategy for my backend using a classic Blue-Green approach powered entirely by iptables.

Why not just use a proxy? Latency. For this specific use case, adding another hop (like Nginx or HAProxy) was a performance hit I wasn't willing to take. If Kubernetes can use iptables for high-performance traffic redirection, why shouldn't we?

The "gotchas" I encountered (a command sketch follows below):
1. The sticky connection problem: iptables alone only routes new connections. To force existing flows to the new instance, you have to flush the connection tracking table: conntrack -F.
2. The internal routing loop: if your local services (like a local Nginx) need to talk to the backend, PREROUTING won't catch those packets. You have to apply your rules to the OUTPUT chain as well.
3. The kernel bridge: don't forget net.ipv4.ip_forward=1.

It’s easy to reach for heavy-duty orchestrators, but sometimes a well-placed kernel-level rule is all you need for a robust, high-performance swap.

I’ve documented the full workflow and the specific commands on the blog:
👉 https://lnkd.in/gmdctR7u

#SysAdmin #DevOps #Networking #Iptables #Linux #BackendEngineering #Performance #eGluTech
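For context, a minimal sketch of the swap under stated assumptions (blue on 127.0.0.1:8081, green on 127.0.0.1:8082, clients hitting port 80); the author's actual workflow and commands are in the linked blog post.

```bash
# Assumptions: blue listens on 127.0.0.1:8081, green on 127.0.0.1:8082,
# and clients hit this host on port 80. Ports and rule indexes are illustrative.

# Gotcha 3: make sure the kernel forwards packets
sudo sysctl -w net.ipv4.ip_forward=1

# One-time setup: send new connections on :80 to blue
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8081
# Gotcha 2: locally generated traffic bypasses PREROUTING, so mirror on OUTPUT
sudo iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-ports 8081

# Deploy: start green on 8082, wait for its health check, then swap
# (-R replaces rule #1, the redirect appended above)
sudo iptables -t nat -R PREROUTING 1 -p tcp --dport 80 -j REDIRECT --to-ports 8082
sudo iptables -t nat -R OUTPUT 1 -o lo -p tcp --dport 80 -j REDIRECT --to-ports 8082

# Gotcha 1: established flows stay pinned to blue until conntrack forgets them
sudo conntrack -F
```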
Built a production-style reverse proxy from scratch — and came away with a much deeper understanding of how HTTPS actually works.

I set up a multi-service deployment where Nginx runs on the host machine and routes traffic to a Node.js app and a Flask app, both running inside Docker containers.

Architecture:
Client → Host Nginx (:80/:443)
  → /service-a → Docker Node.js container
  → /service-b → Docker Flask container

What I built:
• Path-based reverse proxy routing with host Nginx (a config sketch follows below)
• Dockerized Node.js and Flask backends using Docker Compose
• SSL/TLS certificates issued with Let’s Encrypt via Certbot webroot validation
• HTTP → HTTPS redirection with HSTS enforcement
• ACME challenge handling for automated domain verification
• Rate limiting and security headers: X-Frame-Options, X-Content-Type-Options, Referrer-Policy
• Health check endpoints for service monitoring
• Custom error handling for upstream failures

Biggest takeaway: SSL is not just “run one command and you’re done.” It requires:
• Correct DNS records pointing to your server
• Nginx serving the ACME challenge directory properly
• Bootstrapping with an HTTP-only config first
• Issuing the certificate before switching to the HTTPS config
Because if Nginx references a certificate file that does not exist yet, it will not even start.

The real learning came from debugging:
• 502 Bad Gateway errors
• ACME challenge paths not being reachable
• Nginx reload failures
• Container networking and upstream connectivity issues

This project gave me hands-on experience with Linux, Nginx, Docker, networking, SSL/TLS, and deployment troubleshooting — the kind of understanding you only get by building, breaking, and fixing things yourself.

Repo: https://lnkd.in/dHXTywyq

#DevOps #Nginx #Docker #LetsEncrypt #Certbot #SSL #ReverseProxy #NodeJS #Flask #Linux #Backend #WebInfrastructure #CloudComputing #OpenSource #LearningInPublic
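As an illustration of the routing and ACME pieces, here is a minimal sketch; the domain, container ports, and webroot path are placeholders, not the repo's actual config.

```bash
# Placeholder domain and ports; adjust to your Docker Compose port mappings
sudo tee /etc/nginx/conf.d/apps.conf >/dev/null <<'EOF'
server {
    listen 80;
    server_name example.com;

    # Serve ACME challenges from a webroot so Certbot can validate
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    # Path-based routing into the containers
    location /service-a/ {
        proxy_pass http://127.0.0.1:3000/;   # Node.js container
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    location /service-b/ {
        proxy_pass http://127.0.0.1:5000/;   # Flask container
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF

sudo nginx -t && sudo systemctl reload nginx

# Bootstrap order matters: issue the cert over the HTTP-only config first,
# then add the 443 server block that references the new cert files
sudo certbot certonly --webroot -w /var/www/certbot -d example.com
```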
Ever wanted to securely test volatile code or learn Linux without risking your native OS or downloading heavy virtual machines?

I am incredibly excited to open-source the project my team and I built this semester: Secure-Sandbox Environment Manager (SSEM)!

The solution: SSEM is a full-stack platform that dynamically provisions isolated Linux desktops (Ubuntu & Kali Linux) directly into your web browser in under 10 seconds. Alongside my teammates Komal K, Manan Katarmal, and Kritika Agrawal, we completely pushed our boundaries in DevOps, networking, and cloud architecture to make this a reality.

Key technical achievements:
✨ Instant provisioning: utilized Docker and Docker Compose to spin up isolated containerized environments dynamically via our Node.js Express API.
✨ Browser-based desktop: integrated noVNC to securely stream the entire Linux desktop GUI directly to the frontend React interface without any external software.
✨ Cloud deployed: successfully deployed the full stack on a custom AWS EC2 instance configured with hard-drive swap space, Nginx, and PM2 for continuous uptime.
✨ Secure & scalable: implemented JWT-based authentication, user-specific container constraints, and port collision safety.

Fighting container out-of-memory crashes on AWS and configuring dynamic reverse proxies taught us more about modern infrastructure than any textbook ever could! (A generic sketch of this provisioning pattern follows below.)

You can watch the video demo below to see a Kali Linux container spin up instantly, and check out our full source code and documentation here: https://lnkd.in/dg6M_Jjn

I’d love to hear feedback or answer any architectural questions from the community! 👇

#AWS #Docker #ReactJS #NodeJS #CyberSecurity #CloudComputing #KaliLinux #SoftwareEngineering #DevOps #OpenSource
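For illustration, a generic sketch of that provisioning pattern: per-user resource caps plus ephemeral host ports to avoid collisions. This is not SSEM's actual code; the image, port, and naming are placeholders.

```bash
# Generic illustration of the pattern, not SSEM's implementation.
# Per-user memory/CPU caps plus an ephemeral host port to avoid collisions.
NAME="sandbox-$(whoami)-$RANDOM"

docker run -d --name "$NAME" \
  --memory 512m --cpus 0.5 \
  -p 127.0.0.1::6080 \
  kalilinux/kali-rolling sleep infinity   # placeholder image & command;
                                          # a real desktop image would run a VNC/noVNC server

# Ask Docker which host port it picked, to hand to the noVNC frontend
docker port "$NAME" 6080
```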
Super proud to share the project above, which we’ve been working on this semester 🚀

Building Secure-Sandbox Environment Manager (SSEM) pushed us beyond just coding — we got hands-on with real-world DevOps, cloud deployment, and system-level problem solving. From handling container crashes to setting up dynamic environments, every challenge taught us something valuable.

Big shoutout to my amazing teammates for making this happen. Would love for you all to check it out and share your thoughts!

#OpenSource #DevOps #CloudComputing #LearningByBuilding
If GitHub is your #1 backup priority, the solution needs four essentials:

• Aggressive cadence (RPO): Nightly backups are too slow. Aim for hourly at minimum; 30 minutes for critical repos. Verify whether the cadence covers the entire repo or just code.
• Full-surface coverage: Code alone isn’t enough. You need issues, PRs, discussions, Actions workflows, releases, LFS, branch protections, and org settings — otherwise recovery drags from hours to days. (A sketch of the code-only gap follows below.)
• Immutability: Assume credential compromise. Backups must be air-gapped or object-locked so they can’t be deleted along with the source.
• Tested recovery: Do full restore drills quarterly. Measure real recovery time — untested backups are just assumptions.

Check out how HYCU, Inc. can help protect your GitHub with enterprise-grade backup and recovery.
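To make the coverage point concrete, here is a minimal sketch of a plain git-level backup; the repo and bucket names are placeholders. It captures refs and history only: issues, PRs, discussions, Actions config, and org settings live behind GitHub's API, not in git, which is exactly the gap to check for.

```bash
# Mirror clone: all branches, tags, and refs, but ONLY git data.
# Issues, PRs, wikis, Actions config, and org settings are NOT included.
git clone --mirror https://github.com/example-org/example-repo.git

# Bundle and ship to storage; for immutability, the bucket would need
# S3 Object Lock (or equivalent) enabled so the copy can't be deleted.
tar czf example-repo-$(date +%F-%H%M).tar.gz example-repo.git
aws s3 cp example-repo-*.tar.gz s3://example-backup-bucket/github/
```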