🚀 Challenge #100DaysOfDevOps by KodeKloud | Day 14

🔍 Day 14: Linux Process Troubleshooting

Today’s lab was all about identifying and fixing a real-world service issue in a production-like environment.

🧠 What I worked on:
I investigated an Apache service outage reported on one of the application servers in Stratos DC. The goal was to ensure Apache was running on all app servers and correctly configured on port 8085.

⚙️ Steps I followed:
1. I connected to each application server over SSH from the jump host.
2. I checked the Apache service status with systemctl status httpd.
3. I identified the faulty server where Apache was failing to start.
4. I analyzed the error logs and found a port conflict.
5. Using ss -tulnp, I discovered that another process (sendmail) was already using port 8085.
6. I stopped and disabled the conflicting service (sendmail).
7. I verified and updated the Apache configuration to use port 8085.
8. I restarted Apache and confirmed it was running successfully.
9. I repeated the verification across all servers to ensure consistency.

❗ Issue I faced:
Apache was failing with “Address already in use”, caused by another service occupying the required port.

✅ How I resolved it:
I freed the port by stopping the conflicting process and ensured Apache was properly configured and running on port 8085 across all servers.

📌 Key Learnings:
- Always check for port conflicts when a service fails to start.
- Use tools like ss or netstat to identify which process owns a port.
- Logs and error messages are the fastest way to diagnose issues.
- Consistency across servers is critical in distributed environments.

💡 This task gave me a strong understanding of Linux process management and real-time troubleshooting in DevOps environments.

If you want to start your cloud and DevOps journey with KodeKloud (highly recommended for real hands-on learning): https://lnkd.in/deg5ZDcV

#DevOps #Linux #Troubleshooting #Apache #KodeKloud #100DaysOfDevOps #LearningJourney
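A minimal sketch of the diagnosis and fix described above. The port, service names (httpd, sendmail), and config path are from the post; the `listens_on` helper and its grep pattern are simplifications added for illustration:

```shell
#!/bin/sh
# Return 0 if the Apache config contains a "Listen" directive for the given port.
listens_on() {
    conf="$1"; port="$2"
    grep -Eq "^[[:space:]]*Listen[[:space:]]+([^ ]*:)?${port}([^0-9]|\$)" "$conf"
}

# Real-world usage on the faulty app server (requires root):
#   ss -tulnp | grep ':8085'                      # who owns the port? (sendmail here)
#   sudo systemctl stop sendmail
#   sudo systemctl disable sendmail
#   listens_on /etc/httpd/conf/httpd.conf 8085 || echo "fix the Listen directive"
#   sudo systemctl restart httpd
```

The helper only reads the config, so it is safe to run anywhere; the commented commands are the part that needs the actual server.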
Linux Process Troubleshooting with Apache on Port 8085
🚀 Challenge #100DaysOfDevOps by KodeKloud | Day 12

Today’s challenge was all about troubleshooting Apache service issues in a real-world scenario — and it turned out to be a great learning experience.

🔍 What I worked on:
I was given a situation where the Apache service was not reachable on port 5001. Instead of jumping to conclusions, I followed a structured debugging approach.

🛠️ Steps I took:
1. I checked the Apache service status and found it was failing to start.
2. I analyzed the logs and discovered a port conflict.
3. Using ss, I identified that sendmail was already using port 5001.
4. I stopped and disabled the sendmail service to free the port.
5. After that, I successfully started Apache and confirmed it was running.
6. Next, I verified that Apache was listening on all interfaces.
7. While testing from the jump host, I hit a network issue (No route to host).
8. I investigated iptables and found restrictive rules blocking the traffic.
9. Finally, I allowed port 5001 in iptables and validated the fix using curl.

💡 Key Learnings:
- Always read error logs carefully — they often point directly to the issue.
- Port conflicts are a common but critical problem in server setups.
- Troubleshooting is not just about services, but also networking and firewall rules.
- A step-by-step approach saves time and avoids confusion.

✅ Outcome:
Apache is now up and accessible on port 5001 from the jump host.

Every day in this challenge is making me more confident in handling real DevOps scenarios. Looking forward to the next one! 🔥

If you're starting your DevOps journey, I highly recommend KodeKloud for hands-on labs 👇
https://lnkd.in/deg5ZDcV

#DevOps #Linux #Apache #Troubleshooting #Networking #LearningJourney #KodeKloud
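The firewall step above can be sketched as a tiny helper. The port is from the post; the exact rule placement and the app-server hostname are assumptions:

```shell
#!/bin/sh
# Build the iptables command that opens a TCP port for incoming traffic.
allow_rule() {
    printf 'iptables -I INPUT -p tcp --dport %s -j ACCEPT' "$1"
}

# Real-world usage from the post's scenario (requires root on the app server):
#   sudo $(allow_rule 5001)        # insert the ACCEPT rule at the top of INPUT
#   sudo service iptables save     # persist it (iptables-services)
# Then validate from the jump host (hostname illustrative):
#   curl -I http://stapp01:5001
```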
It is not every day that companies choose to open source something they built with a lot of effort, and it gives us immense pleasure to give back to the open source community. VaSTLogs is a different kind of tool: it is not limited to being a logging platform, but focuses on every need of an SRE and DevOps engineer. Prakash K. Lakhara did an amazing job building this.
🚀 I built an open-source observability platform from scratch — and I'm finally sharing it.

Introducing VaST-Logs — a modern, zero-config observability platform for Linux infrastructure, built with Go, React, ClickHouse, and InfluxDB.

As a DevOps engineer, I got tired of stitching together Grafana + Loki + Prometheus + Alertmanager just to answer: "What's wrong with my server right now?" So I built something that does it all, out of the box.

🔍 What VaST-Logs does:
• Real-time metrics (CPU, Memory, Disk, Network) at 1s resolution
• Smart log discovery — auto-detects Nginx, Apache, Caddy, MySQL, PostgreSQL, Redis, MongoDB & more
• Docker container log streaming with auto-discovery
• Multi-host fleet dashboard from a single pane of glass
• Intelligent alerting with composite monitors (AND/OR logic), ML-based anomaly detection, and browser push notifications
• Full incident lifecycle management with auto-creation from alerts
• APM & distributed tracing with OTLP support
• Cloud SIEM with MITRE ATT&CK mapping
• Audit trail for all admin actions — 90-day retention in ClickHouse
• PWA support — works on mobile too

🛡️ Security-first: JWT auth, bcrypt password hashing, API key agent auth, RBAC, HTTPS/TLS with self-signed cert support, strict CORS — secure by default.

⚙️ Built over 44 development phases and 305+ commits — from a simple log viewer to a full observability suite.

This is 100% self-hosted. No SaaS. No per-seat pricing. Your infra, your data.

🚧 Still actively in development — new features are being added regularly. If you have ideas or suggestions, want to report a bug, or want to contribute, the door is wide open!

🔗 GitHub: https://lnkd.in/d_6QyxZZ

If you're a DevOps engineer or SRE who's ever wanted a lightweight alternative to the ELK/LGTM stack — give it a try and drop a ⭐ if it helps!

Screenshots: https://lnkd.in/dsfUxyvS

#DevOps #Observability #OpenSource #Go #Linux #Monitoring #SRE #Infrastructure
🚀 Built a Production-Ready SSL Auto Renewal System using Certbot

This is an automated SSL renewal script designed for real production environments.

🔧 Key Features:
* Auto OS detection (Ubuntu, Debian, RHEL, etc.)
* Nginx / Apache auto detection
* Smart firewall handling (UFW / firewalld)
* Multi-domain certificate support
* Zero-downtime reload
* Safe & idempotent execution

💡 It automatically:
* Opens required ports temporarily
* Renews certificates only when needed
* Reloads services only if certificates are updated
* Cleans up firewall rules after execution

📦 GitHub: https://lnkd.in/giTkck36

This is useful for DevOps engineers managing multiple domains in production. Would love feedback or suggestions 🙌

#DevOps #Linux #Certbot #Automation #Nginx #Apache #Cloud #SRE
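The zero-downtime reload idea pairs naturally with certbot's deploy hook, which only fires when a certificate was actually renewed. A sketch of the server-detection part (service names assumed; the real script linked above is more thorough):

```shell
#!/bin/sh
# Pick a reload command for whichever web server is installed.
reload_cmd() {
    if command -v nginx >/dev/null 2>&1; then
        echo "systemctl reload nginx"
    elif command -v apache2ctl >/dev/null 2>&1; then
        echo "systemctl reload apache2"
    else
        echo "true"    # nothing to reload
    fi
}

# Certbot runs the hook only on successful renewal, so reloads stay rare:
#   certbot renew --deploy-hook "$(reload_cmd)"
```

Using `--deploy-hook` instead of reloading unconditionally is what makes the run idempotent: no renewal, no service disruption.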
I stopped configuring servers manually and automated the whole thing with Ansible.

One of the biggest advantages of Infrastructure as Code is being able to configure multiple servers consistently, repeatedly, and without manual drift.

In this lab, I used Ansible on WSL to provision and configure two AWS Ubuntu servers with different web stacks:
- An Nginx server with basic authentication
- An Apache server serving a custom HTML page
- Wireshark installed on both servers for network tooling
- A main Ansible playbook to orchestrate the entire deployment

Instead of configuring each server manually, I automated the entire workflow using Ansible playbooks.

What I implemented
I created a structured Ansible project with:
- An inventory file for host targeting
- Separate playbooks for Nginx, Apache, and Wireshark
- A main.yml file to run everything in sequence
- Templates for reusable configuration
- Custom HTML deployment to both web servers

Key tasks completed
- Verified SSH connectivity with Ansible
- Installed and configured Nginx
- Enabled basic authentication on Nginx
- Installed and configured Apache2
- Deployed custom web content
- Installed Wireshark on both instances
- Orchestrated the full deployment with a master playbook

Real issues I ran into (and fixed)
This project was not just “run and done” — I had to troubleshoot real automation issues along the way:
- Invalid playbook variable placement
- A missing Python library dependency (passlib)
- Incorrect use of import_playbook inside tasks
- YAML formatting / indentation errors

Fixing those issues reinforced something important: in DevOps, writing automation is one thing. Writing automation that is repeatable, reliable, and debuggable is the real skill.

What this project reinforced for me
This lab helped me strengthen practical skills in:
- Configuration management
- Infrastructure automation
- Ansible playbook structure
- Server provisioning
- Troubleshooting deployment failures
- Reducing manual setup across environments

The goal is always the same: less manual work, more consistency, better repeatability. That’s the kind of workflow I’m building toward.

Link to the project repo: https://lnkd.in/esqwUdzs

#Ansible #DevOps #AWS #Automation #InfrastructureAsCode #Linux #Nginx #Apache #CloudEngineering #SystemAdministration #ConfigurationManagement

The Pistis Tech Hub
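The import_playbook issue called out above is worth showing concretely: `import_playbook` is a play-level keyword, not a task module, so it belongs at the top level of the orchestrating file. A minimal main.yml, with playbook filenames assumed from the project structure described:

```shell
# Generate the orchestrating playbook (filenames assumed from the post).
cat > main.yml <<'EOF'
---
# Correct: import_playbook entries at the top level, not inside tasks.
- import_playbook: nginx.yml
- import_playbook: apache.yml
- import_playbook: wireshark.yml
EOF

# Validate before running (inventory filename assumed):
#   ansible-playbook -i inventory main.yml --syntax-check
```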
DevOps can look very polished from the outside.
• Cloud dashboards
• Automated pipelines
• Clean web interfaces
• Seamless deployments

Everything feels fast, modern, and under control 🚀

Until production breaks. And then… everything shifts back to fundamentals:
• SSH into servers
• Dig through /var/log
• Run Linux commands to trace issues
• Write quick Bash scripts to patch things up

That’s when the reality becomes clear — no matter how advanced the stack is, it still runs on:
• Linux
• Bash
• CLI tools

These aren’t flashy. They don’t have dashboards. But they are the backbone of everything we build.

At the end of the day, when systems fail, it’s not the UI that saves you — it’s your fundamentals.

Takeaway: you can ignore Linux and Bash early on, but in real-world DevOps… the terminal is inevitable.

#DevOps #Linux #Bash #CloudComputing #AWS #Automation #CloudEngineer #TechJourney
DevOps hands-on practice with KodeKloud — completed a real-world task ✨

✨ Task: Install iptables with persistent rules and block incoming traffic on port 6000 for everyone except the load balancer host.

➡️ Install iptables:
"sudo yum install iptables iptables-services -y"

➡️ Enable and start iptables:
"sudo systemctl enable iptables"
"sudo systemctl start iptables"

➡️ Rules will be deleted after a reboot, so save them to make them persistent:
"sudo /usr/libexec/iptables/iptables.init save"

➡️ Check the existing rules and add the required rules in the right order:
"sudo iptables -L INPUT -n --line-numbers"

➡️ If a rule rejecting traffic from everyone already exists, the allow and block rules for port 6000 must be inserted at higher priority than it. For example, if the reject rule is at position 7, insert the allow and block rules before it, i.e. at positions 5 and 6.

➡️ Add the rule at position 5 to allow traffic only from the load balancer server:
"sudo iptables -I INPUT 5 -p tcp --dport 6000 -s load-balancer-host-name -j ACCEPT"

➡️ Add the rule at position 6 to block everyone else:
"sudo iptables -I INPUT 6 -p tcp --dport 6000 -j DROP"

➡️ Save the added rules:
"sudo iptables-save | sudo tee /etc/sysconfig/iptables"

#Devops #Linux #DevopsEngineer #Learning #Kodekloud
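The ordering constraint above (ACCEPT before DROP) is the part that is easiest to get wrong, so here is the same pair of rules as one sketch. The port is from the task; the load balancer hostname and rule positions 5/6 are illustrative:

```shell
#!/bin/sh
PORT=6000
LB_HOST=load-balancer-host-name    # illustrative placeholder, as in the post

# Print the two rules in the order they must be inserted:
# the ACCEPT from the load balancer has to sit above the catch-all DROP.
port_rules() {
    printf 'iptables -I INPUT 5 -p tcp --dport %s -s %s -j ACCEPT\n' "$PORT" "$LB_HOST"
    printf 'iptables -I INPUT 6 -p tcp --dport %s -j DROP\n' "$PORT"
}

# Real-world usage (requires root):
#   port_rules | while read -r rule; do sudo $rule; done
#   sudo iptables-save | sudo tee /etc/sysconfig/iptables
```

Because `iptables -I` inserts at a numbered position, adding the DROP first and the ACCEPT second would leave the DROP shadowing the ACCEPT; printing them as a pair keeps the intent explicit.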
One of the most powerful aspects of Linux is how efficiently it handles **search operations and file permissions**. In real-world DevOps and server administration, these two concepts are used almost every day.

🔍 **Search Commands in Linux**
Finding files, logs, or configurations quickly is crucial while working on servers. Some of the most commonly used commands are:

`find` → searches files and directories by name, type, size, or modification time
Example: `find /home -name "*.log"`

`grep` → searches for a specific word, pattern, or text inside files
Example: `grep "error" application.log`

`locate` → quickly finds file paths from the system's file-name database
Example: `locate nginx.conf`

These commands make troubleshooting and log analysis much faster in production environments.

🔐 **Permission Commands in Linux**
Linux permissions decide **who can read, write, or execute a file**. Each permission has a numeric value:

`r = 4` → read permission
`w = 2` → write permission
`x = 1` → execute permission

These values are added together to form permission codes. For example:
`7 = 4 + 2 + 1` → `rwx`
`6 = 4 + 2` → `rw-`
`5 = 4 + 1` → `r-x`
`4 = 4` → `r--`

So when we use `chmod 755 file.sh`, it means:
Owner → `7` = `rwx`
Group → `5` = `r-x`
Others → `5` = `r-x`

Understanding permissions is essential for security, script execution, and access control in Linux-based environments. Linux is not just about commands — it’s about control, security, and efficiency.

#Linux #DevOps #CloudComputing #AWS #SystemAdministration #ServerManagement #Automation #SoftwareEngineering #Infrastructure #LinuxCommands #CareerGrowth #Technology #DevopsWithMultiCloud #flm #frontlinesmedia
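The numeric modes above are easy to verify on any Linux box with a throwaway file; `stat -c` assumes GNU coreutils (BSD/macOS uses a different flag, noted below):

```shell
#!/bin/sh
# Demonstrate chmod 755 on a temporary file (safe to run anywhere).
f=$(mktemp)
chmod 755 "$f"

# stat -c '%a' prints the octal mode (GNU coreutils; BSD/macOS: stat -f '%Lp').
mode=$(stat -c '%a' "$f" 2>/dev/null || stat -f '%Lp' "$f")
echo "$mode"    # 755 = rwx (owner), r-x (group), r-x (others)

rm -f "$f"
```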