🚀 Day 44 of #100DaysOfDevOps

Today’s task was to host a static website using a containerized platform with Docker Compose. The Nautilus DevOps team provided the requirements, and I set up the environment accordingly.

✅ Requirements:
- Use the httpd:latest image
- Container must be named httpd
- Map host port 6100 → container port 80
- Map host volume /opt/devops → /usr/local/apache2/htdocs

🔹 Step 1: Create the compose directory & file

    mkdir -p /opt/docker
    cd /opt/docker
    vi docker-compose.yml

🔹 Step 2: Add the compose configuration

    version: "3.9"
    services:
      webserver:
        image: httpd:latest
        container_name: httpd
        ports:
          - "6100:80"
        volumes:
          - /opt/devops:/usr/local/apache2/htdocs

🔹 Step 3: Deploy with Docker Compose

    docker compose -f /opt/docker/docker-compose.yml up -d

🔹 Step 4: Verify & test

    docker ps | grep httpd
    curl -I http://localhost:6100/

✅ Static website content from /opt/devops is now being served via Apache on http://localhost:6100

💡 Key takeaway: Docker Compose makes it straightforward to declare container setup in YAML, making the environment reproducible and portable — crucial for team collaboration and future deployments.

#100DaysOfDevOps #Docker #DockerCompose #Containers #DevOps #Automation #Linux
🚀 Day 44 of #100DaysOfDevOps – Writing a Docker Compose File

“Experience is the name everyone gives to their mistakes.” — Oscar Wilde

Today’s learning hit right where it counts — attention to detail. The task was to create a Docker Compose file for hosting a static website using Apache (httpd). It seemed simple… until I hit a persistent YAML error. I double-checked syntax, reviewed every key, and even reinstalled Compose — but the error remained.

After a solid debugging session, I realized the issue was nothing more than an indentation error. Yes, a few misplaced spaces broke the entire automation! That moment reinforced a powerful DevOps truth — YAML is unforgiving, and even minor formatting issues can halt your deployment pipeline. Precision is everything.

Here’s the clean, working docker-compose.yml I ended up with:

    version: '3'
    services:
      web:
        image: httpd:latest
        container_name: httpd
        ports:
          - "8083:80"
        volumes:
          - /opt/security:/usr/local/apache2/htdocs

This setup successfully launched the httpd container with port mapping and volume configuration in a single command:

    docker compose up -d

Today’s insight: 👉 Small mistakes teach big lessons. In DevOps, precision and patience are as important as code and automation.

#Day44 #100DaysOfDevOps #Docker #DockerCompose #DevOps #Automation #Containerization #YAML #Apache #WebServer #LearningInPublic #Debugging #ProblemSolving #Linux #InfrastructureAsCode #SRE #DevOpsCulture #CloudComputing #EngineeringExcellence #BuildInPublic #ContinuousLearning
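For anyone hitting the same wall, here is a minimal illustration (service name and ports taken from the post above) of the kind of indentation slip that breaks a Compose file — `ports` aligned with the service name instead of nested under it, which Compose rejects as an invalid service definition:

```yaml
# Broken: "ports" sits at the same level as "web", so Compose treats it
# as a second (malformed) service instead of a property of "web".
services:
  web:
    image: httpd:latest
  ports:
    - "8083:80"
---
# Fixed: "ports" indented under "web", alongside "image".
services:
  web:
    image: httpd:latest
    ports:
      - "8083:80"
```

Running `docker compose config` is a quick way to surface these errors early — it parses and validates the file without starting any containers.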
☸️ Your First Kubernetes Deployment (10-Minute Tutorial)

Last week I promised hands-on K8s. Let's deploy a real application.

𝗪𝗵𝗮𝘁 𝘄𝗲'𝗿𝗲 𝗯𝘂𝗶𝗹𝗱𝗶𝗻𝗴: nginx web server with 3 replicas

𝗣𝗿𝗲𝗿𝗲𝗾𝘂𝗶𝘀𝗶𝘁𝗲𝘀:
- Minikube + kubectl installed
- 10 minutes of your time

𝗧𝗵𝗲 𝟴-𝗦𝘁𝗲𝗽 𝗣𝗿𝗼𝗰𝗲𝘀𝘀:
1️⃣ Start the cluster with minikube start
2️⃣ Create the deployment YAML (defines 3 nginx replicas)
3️⃣ Deploy with kubectl apply -f nginx-deployment.yaml
4️⃣ Create the service YAML (exposes on NodePort 30080)
5️⃣ Apply the service and access it via browser
6️⃣ Test self-healing: delete a pod, watch K8s recreate it automatically
7️⃣ Scale to 5 replicas in seconds: kubectl scale deployment nginx-deployment --replicas=5
8️⃣ Rolling update with zero downtime: kubectl set image

𝗪𝗵𝗮𝘁 𝘆𝗼𝘂 𝗷𝘂𝘀𝘁 𝗹𝗲𝗮𝗿𝗻𝗲𝗱:
✅ Deployments & pod management
✅ Services & networking
✅ Horizontal scaling
✅ Automatic self-healing
✅ Zero-downtime updates

𝗪𝗮𝗻𝘁 𝘁𝗵𝗲 𝗳𝘂𝗹𝗹 𝗰𝗼𝗱𝗲? I've created a GitHub gist with all YAML files and commands. Comment "K8S" and I'll send you the link.

𝗡𝗲𝘅𝘁 𝗧𝗵𝘂𝗿𝘀𝗱𝗮𝘆: ConfigMaps, Secrets, and Persistent Storage

Hit any errors? Drop them below 👇

#Kubernetes #K8s #DevOps #CloudNative #Tutorial
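For readers who want to follow along before the gist arrives, here is a sketch of the two manifests the steps above assume. The replica count and nodePort come from the post; the names (nginx-deployment, nginx-service) and labels are a common minimal shape, not the author's exact files:

```yaml
# nginx-deployment.yaml — 3 nginx replicas managed by a Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
# nginx-service.yaml — exposes the pods on NodePort 30080
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
```

With Minikube, `minikube service nginx-service --url` prints the reachable address if the node IP + 30080 isn't directly routable on your machine.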
From Local Development to Live Deployment — My Nginx Reverse Proxy Journey

Over the past few days, I successfully deployed a complete web application using Nginx as a reverse proxy, Docker, and an automated deployment workflow. This project taught me how every piece fits together — from writing code locally to making it accessible online through a live server.

Step 1: Application Setup
I containerized the application using Docker, ensuring it could run consistently across environments. This formed the foundation for a stable and portable deployment.

Step 2: Server & Reverse Proxy Configuration
Next, I configured Nginx as a reverse proxy to forward incoming traffic from port 80 to the running Docker container. This setup made the application accessible via the server’s public IP while maintaining security and scalability.

Step 3: Automated Deployment Process
To streamline future updates, I developed an automated deployment script that handles everything — from cloning the repository to rebuilding Docker containers and restarting Nginx. This approach reduced manual effort and minimized configuration errors, making deployments smoother and repeatable.

Step 4: Visualization
I created an architecture diagram using Draw.io, illustrating the workflow: GitHub → Bash Deployment → Ubuntu Server → Nginx Reverse Proxy → Dockerized Application. This visual representation helped clarify how requests flow through the system — from version control to live application.

Step 5: Testing & Optimization
Finally, I verified that both Nginx and the Docker containers were running properly, ensuring a stable and responsive deployment.

Key Takeaways:
- Nginx plays a vital role in load distribution and request handling.
- Docker simplifies deployment consistency.
- Automation scripts are invaluable for error-free CI/CD workflows.
- Visualization enhances understanding and troubleshooting.
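The reverse-proxy piece of Step 2 can be sketched as a single Nginx server block. The upstream port 3000 here is an assumption (substitute whatever host port the container publishes), and the file path follows the common Debian/Ubuntu layout:

```nginx
# /etc/nginx/sites-available/app — forward port 80 to the container
server {
    listen 80;
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:3000;  # assumed published container port
        # Preserve the original host and client address for the app's logs
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Enable it with a symlink into sites-enabled, check with `nginx -t`, then reload Nginx.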
Tools & Technologies: Docker | Nginx | Git Bash | Ubuntu | GitHub Documentation: https://lnkd.in/dfrK_32T Repository: https://lnkd.in/djAninN6 #DevOps #Docker #Nginx #Automation #LearningByDoing #Deployment #GitHub #Bash #CloudEngineering #ContinuousIntegration
🚀 𝗔𝗻𝘀𝗶𝗯𝗹𝗲 𝗶𝗻 𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗲: 𝟭𝟴 𝗠𝗼𝗱𝘂𝗹𝗲𝘀 𝗘𝘃𝗲𝗿𝘆 𝗗𝗲𝘃𝗢𝗽𝘀/𝗜𝗧 𝗧𝗲𝗮𝗺 𝗦𝗵𝗼𝘂𝗹𝗱 𝗞𝗻𝗼𝘄

I’ve been standardizing app rollouts using these Ansible workhorses—clean, idempotent, and Git-driven:

𝗕𝗮𝘀𝗶𝗰𝘀 & 𝗖𝗵𝗲𝗰𝗸𝘀
1. ping – quick connectivity sanity.
2. setup – gather system facts for smart conditionals.
3. debug – print variables clearly during runs.
4. stat – check file existence/attrs before acting.

𝗙𝗶𝗹𝗲𝘀 & 𝗖𝗼𝗻𝗳𝗶𝗴
5. file – create dirs/symlinks, set perms, touch files.
6. copy – ship configs/assets (with validate= to prevent bad pushes).
7. replace – safe regex edits vs brittle sed lines.
8. fetch – pull logs/artifacts back to the controller.

𝗨𝘀𝗲𝗿𝘀 & 𝗦𝗲𝗿𝘃𝗶𝗰𝗲𝘀
9. user – create/lock/remove users; manage groups.
10. service – start/enable/restart services predictably.

𝗣𝗮𝗰𝗸𝗮𝗴𝗲𝘀 & 𝗖𝗼𝗱𝗲
11. apt / yum – package installs across Debian/RedHat families.
12. git – pin repos to tags/commits for repeatable deploys.

𝗦𝗵𝗲𝗹𝗹 𝘃𝘀 𝗖𝗼𝗺𝗺𝗮𝗻𝗱
13. command – simple binaries, safer, no shell.
14. shell – only when you need pipes, redirects, here-docs (use executable=/bin/bash + set -o pipefail).

𝗛𝗧𝗧𝗣 & 𝗜𝗻𝗰𝗹𝘂𝗱𝗲𝘀
15. uri – call APIs / health checks.
16. include_tasks / import_tasks – compose roles cleanly; DRY your playbooks.

𝗖𝗼𝗻𝘁𝗮𝗶𝗻𝗲𝗿𝘀 (𝗰𝗼𝗺𝗺𝘂𝗻𝗶𝘁𝘆.𝗱𝗼𝗰𝗸𝗲𝗿)
17. docker_login – auth to registries.
18. docker_container – run/upgrade services with restart policies and mapped ports.

𝗪𝗵𝘆 𝘁𝗵𝗶𝘀 𝗺𝗮𝘁𝘁𝗲𝗿𝘀
Fewer 3 AM fixes. Safer rollouts. Auditable changes. And infra that’s actually boring (in a good way).

#Ansible #DevOps #Automation #SRE #PlatformEngineering #Docker #InfrastructureAsCode #Linux #Kubernetes #NULogic
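A tiny playbook wiring a few of these together — stat, copy with validate=, and service. The host group, paths, and the nginx example are placeholders for illustration, not a specific production playbook:

```yaml
---
- name: Baseline web host
  hosts: webservers
  become: true
  tasks:
    - name: Check whether the config already exists
      ansible.builtin.stat:
        path: /etc/nginx/nginx.conf
      register: nginx_conf

    - name: Ship config, validating before it lands
      ansible.builtin.copy:
        src: files/nginx.conf
        dest: /etc/nginx/nginx.conf
        validate: nginx -t -c %s
      notify: restart nginx

    - name: Ensure the service is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true

  handlers:
    - name: restart nginx
      ansible.builtin.service:
        name: nginx
        state: restarted
```

The validate= option is the safety net mentioned above: the file is only copied into place if the validation command exits cleanly against the staged copy (%s).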
🚀 Demo: Jenkins & Grafana Behind Nginx Reverse Proxy (Docker Setup)

Just wrapped up another challenge from the Daily DevOps/SRE Challenge Series, diving deep into Nginx and key reverse proxy concepts.

🎬 Demo video:

I built a local environment where Nginx acts as a reverse proxy for two Dockerized applications — Jenkins and Grafana — both running securely over HTTPS on custom domains.
🔹 https://jenkins.local → Jenkins (CI/CD)
🔹 https://grafana.local → Grafana (Monitoring)

Each service runs inside its own Docker container, while Nginx handles SSL termination and traffic routing for a clean, production-like setup — all running locally.

🔧 What I implemented:
- Configured Nginx as a reverse proxy
- Added HTTPS via self-signed certificates
- Routed traffic to Docker containers on separate ports
- Built a unified, secure local DevOps environment

💡 Concepts covered:
- Reverse proxy architecture
- SSL/TLS setup and certificate handling
- Docker networking and service isolation
- Practical local automation workflow

This challenge helped me strengthen my DevOps fundamentals while replicating production-grade traffic flow locally.

📚 Challenge Link: https://lnkd.in/gGRsDQ3x

#DevOps #SRE #Nginx #Docker #Jenkins #Grafana #ReverseProxy #Networking #CloudEngineering
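The self-signed certificate step can be done with a single openssl command per domain. The output paths below are placeholders (I'm writing to /tmp for the example); jenkins.local comes from the post:

```shell
# Generate a self-signed certificate + key for jenkins.local, valid 365 days.
# -nodes leaves the key unencrypted so Nginx can read it unattended.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/jenkins.local.key \
  -out /tmp/jenkins.local.crt \
  -days 365 \
  -subj "/CN=jenkins.local"
```

Point the Nginx `ssl_certificate` / `ssl_certificate_key` directives at the two files, and trust the certificate locally (or accept the browser warning) since no CA signed it.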
🐳 Docker Pulls Slow? Here’s a small fix that actually works.

Most of us have accepted slow Docker pulls as “normal.” You hit docker pull, take a sip of chai, and hope it finishes before the next meeting.

But there is a practical way to speed things up: registry mirrors. They’re not magic. They just reduce latency. But the improvement is noticeable — especially on big images.

🔧 Terminal Method (Linux servers / WSL / dev machines)

Edit /etc/docker/daemon.json and add:

    {
      "registry-mirrors": [
        "https://mirror.gcr.io",
        "https://lnkd.in/dskH7sJg"
      ]
    }

Restart Docker:

    sudo systemctl daemon-reload
    sudo systemctl restart docker

🖱️ GUI Method (Docker Desktop)

Sometimes you don’t want to open a terminal just to change a config. Fair. Steps:
1. Open Docker Desktop
2. Go to Settings → Docker Engine
3. In the JSON config, add (just place it inside the top-level { }):

    "registry-mirrors": [
      "https://mirror.gcr.io",
      "https://lnkd.in/dskH7sJg"
    ]

4. Click Apply & Restart

Docker restarts… slightly offended, but much faster after.

⚡ What changes?
- Pulls feel noticeably quicker
- CI pipelines wait less
- Your workflow feels a little smoother
- And yes, your team may finally stop saying “Docker is slow today” (…at least for a while)

Not the most glamorous tip, but definitely one of the most useful ones. If you found this helpful, a 👍 will do. If you have other micro-optimizations, drop them in the comments — we all learn from them.

#Docker #DevOps #Engineering #CloudNative #DeveloperTools #ProductivityTips
Ever deployed without a single second of downtime?

That was my mission a few weeks ago: to make updates feel invisible to users. When I started working on the project, I wanted more than just to “get it working.” I wanted to truly understand how zero-downtime deployment happens in real systems, the kind that can take a hit and still keep serving users flawlessly.

Over a few late nights, I built a Blue/Green deployment architecture powered by Nginx and Docker Compose, with:
- Two identical Node.js services: Blue (active) & Green (backup)
- Automatic failover that ensures no failed client requests
- Environment-based configuration using .env
- Chaos testing to simulate errors and confirm instant switchovers

Seeing Nginx detect a failure and instantly route traffic to the backup server was one of those “this is why I love DevOps” moments.

Tech Stack: Nginx · Docker Compose · Node.js · Bash · envsubst

This project taught me the power of automation, failover resilience, and clean orchestration — the same principles that keep production systems alive at scale.

🔗 Explore the repo here: 👉 https://lnkd.in/d76NJvYc

#DevOps #Docker #Nginx #BlueGreenDeployment #CloudComputing #ZeroDowntime #HNG #Infrastructure #Automation
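The failover behaviour described above maps to a small Nginx upstream block. The service names and port here follow the post's Blue/Green naming but are assumptions about the actual compose setup:

```nginx
upstream app {
    # Blue is the active service; mark it failed after one error for 5s
    server blue:3000 max_fails=1 fail_timeout=5s;
    # Green only receives traffic while Blue is considered down
    server green:3000 backup;
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
        # Retry the request on the other upstream instead of
        # surfacing the error to the client
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```

The `backup` flag plus `proxy_next_upstream` is what makes a Blue failure invisible: the in-flight request is replayed against Green rather than returning a 502.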
Decided to do what I do best — build and automate.

I recently created a simple CI/CD pipeline using Jenkins and Docker that automatically builds and pushes Docker images to DockerHub whenever I push new code to GitHub. The pipeline is fully automated with webhook triggers, so Jenkins instantly detects updates from GitHub, builds a fresh Docker image, tags it with the build number, and pushes it to my DockerHub repository (no manual steps needed).

This little project helped me sharpen what I enjoy most:
- Automating workflows with Jenkins
- Using Docker for consistent builds
- Managing secure credentials and image tagging
- Webhook triggers for CI/CD automation and pipeline scripting

You can check out the project and pipeline diagram here 🔗 https://lnkd.in/eiSkthvi

#DevOps #Jenkins #Docker #Automation #CI_CD #ContinuousIntegration #ContinuousDeployment
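The pipeline described above, sketched as a declarative Jenkinsfile. The image name and the credentials ID are placeholders; the build-number tagging and credential handling follow the post:

```groovy
pipeline {
    agent any
    environment {
        IMAGE = 'myuser/myapp'   // placeholder DockerHub repository
    }
    stages {
        stage('Build') {
            steps {
                // Tag the image with the Jenkins build number for traceability
                sh "docker build -t ${IMAGE}:${env.BUILD_NUMBER} ."
            }
        }
        stage('Push') {
            steps {
                // Credentials live in the Jenkins store, never in the repo
                withCredentials([usernamePassword(credentialsId: 'dockerhub-creds',
                        usernameVariable: 'DOCKER_USER',
                        passwordVariable: 'DOCKER_PASS')]) {
                    sh 'echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin'
                    sh "docker push ${IMAGE}:${env.BUILD_NUMBER}"
                }
            }
        }
    }
}
```

With a GitHub webhook pointed at Jenkins and the job configured to trigger on push, every commit runs this end to end.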
🚀 Automating Deployment with Jenkins & GitHub Actions!

In my latest project, I implemented a CI/CD pipeline using both Jenkins and GitHub Actions — combining the best of both worlds to achieve seamless automation from code commit to deployment.

🔧 Here’s what I did:
- Used GitHub Actions for continuous integration — automatically building, testing, and validating every pull request.
- Integrated Jenkins for continuous deployment — managing the delivery pipeline, containerizing the application using Docker, and deploying to AWS with zero downtime.
- Configured webhooks between GitHub and Jenkins to trigger builds on every new commit.
- Ensured security and scalability by managing credentials with AWS Secrets Manager and using Nginx caching for optimized frontend performance.

💡 Key takeaway: CI/CD isn’t just about automation — it’s about speed, reliability, and developer confidence. With this setup, every code change now flows smoothly from development to production with minimal manual intervention.

🌐 Tech stack: React + Vite + Tailwind CSS | Firebase | Docker | Jenkins | GitHub Actions | AWS (S3 & EC2)

#DevOps #CICD #Jenkins #GitHubActions #Automation #Docker #AWS #SoftwareEngineering #WebDevelopment #CloudComputing
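The GitHub Actions side of a setup like this can be a compact workflow. This is a sketch of the CI half only (deployment stays with Jenkins); the Node version and npm scripts are assumptions based on the React + Vite stack mentioned above:

```yaml
# .github/workflows/ci.yml — build and validate every PR and push to main
name: CI
on:
  pull_request:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run build          # Vite production build
      - run: npm test --if-present  # skip gracefully if no test script exists
```

Keeping CI in Actions and CD in Jenkins means pull requests get fast feedback without touching the deployment infrastructure at all.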
🚀 Just wrapped up a personal project — Vanilla Kubernetes (K8s) cluster automation!

This time, I decided to go fully from scratch — no managed Kubernetes (AKS/EKS) and no lightweight distros like k3s or microk8s. Instead, I went with a true vanilla Kubernetes setup, provisioned using Multipass VMs and fully automated with Ansible and bash scripts.

💡 A few firsts for me:
- My first vanilla Kubernetes cluster
- My first experience using Ansible for automation

⚙️ The project supports:
- High availability setup with multiple masters and workers
- Flexible node count — adapt to your available resources
- Full automation: from VM provisioning to cluster initialization and CNI setup
- Option to choose from 3 popular CNIs — Calico, Cilium, or Flannel
- Single source of truth for all environment variables, parameters, and arguments — making customization clean and consistent across scripts and playbooks

🔗 Check it out here: 👉 [GitHub - vanilla-k8s](https://lnkd.in/g7aChZpS)

#vanillak8s #ha #multipass #ansible #iac #devops
If the docker-compose.yml is in your current directory, you can simply run docker compose build and docker compose up -d — Compose finds the file automatically, so the -f flag isn't needed.