🚀 Day 44 of #100DaysOfDevOps – Writing a Docker Compose File

“Experience is the name everyone gives to their mistakes.” — Oscar Wilde

Today’s learning hit right where it counts — attention to detail. The task was to create a Docker Compose file for hosting a static website using Apache (httpd). It seemed simple… until I hit a persistent YAML error.

I double-checked syntax, reviewed every key, and even reinstalled Compose — but the error remained. After a solid debugging session, I realized the issue was nothing more than an indentation error. Yes, a few misplaced spaces broke the entire automation!

That moment reinforced a powerful DevOps truth — YAML is unforgiving, and even minor formatting issues can halt your deployment pipeline. Precision is everything.

Here’s the clean, working docker-compose.yml I ended up with:

version: '3'
services:
  web:
    image: httpd:latest
    container_name: httpd
    ports:
      - "8083:80"
    volumes:
      - /opt/security:/usr/local/apache2/htdocs

This setup successfully launched the httpd container with port mapping and volume configuration in a single command:

docker compose up -d

Today’s insight: 👉 Small mistakes teach big lessons. In DevOps, precision and patience are as important as code and automation.

#Day44 #100DaysOfDevOps #Docker #DockerCompose #DevOps #Automation #Containerization #YAML #Apache #WebServer #LearningInPublic #Debugging #ProblemSolving #Linux #InfrastructureAsCode #SRE #DevOpsCulture #CloudComputing #EngineeringExcellence #BuildInPublic #ContinuousLearning
Dhanush Boopathi’s Post
More Relevant Posts
🚀 Day 44 of #100DaysOfDevOps

Today’s task was to host a static website using a containerized platform with Docker Compose. The Nautilus DevOps team provided the requirements, and I set up the environment accordingly.

✅ Requirements:
- Use the httpd:latest image
- Container must be named httpd
- Map host port 6100 → container port 80
- Map host volume /opt/devops → /usr/local/apache2/htdocs

🔹 Step 1: Create the compose directory & file
mkdir -p /opt/docker
cd /opt/docker
vi docker-compose.yml

🔹 Step 2: Add the compose configuration
version: "3.9"
services:
  webserver:
    image: httpd:latest
    container_name: httpd
    ports:
      - "6100:80"
    volumes:
      - /opt/devops:/usr/local/apache2/htdocs

🔹 Step 3: Deploy with Docker Compose
docker compose -f /opt/docker/docker-compose.yml up -d

🔹 Step 4: Verify & test
docker ps | grep httpd
curl -I http://localhost:6100/

✅ Static website content from /opt/devops is now being served via Apache on http://localhost:6100

💡 Key takeaway: Docker Compose makes it straightforward to declare a container setup in YAML, making the environment reproducible and portable — crucial for team collaboration and future deployments.

#100DaysOfDevOps #Docker #DockerCompose #Containers #DevOps #Automation #Linux
☸️ Your First Kubernetes Deployment (10-Minute Tutorial)

Last week I promised hands-on K8s. Let's deploy a real application.

𝗪𝗵𝗮𝘁 𝘄𝗲'𝗿𝗲 𝗯𝘂𝗶𝗹𝗱𝗶𝗻𝗴: nginx web server with 3 replicas

𝗣𝗿𝗲𝗿𝗲𝗾𝘂𝗶𝘀𝗶𝘁𝗲𝘀:
- Minikube + kubectl installed
- 10 minutes of your time

𝗧𝗵𝗲 𝟴-𝗦𝘁𝗲𝗽 𝗣𝗿𝗼𝗰𝗲𝘀𝘀:
1️⃣ Start the cluster with minikube start
2️⃣ Create the deployment YAML (defines 3 nginx replicas)
3️⃣ Deploy with kubectl apply -f nginx-deployment.yaml
4️⃣ Create the service YAML (exposes on NodePort 30080)
5️⃣ Apply the service and access via browser
6️⃣ Test self-healing: delete a pod, watch K8s recreate it automatically
7️⃣ Scale to 5 replicas in seconds: kubectl scale deployment <deployment-name> --replicas=5
8️⃣ Rolling update with zero downtime: kubectl set image

𝗪𝗵𝗮𝘁 𝘆𝗼𝘂 𝗷𝘂𝘀𝘁 𝗹𝗲𝗮𝗿𝗻𝗲𝗱:
✅ Deployments & pod management
✅ Services & networking
✅ Horizontal scaling
✅ Automatic self-healing
✅ Zero-downtime updates

𝗪𝗮𝗻𝘁 𝘁𝗵𝗲 𝗳𝘂𝗹𝗹 𝗰𝗼𝗱𝗲? I've created a GitHub gist with all YAML files and commands. Comment "K8S" and I'll send you the link.

𝗡𝗲𝘅𝘁 𝗧𝗵𝘂𝗿𝘀𝗱𝗮𝘆: ConfigMaps, Secrets, and Persistent Storage

Hit any errors? Drop them below 👇

#Kubernetes #K8s #DevOps #CloudNative #Tutorial
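A minimal sketch of the manifests from steps 2 and 4 could look like this. Only the image, the replica count, and NodePort 30080 come from the post; the resource names (nginx-deployment, nginx-service) and labels are illustrative assumptions:

```yaml
# Hypothetical nginx-deployment.yaml — 3 nginx replicas (step 2)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment   # assumed name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
# Hypothetical NodePort service exposing the pods on 30080 (step 4)
apiVersion: v1
kind: Service
metadata:
  name: nginx-service   # assumed name
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
```

With Minikube, the page is then reachable at the node IP reported by minikube ip on port 30080.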
🚀 DevOps 365 Days Challenge - Day 76 🚀

Troubleshooting Nginx + PHP-FPM in Kubernetes 🔧

Today's challenge involved debugging and fixing a broken Nginx and PHP-FPM setup running in a Kubernetes pod. This is a common real-world scenario that every DevOps engineer encounters!

🎯 The Challenge:
- Pod: nginx-phpfpm (halted functionality)
- Two containers: nginx-container + php-fpm-container
- ConfigMap: nginx-config (containing misconfigurations)
- Task: Identify the issue, fix it, and deploy the application

📚 Documentation Created:
- Complete troubleshooting guide (README.md)
- Step-by-step command reference (commands.md)
- Configuration examples and best practices

💡 Key Takeaway: Always verify your fastcgi_pass configuration and ensure the SCRIPT_FILENAME parameter is correctly set when working with Nginx and PHP-FPM. Small misconfigurations can halt entire applications!

🔗 Full solution and documentation: https://lnkd.in/gc5trHum

#DevOps #Kubernetes #K8s #Nginx #PHPFPM #CloudNative #ContainerOrchestration #DevOps365Days #Day76 #LearningInPublic #TechChallenge #SRE #CloudComputing #Docker #Troubleshooting #InfrastructureAsCode #DevOpsCommunity #TechLearning #ContinuousLearning
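For context, a working nginx location block for PHP-FPM typically looks roughly like the sketch below. The docroot path and the 127.0.0.1:9000 address are illustrative assumptions, not details from the actual fix:

```nginx
# Hypothetical snippet from an nginx ConfigMap — the two settings the
# takeaway calls out are fastcgi_pass and SCRIPT_FILENAME.
location ~ \.php$ {
    root /var/www/html;               # assumed docroot shared by both containers
    # Must point at where PHP-FPM actually listens (TCP port or unix socket);
    # in a two-container pod, both share localhost.
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    include fastcgi_params;
    # Must resolve to the script path as seen by the PHP-FPM container.
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```

A wrong fastcgi_pass address produces 502 Bad Gateway; a wrong SCRIPT_FILENAME typically yields blank pages or "File not found" from PHP-FPM.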
Kubernetes Fundamentals #6

Continuing my Kubernetes series — this one is a hands-on exploration of the concepts we have discussed so far. You can watch the video and also refer to this post for the list of commands.

How to run Kubernetes locally?
If you’re using Docker Desktop, you already have everything you need. Just enable Kubernetes in the settings — it sets up a single-node cluster for you automatically.

We will be using kubectl. It's the command-line tool that lets you talk to your Kubernetes cluster. kubectl sends a request to the Kubernetes API Server, which returns what’s running in the cluster.

You can use it to:
1. Create or delete objects (Pods, Deployments, Services, etc.)
2. Inspect the cluster state
3. Debug containers or Pods
4. Apply YAML configuration files

To verify your setup, open your terminal and run:
kubectl version --client

Then check if it can talk to the cluster:
kubectl cluster-info

Check your cluster and nodes:
kubectl get nodes

What you see shows your single-node local cluster — acting as both Control Plane and Worker Node.

Next:
kubectl get pods -A

If you see docker-desktop and STATUS: Ready — you’re in!

Now try this:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get services

Then open your browser at http://localhost:<nodePort>, using the NodePort shown by kubectl get services (Kubernetes assigns one from the 30000-32767 range, e.g. http://localhost:30007). You’ll see the NGINX welcome page.

Clean up:
kubectl delete service nginx
kubectl delete deployment nginx

That’s it. Your first Kubernetes app, deployed locally.

#Kubernetes #DevOps #CloudNative #Containers #LearningInPublic #SoftwareEngineering
5 Dockerfile mistakes bloating your image size 90%.

It's Tuesday sprint planning. The build pipeline is red again. The image push to AWS ECR times out, failing the deployment for the 3rd time. You check the Jenkins logs. But did you check the 800MB image size for a simple Golang service?

After shrinking 50+ microservice images, here's what works:

1. Ditch single-stage builds
Use multi-stage builds to separate the build environment from the final runtime image. We took our Spring Boot payment API from a 1.2GB monster with the full JDK down to a lean 150MB image using a JRE-slim base.

2. Don't skip your .dockerignore
Forgetting this file sends your entire directory, including node_modules and .git, to the Docker daemon. Properly configuring it cut our build context upload time by 75% for the checkout service, speeding up developer feedback loops.

3. Get your layer order right
Order commands from least to most frequently changed to maximize Docker's layer cache. Placing COPY pom.xml . before COPY src/ . reduced our CI build times on Jenkins from 8 minutes to under 2 minutes.

4. Don't create too many layers
Chain related RUN commands using && to create a single layer. Instead of three separate RUN apt-get commands, we consolidated them, saving about 30MB per layer in our Prometheus exporter image.

5. Drop bloated base images
Stop using ubuntu or python:latest for everything. Switching our core Golang services from golang:latest to gcr.io/distroless/static-debian11 reduced the final image size to just 15MB and massively shrank the vulnerability surface.

After 500 deployments, small images are what separate stable Kubernetes clusters from constantly crashing ones.

What's the single biggest Dockerfile optimization that cut down your image size? What did I miss?

Save this for your next container optimization review.

Link to gcr.io/distroless/static-debian11: https://lnkd.in/gEXqAfDT

#DevOps #Docker #Kubernetes #PlatformEngineering
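As an illustration of points 1, 3, and 5 together, a multi-stage Dockerfile for a small Go service might look like this sketch. The Go version, module layout, and binary name are assumptions about a typical project:

```dockerfile
# Stage 1: build environment — full toolchain, never shipped (point 1)
FROM golang:1.22 AS builder
WORKDIR /src

# Copy dependency manifests first so this layer stays cached (point 3)
COPY go.mod go.sum ./
RUN go mod download

# Source changes only invalidate the cache from here down
COPY . .
# Static binary so it can run on a libc-free base image
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: minimal distroless runtime (point 5)
FROM gcr.io/distroless/static-debian11
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
```

The final image contains only the binary and the distroless base, so the JDK-vs-JRE logic from the Spring Boot example applies the same way: nothing from the build stage leaks into what you push.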
🚀 Day 14 of My DevOps Practising Journey

“𝐃𝐨𝐜𝐤𝐞𝐫 𝐕𝐨𝐥𝐮𝐦𝐞𝐬 — 𝐭𝐡𝐞 𝐬𝐢𝐦𝐩𝐥𝐞𝐬𝐭 𝐰𝐚𝐲 𝐭𝐨 𝐤𝐞𝐞𝐩 𝐲𝐨𝐮𝐫 𝐝𝐚𝐭𝐚 𝐬𝐚𝐟𝐞.”

Today I learned the basics of Docker Volumes and used them to run a Login UI inside an Nginx container. Here are the exact commands I practiced 👇

📦 Basic Volume Commands (Super Simple)

🔹 Create a Volume
docker volume create login_volume

🔹 List All Volumes
docker volume ls

🔹 Inspect a Volume
docker volume inspect login_volume

🔹 Remove a Volume
docker volume rm volume_name

🔹 Where Docker Stores Volume Data
/var/lib/docker/volumes/

🧪 Running the Login App Using a Volume

1️⃣ Start Nginx with the Volume Attached
This mounts the volume to Nginx’s web root:
docker container run -dt --name login-app -p 8081:80 \
  -v login_volume:/usr/share/nginx/html nginx

2️⃣ Go Inside the Container
docker container exec -it login-app bash

3️⃣ Clear Old Files (change into the web root first — never run rm -r * elsewhere)
cd /usr/share/nginx/html
rm -r *

4️⃣ Clone the Login App into the Volume
git clone https://lnkd.in/gCzsyW8U /usr/share/nginx/html

5️⃣ Exit & Open the Browser
Access your login UI at:
http://<public-ip>:8081

💬 Takeaway
“Containers are temporary — volumes make your data permanent.”

#DevOps #Docker #DockerVolumes #DataPersistence #Containers #Linux #Nginx #CloudComputing #LearningInPublic #TechJourney #SoftwareEngineering
#30DaysOfContainers — Day 3/30

𝗪𝗵𝗮𝘁 𝗥𝗲𝗮𝗹𝗹𝘆 𝗛𝗮𝗽𝗽𝗲𝗻𝘀 𝗪𝗵𝗲𝗻 𝗬𝗼𝘂 𝗧𝘆𝗽𝗲 𝗱𝗼𝗰𝗸𝗲𝗿 𝗿𝘂𝗻?

We’ve all done it — you install Docker and run:

docker run hello-world

And boom — it prints “Hello from Docker!” …but have you ever wondered what actually happened behind the scenes?

When you type docker run, a lot happens silently under the hood 👇

• Docker checks if the image exists locally. If not found → it pulls it from Docker Hub (just like how you clone code from GitHub).
• Docker creates a new container from that image — a lightweight isolated environment. Your container gets its own:
1. File system
2. Network stack
3. Process space
4. Runtime
• The process defined in the image starts executing. For example, hello-world just prints a message and exits.

Docker containers don’t have a full operating system. They share your host system’s kernel — that’s why they start in milliseconds, not minutes like virtual machines.

#Docker #DevOps #Containers #SoftwareEngineering
A quick Docker tip that can speed up your builds a lot.

Many people write their Dockerfile like this:

❌ Slow version
COPY . .
RUN npm install

If you change even one small file, Docker throws away the cache and installs everything again. That’s why your builds feel slow.

✅ Faster version (use layer caching properly)
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

Here, Docker installs dependencies only when package.json changes. Normal code changes won’t trigger a full npm install.

Why this works:
Docker builds images step by step. Each step becomes a layer. If the input of a step doesn’t change, Docker reuses that layer. This is what makes builds fast — layer caching.

Just reorder your Dockerfile and you’ll see the speed boost immediately.

#docker #devops
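Put together, a complete Dockerfile using this ordering might look like the following sketch (the base image tag and the build/start scripts are assumptions about a typical Node.js app):

```dockerfile
# Hypothetical Node.js Dockerfile ordered for layer caching.
FROM node:20-alpine
WORKDIR /app

# 1. Dependency manifests first: this layer (and npm ci below) is
#    reused until package.json or package-lock.json changes.
COPY package*.json ./
RUN npm ci

# 2. Source code last: day-to-day edits only invalidate from here down.
COPY . .
RUN npm run build

CMD ["npm", "start"]
```

The same pattern carries over to other ecosystems: copy pom.xml / go.mod / requirements.txt first, fetch dependencies, then copy the rest of the source.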
Stop writing Kubernetes YAML from scratch.

One of the most tedious parts of working with Kubernetes is creating manifest files. We often find ourselves hunting through old projects or the official documentation just to get a basic template for a Deployment or a Service.

There's a much faster way. You can generate the YAML for most core Kubernetes resources directly from the command line using the `--dry-run` flag. This flag tells kubectl to output the resource definition without actually sending it to the API server.

Here’s the magic command:

`kubectl create deployment my-app --image=nginx --dry-run=client -o yaml`

Let’s break it down:
- `kubectl create deployment ...`: The standard command to create a deployment.
- `--dry-run=client`: This is the key. It instructs kubectl to process the command but not actually create the resources in the cluster.
- `-o yaml`: This specifies that the output should be in YAML format.

The result? A perfectly valid Deployment manifest printed directly to your terminal. You can immediately redirect it to a file and start customizing:

`kubectl create deployment my-app --image=nginx --dry-run=client -o yaml > my-app-deployment.yaml`

This trick works for Services, ConfigMaps, Secrets, and more. It saves a huge amount of time, eliminates copy-paste errors, and ensures you're starting with the correct structure every time.

What's your favorite kubectl time-saver?

#Kubernetes #DevOps #K8s #kubectl #CloudNative #OpenSource #CNCF
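For reference, the manifest that command generates looks roughly like this (trimmed of the empty `creationTimestamp`/`status` fields kubectl also emits; exact output can vary slightly by kubectl version):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-app
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
```

From here you typically bump `replicas`, pin the image tag, and fill in `resources` before applying.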
🐳 Master Docker Like a Pro — My Go-To Commands 💻✨

Whether you’re just starting out or already building in containers daily, having the right Docker commands at your fingertips can save time, boost productivity, and keep your environment clean. Here’s a quick cheat sheet you can bookmark or share with your team 👇

🚀 1. Basic Commands
docker --version   # Check Docker version
docker info        # Show system-wide info
docker login       # Log in to a registry

📦 2. Working with Images
docker pull nginx
docker images
docker rmi nginx
docker build -t myapp:latest .
docker tag myapp:latest myrepo/myapp:v1
docker push myrepo/myapp:v1

🧰 3. Working with Containers
docker run -d -p 8080:80 nginx
docker ps
docker ps -a
docker stop <container_id>
docker start <container_id>
docker restart <container_id>
docker rm <container_id>

🔍 4. Inspecting & Debugging
docker logs <container_id>
docker exec -it <container_id> bash
docker inspect <container_id>
docker stats

🧼 5. Clean Up
docker system prune
docker image prune
docker container prune
docker volume prune

🐙 6. Docker Compose (Bonus)
docker compose up -d
docker compose down
docker compose ps
docker compose logs -f

💡 Pro Tip: Use docker system df to quickly check how much disk space your Docker resources are using.

✅ Why keep these handy?
• Speed up your dev workflow 🏃
• Debug issues faster 🕵️
• Keep your environment clean 🧼
• Simplify CI/CD pipelines ⚙️

#Docker #DevOps #Containers #Cloud #SoftwareEngineering #CheatSheet #Productivity #DockerTips #TechCommunity