🚀 Built & Deployed Multi-Container Applications on Azure using Terraform & Docker

I recently worked on a hands-on DevOps project where I implemented an end-to-end workflow, from infrastructure provisioning to application deployment.

🔧 What I did:
- Provisioned an Ubuntu Virtual Machine on Azure using Terraform (for_each + reusable modules)
- Connected to the VM using VS Code Remote SSH
- Installed and configured Docker
- Cloned the application source code (StreamFlix & Starbucks UI clones)
- Pulled the Nginx image from Docker Hub
- Deployed multiple containers using volume mapping

🌐 Key Implementation:
- Hosted the StreamFlix clone in one container
- Hosted the Starbucks clone in another container
- Used Docker volume mapping to serve custom HTML content
- Exposed the applications on different ports

📁 Architecture Overview:
Azure VM → Docker Engine → Multiple Containers → Nginx → Custom Web Apps

💡 What I learned:
- Real-world use of Terraform modules and for_each
- Containerization and isolation using Docker
- Volume mapping (host → container)
- Running and managing multiple containers on a single VM
- Basic container networking concepts

📸 Attaching screenshots of the setup and running applications 👇

#DevOps #Terraform #Docker #Azure #CloudComputing #Containerization #Linux #VSCode #LearningInPublic #IaC #DockerContainers
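The for_each + reusable-module pattern mentioned above could be sketched roughly like this (module path, VM names, and sizes are illustrative assumptions, not taken from the project):

```
locals {
  # one entry per VM to provision; keys become the instance names
  vms = {
    "vm-streamflix" = { size = "Standard_B1s" }
    "vm-starbucks"  = { size = "Standard_B1s" }
  }
}

module "vm" {
  source   = "./modules/azure-vm"   # hypothetical reusable module
  for_each = local.vms

  name     = each.key
  vm_size  = each.value.size
  location = "eastus"
}
```

Adding another environment then means adding one entry to `local.vms` rather than duplicating resource blocks.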
Deploying Multi-Container Apps on Azure with Terraform & Docker
#100daysofdevopschallenge #Day32

What is Docker?
Docker is a containerization platform that helps developers build, package, and run applications along with all required dependencies (libraries, settings, tools) inside a lightweight unit called a container.

Simple example: if your application works on your laptop, Docker ensures it works the same on:
- a testing server
- a production server
- the cloud
- any developer machine

Why Do We Use Docker?
1. Consistency: Docker provides the same environment everywhere. Without Docker, an app may fail due to missing libraries, OS differences, or version mismatches. With Docker: same app, same dependencies, same behavior.
2. Portability: Docker containers can run on Windows, Linux, Mac, AWS, Azure, Google Cloud. Build once → run anywhere.
3. Lightweight: unlike virtual machines, Docker shares the host OS kernel. Result: fast startup, less memory usage, better performance.
4. Isolation: each container runs independently. Benefits: no conflicts between applications, better security, easier troubleshooting.
5. Scalability: Docker makes scaling applications easy. If traffic increases, run multiple containers and load balance them.

Docker Architecture: Main Components
1. Docker Client: where users type commands (docker build, docker pull, docker run, docker ps).
2. Docker Daemon (Docker Host): the main engine that builds images, runs containers, manages networks, and handles storage.
3. Docker Images: read-only templates containing application code, libraries, dependencies, and configuration. Examples: the Ubuntu, Nginx, and Redis images.
4. Docker Containers: running instances of Docker images.
5. Docker Registry: a storage location for Docker images. The most common registry is Docker Hub; you push images to it and pull images from it.

Docker Workflow:
Step 1: Create a Dockerfile
Step 2: Build an image
Step 3: Run a container
Step 4: Push to Docker Hub

Dockerfile → Image → Container → Registry

Docker packages your application with everything it needs and runs it anywhere reliably.

Key Benefits of Docker:
- Faster deployment
- Easy CI/CD integration
- Environment consistency
- Better resource utilization
- Simplified application management
- Supports microservices architecture

#devops #Docker #dockerarchitecture Frontlines EduTech (FLM)
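The four-step workflow can be made concrete with a minimal Dockerfile (the image content and repository name are illustrative, not from the post):

```
# Minimal Dockerfile: serve a static page with nginx
FROM nginx:alpine
# Replace nginx's default page with our own index.html
COPY index.html /usr/share/nginx/html/
```

From the directory containing this file, the remaining steps are `docker build -t <user>/hello-nginx:1.0 .` (build the image), `docker run -d -p 8080:80 <user>/hello-nginx:1.0` (run a container from it), and, after `docker login`, `docker push <user>/hello-nginx:1.0` (publish to Docker Hub).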
🚀 Just built and deployed a full CI/CD Pipeline from scratch!

Here's what I implemented:
✅ Launched an AWS EC2 (Ubuntu) instance and configured Security Groups
✅ Installed and configured Jenkins for automated builds
✅ Containerized a Node.js app using Docker
✅ Connected GitHub Webhooks: every git push triggers an automatic build & deploy
✅ Built a 4-stage Jenkinsfile: Checkout → Build → Deploy → Health Check

The best part? One git push is all it takes to go from code to live app, automatically. That's the power of DevOps! 💡

This project gave me hands-on experience with real-world tools used in production environments every day.

GitHub: github.com/arunak-11

#DevOps #AWS #Jenkins #Docker #CICD #CloudComputing #Linux #GitHub #LearningByDoing
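A 4-stage declarative Jenkinsfile along those lines might look roughly like this (the image name, port, and health-check URL are illustrative assumptions, not taken from the project):

```
pipeline {
    agent any
    stages {
        stage('Checkout') {
            // pull the commit that the GitHub webhook fired for
            steps { checkout scm }
        }
        stage('Build') {
            steps { sh 'docker build -t node-app:${BUILD_NUMBER} .' }
        }
        stage('Deploy') {
            steps {
                // replace the running container with the freshly built image
                sh 'docker rm -f node-app || true'
                sh 'docker run -d --name node-app -p 3000:3000 node-app:${BUILD_NUMBER}'
            }
        }
        stage('Health Check') {
            // fail the build if the app does not come up
            steps { sh 'curl --fail --retry 5 --retry-delay 3 http://localhost:3000/' }
        }
    }
}
```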
🚀 Day 6 of my 14-Day Docker Journey | Docker Networking (DevOps Series) 🔥

Continuing my 14-Day Docker Series, today I explored one of the most powerful concepts in containerization:
👉 Docker Networking

🧠 The Problem
Real-world applications don't run as a single container; there's a frontend, a backend, and a database.
💥 Question: how do these containers communicate with each other?

💡 The Solution: Docker Networks
👉 Docker lets containers communicate over networks with built-in internal DNS.
✔ No need to remember IP addresses
✔ Just use container names

🛠️ Hands-on I Performed
✔ Created my own custom network: docker network create mynet
✔ Ran multiple containers on the same network
✔ Connected containers using names (not IPs)
✔ Tested communication: ping mongodb
💥 Successfully connected one container to another 🔥

🧠 Extra Learning (Self-Exploration)
✔ Types of Docker networks (bridge, host, none, overlay, macvlan)
✔ Difference between the default bridge and a custom bridge
✔ Internal vs external communication

🎯 Real DevOps Insight
👉 Docker networking is the foundation of microservices architecture, multi-container applications, and scalable systems.

💬 If you're on a DevOps journey, let's connect and grow together!

#Docker #DevOps #LearningInPublic #CloudComputing #AWS #Networking #Linux #Containers #TechJourney #BuildInPublic
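The hands-on steps above can be sketched as a short command sequence (the second container's image is an illustrative stand-in for a real backend):

```
# create a user-defined bridge network (gets Docker's embedded DNS for free)
docker network create mynet

# run a database and an app container on the same network
docker run -d --name mongodb --network mynet mongo
docker run -d --name backend --network mynet alpine sleep 3600

# containers resolve each other by name - no IP addresses needed
docker exec backend ping -c 2 mongodb
```

Note that name-based resolution only works on user-defined networks; on the default bridge, containers must be reached by IP.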
🚨 Kubernetes Core Architecture: If You Don't Get This, You're Guessing 🚨

Most people say they "know" Kubernetes… but all they really do is run kubectl commands. That's not understanding, that's memorizing shortcuts. If you don't understand what's happening behind the scenes, you're just hoping things work.

Here's the ONE mental model you actually need 👇

🧠 Kubernetes = Brain vs Muscle

🔥 Control Plane (The Brain)
This is where all decisions are made:
• API Server → the front door (everything goes through this)
• Scheduler → decides which node runs your Pod
• Controller Manager → keeps fixing things until desired = actual
• etcd → stores the entire cluster state (your source of truth)
👉 If this goes down, your cluster is basically dead.

⚙️ Worker Nodes (The Muscle)
This is where your applications actually run:
• Kubelet → connects the node to the control plane
• Container Runtime → runs containers (containerd/Docker)
• Pods → the smallest unit where your app lives
👉 If these fail, apps crash, but the cluster still exists.

🌐 Networking (The Part Everyone Ignores… Until It Breaks)
• Pods communicate over the cluster network
• Services expose Pods (internally + externally)
• DNS makes everything discoverable
👉 If you don't get this, debugging will destroy you.

⚠️ Reality Check
If you can't:
• Explain how a Pod is scheduled
• Trace a request → Service → Pod
• Tell what happens when a node dies
then you don't understand Kubernetes. You're just using it blindly.

💡 What Actually Matters (Focus Here)
1. Pod lifecycle
2. Scheduling flow
3. Service routing
4. Node communication
5. Failure handling

🧩 Mental Model
Kubernetes is just a "Desired State Engine."
You say: "I want 3 Pods running."
Kubernetes says: "Done. And I'll keep fixing it if anything breaks."

#kubernetes #devops #cloudcomputing #k8s #docker #container #backenddeveloper #softwareengineering #linux #cloudnative #aws #azure #gcp #microservices #programming #techcontent
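The "Desired State Engine" idea can be demonstrated with a toy reconciliation loop in Python: this is purely illustrative (no real Kubernetes API), in the spirit of what the controller manager does.

```python
import itertools

_pod_ids = itertools.count()  # monotonically increasing pod names

def reconcile(desired: int, actual: list[str]) -> list[str]:
    """One pass of the loop: add or remove pods until actual matches desired."""
    actual = list(actual)
    while len(actual) < desired:          # too few running -> schedule a new pod
        actual.append(f"pod-{next(_pod_ids)}")
    while len(actual) > desired:          # too many -> tear the extras down
        actual.pop()
    return actual

state = reconcile(3, [])        # "I want 3 Pods running"
print(state)                    # ['pod-0', 'pod-1', 'pod-2']

state.remove("pod-1")           # a node dies and takes a pod with it
state = reconcile(3, state)     # the controller converges back to desired state
print(sorted(state))            # ['pod-0', 'pod-2', 'pod-3']
```

The real controller manager does exactly this shape of work in a loop: observe actual state, compare to desired state, act to close the gap.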
🚨 CI Build Success… But Snyk Scan Failed?

Faced an interesting issue today in Azure DevOps 👇
✔️ The Docker image was built successfully
❌ But the Snyk scan failed with:
"SNYK-CLI-0000: Image does not exist for the current platform"

At first, it looked like the image wasn't available… but that wasn't the real problem.

💡 Root Cause:
👉 Platform mismatch (amd64 vs arm64). The image existed, but Snyk couldn't resolve it for the current platform.

✅ Fix:
docker build --platform=linux/amd64 -t <image> .

And in the pipeline:
env:
  DOCKER_DEFAULT_PLATFORM: linux/amd64

🎯 Key Takeaway: before debugging CI failures, always check:
- Platform compatibility
- Image tag correctness
- Registry availability

💭 Small issues like this can consume hours if you don't spot the pattern early. Sharing this so it saves someone else's time 🙌

#DevOps #AzureDevOps #Docker #Snyk #CICD #Debugging #LearningInPublic
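In an Azure Pipelines YAML file, the fix might sit in the build step roughly like this (the registry, image name, and displayName are illustrative assumptions):

```
steps:
  - script: |
      docker build --platform=linux/amd64 -t myregistry.azurecr.io/myapp:$(Build.BuildId) .
    displayName: Build image for amd64
    env:
      # keeps this and subsequent docker-based steps (e.g. the Snyk scan) on the same platform
      DOCKER_DEFAULT_PLATFORM: linux/amd64
```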
🗓️ Day 48/100: 100 Days of AWS & DevOps Challenge

First Kubernetes task. First kubectl command. First Pod.

pod-nginx.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: pod-nginx
  labels:
    app: nginx_app
spec:
  containers:
    - name: nginx-container
      image: nginx:latest

$ kubectl apply -f pod-nginx.yaml
$ kubectl get pod pod-nginx
# pod-nginx   1/1   Running   0   30s

✅ After 47 days of Linux, Git, and Docker, this is the moment the stack shifts: from "run a container on this server" to "tell the cluster what I want, let Kubernetes decide where and how."

Three things worth understanding from Day 1 of K8s:

1. The YAML structure is intentional. Every Kubernetes manifest has the same four top-level fields: apiVersion, kind, metadata, spec. Knowing this pattern means you can read any Kubernetes resource (Pod, Deployment, Service, ConfigMap) without memorizing each one separately. The structure is the same; only kind and spec change.

2. Labels are not just documentation. app: nginx_app looks like a tag, but it's actually the glue that holds Kubernetes together. Services use label selectors to route traffic to Pods. Deployments use them to track which Pods they own. kubectl get pods -l app=nginx_app filters by label. Without correct labels, nothing connects to anything. Labels are operational infrastructure.

3. kubectl describe pod is your best debugging tool. kubectl get pod shows status; kubectl describe pod shows the Events section, the full timeline of what Kubernetes did. When a Pod is stuck in Pending or ContainerCreating, the Events section tells you exactly why: image pull failed, insufficient resources, node selector mismatch. Always describe before guessing.

48 days in. Cloud-native infrastructure begins. ☸️

Full K8s concepts + Q&A on GitHub 👇
https://lnkd.in/gTZ-hWAf

#DevOps #Kubernetes #K8s #Containers #CloudNative #100DaysOfDevOps #KodeKloud #LearningInPublic #CloudEngineering #AWS #kubectl #CNCF
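To see labels acting as glue, a Service routing traffic to a Pod labeled app: nginx_app could be sketched like this (the Service name and ports are illustrative):

```
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx_app      # must match the Pod's label, or the Service has no endpoints
  ports:
    - port: 80          # port the Service listens on
      targetPort: 80    # port the container serves on
```

After `kubectl apply`, `kubectl get endpoints nginx-svc` should list the Pod's IP; if it's empty, the selector and the Pod's labels don't match.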
🚀 Day 9 – Terraform Workspaces (#15DaysOfTerraform)

🤯 Managing multiple environments (Dev, Test, Prod) with the same code? Editing values again and again? ❌
Let's solve this with Workspaces ⚙️🔥

📘 What are Workspaces?
Workspaces allow you to manage multiple environments using the same Terraform code.
Think of it like:
🧩 Separate environments
🔁 Same code, different configs
📂 Isolated state files

🛠 Basic Commands
terraform workspace list
terraform workspace new dev
terraform workspace select dev

🧩 How It Works
👉 Each workspace has its own state file
👉 Same code → different environments (e.g. dev, staging, prod)

📌 Workspaces vs Separate Backends
👉 Workspaces: same backend, different state files. Best for simple environments (dev/stage/prod).
👉 Separate Backends: completely isolated state. Better for security and enterprise use cases.

💡 Why This Matters
✅ No duplicate code
✅ Easy environment management
✅ Clean & organized infrastructure

🎯 Key Takeaway
Don't create separate code for each environment ❌
Use Workspaces for environment isolation ✅

Next 👉 Day 10 – Terraform Provisioners 🔥

#Terraform #15DaysOfTerraform #Cloud #LearningInPublic 🚀 #Docker #DevOps #25DaysOfDocker #Volumes #Containerization #CloudComputing #Containers #Microservices #Ansible #VirtualMachines #SoftwareEngineering #TechLearning #CloudNative #SRE #DevOpsTools #ITInfrastructure #Pods #DeveloperTools #Automation #Grafana #Kubernetes #CloudDeployment #K8S #TechnologyTrends #Prometheus #DigitalTransformation #Linux #Maven #Programming #InfrastructureAsCode #flm #frontlinemedia #frontlinesedutech
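Inside the configuration, the current workspace name is available as terraform.workspace, so one file can vary per environment. A sketch (the resource, variable, and instance sizes are illustrative assumptions):

```
locals {
  # per-environment settings keyed by workspace name
  instance_type = {
    dev     = "t3.micro"
    staging = "t3.small"
    prod    = "t3.large"
  }
}

resource "aws_instance" "app" {
  ami           = var.ami_id
  instance_type = local.instance_type[terraform.workspace]

  tags = {
    Environment = terraform.workspace   # e.g. "dev" after `terraform workspace select dev`
  }
}
```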
Day 69 (Part 3) of My DevOps Journey 🚀

Today I worked on deploying a real-world static website using Docker and AWS EC2 🔥

Here's what I did, step by step:
✅ Downloaded a website template from an online source
✅ Transferred the files from my local system to EC2 using SCP
✅ Extracted (unzipped) the project files on the server
✅ Organized the project structure (HTML, CSS, JS)
✅ Created an Nginx Docker container
✅ Copied the website files into /usr/share/nginx/html/ inside the container
✅ Successfully ran the website using Docker
✅ Finally pushed my Docker image to my Docker Hub profile

💡 Key Learning: I understood how to deploy a static website inside a container and make it accessible using Nginx. This also strengthened my understanding of:
- File transfer (SCP)
- Linux file management
- Docker image creation & containerization
- A real-world deployment workflow

Part 2 included pushing my image to Docker Hub 🚀

Next Step → Automating this whole process using CI/CD 🔥

#DevOps #Docker #AWS #Nginx #CloudComputing #LearningInPublic #BeginnerToDevOps #TechJourney #SCP #Linux #DockerHub
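An alternative to copying files into a running container is baking them into the image with a small Dockerfile, which makes the image self-contained and worth pushing to Docker Hub (the directory name is an illustrative assumption):

```
# Build a self-contained image of the static site
FROM nginx:latest
# replace nginx's default content with the site's HTML/CSS/JS
COPY ./website/ /usr/share/nginx/html/
```

Then `docker build -t <dockerhub-user>/static-site .`, `docker run -d -p 80:80 <dockerhub-user>/static-site`, and `docker push <dockerhub-user>/static-site` reproduce the whole workflow with one artifact.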
🚀 Pulumi Installation & When to Use Pulumi vs Terraform vs OpenTofu

Infrastructure as Code is evolving fast. While Terraform remains widely used, tools like Pulumi and OpenTofu are changing how we define and manage infrastructure. Here's a quick guide covering installation + when to use each.

🔧 Pulumi Installation (Windows / Mac / Linux)

Option 1: using the CLI script
curl -fsSL https://get.pulumi.com | sh

Option 2: Windows (PowerShell)
choco install pulumi

Verify installation: pulumi version
Login: pulumi login
Create a project: pulumi new aws-typescript
Deploy: pulumi up

⚔️ Pulumi vs Terraform vs OpenTofu

Pulumi
• Uses real programming languages (Python, TypeScript, Go, C#, Java)
• Best for developers & complex logic
• Supports loops, conditions, and functions easily
• Strong testing & modular design
• State: Pulumi Cloud / self-managed

Use Pulumi when:
✅ You want to use Python/TypeScript instead of HCL
✅ You need complex logic (loops, conditions, API calls)
✅ Dev teams own the infrastructure
✅ You want better reusability & testing

Terraform
• Uses the HCL language
• Industry standard
• Huge module ecosystem
• Mature & stable
• Large community support

Use Terraform when:
✅ An enterprise standard is required
✅ The team is already using Terraform
✅ You need stable, predictable IaC
✅ You need the large module ecosystem

OpenTofu
• An open-source fork of Terraform
• Fully community-driven
• No licensing restrictions
• Compatible with Terraform configs
• Growing ecosystem

Use OpenTofu when:
✅ You want Terraform compatibility, fully open source
✅ You want to avoid vendor/licensing changes
✅ You want to migrate from Terraform easily
✅ You need long-term open governance

📊 Quick Comparison
Language: Pulumi → programming languages | Terraform → HCL | OpenTofu → HCL
Best for: Pulumi → developers & complex logic | Terraform → enterprise standard | OpenTofu → open-source Terraform alternative
Learning curve: Pulumi → easy for developers | Terraform → easy for ops teams | OpenTofu → same as Terraform

🧠 When to choose what?
Choose Pulumi → dev-heavy teams, advanced automation
Choose Terraform → enterprise-standard IaC
Choose OpenTofu → open-source Terraform replacement

#Pulumi #Terraform #OpenTofu #IaC #DevOps #Cloud #Azure #AWS #InfrastructureAsCode #PlatformEngineering #CloudAutomation
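To give a feel for "real programming language" IaC, a minimal Pulumi program in Python might look roughly like this (resource names are illustrative; it requires the pulumi and pulumi-aws packages and runs via `pulumi up`, not as a standalone script):

```
import pulumi
import pulumi_aws as aws

# Loops and conditions are plain Python - no special IaC-language constructs needed
buckets = []
for env in ["dev", "staging", "prod"]:
    bucket = aws.s3.Bucket(f"app-assets-{env}",
                           tags={"Environment": env})
    buckets.append(bucket)

# Expose the bucket names as stack outputs
pulumi.export("bucket_names", [b.id for b in buckets])
```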
I just wrapped up an Introduction to Kubernetes course from The Linux Foundation and decided to reinforce it with a hands-on project. Instead of stopping at theory, I built and deployed a simple web application end-to-end using Kubernetes.

Here's what I implemented:
🔹 Containerized my app using Docker
🔹 Deployed it on a local Kubernetes cluster with Minikube
🔹 Created a Deployment to manage replicas and enable self-healing
🔹 Exposed the application using a Service (NodePort)
🔹 Externalized configuration using ConfigMaps and Secrets
🔹 Implemented liveness and readiness probes for reliability
🔹 Practiced scaling and rolling updates

What stood out to me was how Kubernetes shifts you from manually running containers to defining a desired state and letting the system enforce it. Watching Pods automatically restart and scale based on configuration made that concept very real.

GitHub repo: https://lnkd.in/dTPTKjSV

Next, I'm continuing with Kubernetes and Cloud Native Essentials to deepen my understanding of cloud-native systems and how modern applications are designed and operated.

#Kubernetes #DevOps #CloudComputing #Docker #LearningJourney
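A Deployment along those lines, combining replicas with liveness and readiness probes, could be sketched like this (names, image, port, and probe paths are illustrative assumptions, not from the repo):

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                  # desired state; Kubernetes keeps 3 Pods running
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: myuser/web-app:1.0
          ports:
            - containerPort: 8080
          livenessProbe:       # restart the container if this starts failing
            httpGet:
              path: /healthz
              port: 8080
          readinessProbe:      # keep the Pod out of Service endpoints until ready
            httpGet:
              path: /ready
              port: 8080
```

`kubectl scale deployment web-app --replicas=5` and `kubectl set image deployment/web-app web=myuser/web-app:1.1` then exercise the scaling and rolling-update behavior described above.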