Day 4/100: Installed the Architect's Toolkit 🛠️

You can't build a high-speed pipeline with a manual setup. Today was all about Environment as Code. Instead of manually downloading installers, I used Chocolatey to automate my entire workstation setup. One command, and my environment was ready to go.

The DevOps Starter Pack I configured today:
🏗️ Virtualization: VirtualBox & Vagrant (my local sandbox for testing)
📦 Build & CLI: Maven & AWS CLI (connecting local code to the cloud)
☕ Runtime: Amazon Corretto 17 (OpenJDK)
📝 IDEs: IntelliJ IDEA, VS Code & Sublime Text 3
🌿 Version Control: Git

The 'Aha!' Moment: Using Vagrant means I no longer have to worry about "cluttering" my main OS. I can spin up a clean Linux server in seconds, test my scripts, and destroy it when I'm done.

Now that the lab is ready, it's time to dive into the heart of the OS.

Up Next: Signing up for accounts such as AWS.

#100DaysOfDevOps #100DaysOfDevOpsChallenge #DevOps #Automation #Chocolatey #Vagrant #InfrastructureAsCode #LearningInPublic
Automating DevOps with Chocolatey & Vagrant
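For anyone curious what that one-command setup can look like, here is a minimal sketch. The Chocolatey package IDs and the Vagrant box name below are assumptions based on common community names, so verify them with choco search / Vagrant Cloud before running:

choco install -y git vscode vagrant virtualbox maven awscli sublimetext3 intellijidea-community corretto17jdk   # run from an elevated prompt; package IDs are assumptions
vagrant init ubuntu/jammy64   # box name is illustrative; pick any box you like
vagrant up                    # spin up a throwaway Linux VM
vagrant ssh                   # test your scripts inside it
vagrant destroy -f            # tear it down when you're done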
More Relevant Posts
The daily productivity tip for everyone doing any kind of infra/DevOps work (or just heavily relying on agentic tools to write all sorts of multiline commands for CLIs, be it aws, gcloud, kubectl, helm or whatever) and using Windows as their daily driver: it takes just a few lines to fix the "everyone uses bash, so \ line-end escapes are the only thing we can think of" problem. Link to gist in comments.

If you answered "yes, but no": you really should try PowerShell. The "everything is a PSObject" mindset is quite literally 20+ years ahead of "strings and exit codes should be enough for everybody", AND the Windows ecosystem has already copied all the nice things from OSS: winget is like apt + conda on steroids, and PSGallery does the same thing for your shell scripts as PyPI and npm do for other code. But now I digress...
🚨 CI Build Success… But Snyk Scan Failed?

Faced an interesting issue today in Azure DevOps 👇
✔️ Docker image was built successfully
❌ But the Snyk scan failed with: "SNYK-CLI-0000: Image does not exist for the current platform"

At first, it looked like the image wasn't available… but that wasn't the real problem.

💡 Root Cause:
👉 Platform mismatch (amd64 vs arm64). The image existed, but Snyk couldn't resolve it for the current platform.

✅ Fix:
docker build --platform=linux/amd64 -t <image> .

And in the pipeline:
env:
  DOCKER_DEFAULT_PLATFORM: linux/amd64

🎯 Key Takeaway: Before debugging CI failures, always check:
- Platform compatibility
- Image tag correctness
- Registry availability

💭 Small issues like this can consume hours if you don't spot the pattern early. Sharing this so it saves someone else's time 🙌

#DevOps #AzureDevOps #Docker #Snyk #CICD #Debugging #LearningInPublic
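A minimal end-to-end sketch of the fix as shell commands. The image name is illustrative, and the --platform option on the Snyk CLI is available in recent versions, so check the version your agent runs:

export DOCKER_DEFAULT_PLATFORM=linux/amd64            # default for every docker call in the job
docker build --platform=linux/amd64 -t myapp:ci .     # build explicitly for amd64
snyk container test myapp:ci --platform=linux/amd64   # scan the same platform you built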
🚀 From 1.5 GB → 50 MB Docker Image (95% Reduction) 🐳

I recently reduced my Docker image size from 1.5 GB to just 50 MB, a 95% improvement. And honestly? This wasn't about advanced tricks… it was about doing the basics consistently.

⚠️ Why this matters: oversized images mean
❌ Slower deployments
❌ Higher storage costs
❌ Bigger attack surface
👉 Lean containers aren't optional in DevOps. They're a discipline.

🔧 7 Practices I Follow in Every Build:
1️⃣ Use minimal base images. Alpine or slim variants cut hundreds of MB instantly.
2️⃣ Multi-stage builds are a must-have. Build tools stay in one stage, the final image stays clean (see the sketch below).
3️⃣ Install only what's needed. Every extra package adds unnecessary risk and size.
4️⃣ Clean the cache in the SAME layer. Otherwise Docker still keeps the junk.
5️⃣ Chain RUN commands. Fewer layers = smaller images.
6️⃣ Use a .dockerignore file. Keep out node_modules, .git, logs, env files.
7️⃣ Never run as root. Simple step → big security win.

#Docker #DevOps #CloudEngineering #AWS #Containers #Linux #DevOpsJourney #90DaysOfDevOps
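Since multi-stage builds do most of the heavy lifting, here is a minimal sketch of practices 1, 2 and 7 together. It is not the author's actual Dockerfile: the Node.js app, base images and paths are illustrative, and it is written as a shell heredoc so the whole thing pastes into a terminal:

cat > Dockerfile <<'EOF'
# build stage: compilers, dev dependencies and build tools stay here
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# final stage: minimal base image, production deps only, non-root user
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/dist ./dist
USER node
CMD ["node", "dist/index.js"]
EOF
docker build -t myapp:slim .
docker images myapp:slim    # compare the size against the old single-stage image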
Day 6 of my DevOps roadmap, and today's topic hit different 🕐 [Writing this at 3:47 PM IST]

Cron Jobs & Scheduling on Linux

Here's what I built and learned today (as of 3:36 PM IST, Apr 18 2026):
→ Understood the crontab -e syntax: minute, hour, day, month, weekday
→ Scheduled a script to run every minute and watched it log live with tail -f
→ Faked per-second logging using a for loop + sleep inside a cron job
→ Built a real Daily System Health Reporter that:
 • Logs disk, memory, uptime & the top 3 CPU processes
 • Auto-creates a dated log file (health-2026-04-18.log) every day
 • Redirects all output using exec > "$log" 2>&1
→ Scheduled it to run every weekday at 8 AM (sketch below)
→ Learned date +%F for clean YYYY-MM-DD formatted filenames

The moment tail -f showed entries ticking every second, that's when it clicked. Cron isn't just scheduling. It's your server working for you while you sleep.

#Linux #DevOps #BashScripting #CronJobs #LearningInPublic #100DaysOfDevOps
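A minimal sketch of what that reporter and its schedule can look like; the log directory and script path are illustrative assumptions, not the exact ones from my setup:

#!/usr/bin/env bash
# /usr/local/bin/health-report.sh (path is illustrative)
log="/var/log/health/health-$(date +%F).log"   # date +%F gives YYYY-MM-DD
mkdir -p "$(dirname "$log")"
exec > "$log" 2>&1                             # everything below lands in the dated log
echo "=== Health report: $(date) ==="
echo "--- Disk ---";   df -h
echo "--- Memory ---"; free -h
echo "--- Uptime ---"; uptime
echo "--- Top 3 CPU processes ---"
ps -eo pid,comm,%cpu --sort=-%cpu | head -n 4  # header line + top 3

And the crontab -e entry, every weekday (Mon-Fri) at 8 AM:

0 8 * * 1-5 /usr/local/bin/health-report.sh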
🚀 Day 9 – Terraform Workspaces (#15DaysOfTerraform)

🤯 Managing multiple environments (Dev, Test, Prod) with the same code? Editing values again and again? ❌
Let's solve this with Workspaces ⚙️🔥

📘 What are Workspaces?
Workspaces allow you to manage multiple environments using the same Terraform code.
Think of it like:
🧩 Separate environments
🔁 Same code, different configs
📂 Isolated state files

🛠 Basic Commands (Bash):
terraform workspace list
terraform workspace new dev
terraform workspace select dev

🧩 How It Works
👉 Each workspace has its own state file
👉 Same code → different environments
Example: dev, staging, prod (see the sketch below)

📌 Workspaces vs Separate Backends
👉 Workspaces: same backend, different state files. Best for simple envs (dev/stage/prod).
👉 Separate Backends: completely isolated state. Better for security & enterprise use cases.

💡 Why This Matters
✅ No duplicate code
✅ Easy environment management
✅ Clean & organized infrastructure

🎯 Key Takeaway
Don't create separate code for each env ❌
Use Workspaces for environment isolation ✅

Next 👉 Day 10 – Terraform Provisioners 🔥

#Terraform #15DaysOfTerraform #Cloud #LearningInPublic 🚀 #Docker #DevOps #25DaysOfDocker #Volumes #Containerization #CloudComputing #Containers #Microservices #Ansible #VirtualMachines #SoftwareEngineering #TechLearning #CloudNative #SRE #DevOpsTools #ITInfrastructure #Pods #DeveloperTools #Automation #Grafana #Kubernetes #CloudDeployment #K8S #TechnologyTrends #Prometheus #DigitalTransformation #Linux #Maven #Programming #InfrastructureAsCode #flm #frontlinemedia #frontlinesedutech
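A short sketch of the workflow with illustrative tfvars file names, showing where each workspace's state ends up when you use the default local backend:

terraform workspace new dev
terraform workspace new prod
terraform workspace select dev
terraform workspace show                     # prints: dev
terraform plan -var-file="dev.tfvars"        # same code, dev-specific values
terraform apply -var-file="dev.tfvars"
# With the local backend, non-default workspace state is isolated under:
#   terraform.tfstate.d/dev/terraform.tfstate
#   terraform.tfstate.d/prod/terraform.tfstate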
🚀 Day 5 – Terraform Variables & Outputs (#15DaysOfTerraform)

🤯 Writing the same values again and again? Hardcoding instance type, region, names? ❌
Let's make Terraform dynamic & reusable ⚙️🔥

📘 What are Variables?
Variables allow you to store and reuse values in Terraform.
Think of it like:
📦 Reusable inputs
🔁 Dynamic configuration
⚙️ Flexible infrastructure

🧩 Example (Variable, HCL):
variable "instance_type" {
  default = "t2.micro"
}
👉 Use it in a resource:
instance_type = var.instance_type

📤 What are Outputs?
Outputs display important values after deployment.
Think of it like:
📢 Final result
🔗 Share resource info

🧩 Example (Output, HCL):
output "instance_ip" {
  value = aws_instance.example.public_ip
}

💡 Why This Matters
✅ Reusable code
✅ Easy to manage changes
✅ Clean & scalable configs

🎯 Key Takeaway
Don't hardcode values ❌
Use Variables & Outputs for flexibility

🔥 Next 👉 Day 6 – Terraform State (tfstate) 🔥

#Terraform #15DaysOfTerraform #Cloud #LearningInPublic 🚀 #Docker #DevOps #Containerization #Jenkins #CloudComputing #Containers #Pods #Microservices #VirtualMachines #SoftwareEngineering #TechLearning #CloudNative #SRE #DevOpsTools #ITInfrastructure #DeveloperTools #K8s #Automation #Kubernetes #Ansible #CloudDeployment #TechnologyTrends #DigitalTransformation #Linux #Maven #Programming #InfrastructureAsCode #flm #frontlinemedia #frontlinesedutech
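To tie the two together from the command line, a minimal sketch (the variable and output names follow the examples above):

terraform apply -var="instance_type=t3.micro"   # override the default at apply time
terraform output                                # list all outputs after the apply
terraform output -raw instance_ip               # raw value, handy for scripting (e.g. ssh)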
🚀 Built & Deployed Multi-Container Applications on Azure using Terraform & Docker

I recently worked on a hands-on DevOps project where I implemented an end-to-end workflow, from infrastructure provisioning to application deployment.

🔧 What I did:
- Provisioned an Ubuntu Virtual Machine on Azure using Terraform (for_each + reusable modules)
- Connected to the VM using VS Code Remote SSH
- Installed and configured Docker
- Cloned application source code (StreamFlix & Starbucks UI clones)
- Pulled the Nginx image from Docker Hub
- Deployed multiple containers using volume mapping

🌐 Key Implementation (sketch below):
- Hosted the StreamFlix clone in one container
- Hosted the Starbucks clone in another container
- Used Docker volume mapping to serve custom HTML content
- Exposed the applications via different ports

📁 Architecture Overview: Azure VM → Docker Engine → Multiple Containers → Nginx → Custom Web Apps

💡 What I learned:
- Real-world use of Terraform modules and for_each
- Containerization and isolation using Docker
- Volume mapping (host → container)
- Running and managing multiple containers on a single VM
- Basic container networking concepts

📸 Attaching screenshots of the setup and running applications 👇

#DevOps #Terraform #Docker #Azure #CloudComputing #Containerization #Linux #VSCode #LearningInPublic #IaC #DockerContainers
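A minimal sketch of the container layout on the VM; the host paths, ports and container names are illustrative, not the project's exact values:

docker run -d --name streamflix -p 8080:80 -v /home/azureuser/streamflix:/usr/share/nginx/html:ro nginx
docker run -d --name starbucks -p 8081:80 -v /home/azureuser/starbucks:/usr/share/nginx/html:ro nginx
docker ps --format 'table {{.Names}}\t{{.Ports}}'   # one VM, two isolated web apps on different ports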
🚀 Built a Kubernetes Cluster from Scratch & Exposed the Dashboard via a Custom Domain

I recently completed a full hands-on Kubernetes lab, starting from virtual machines all the way to secure, domain-based access to the Kubernetes Dashboard, following real-world DevOps practices.

🧱 What I built
- Platform: VMware (on-prem lab)
- OS: Ubuntu Server 24.04
- Cluster: kubeadm (1 Control Plane + 2 Worker Nodes)
- Runtime: containerd
- Networking: Flannel CNI
- Ingress: NGINX Ingress Controller

⚙️ Key implementation details (sketch below)
✅ Prepared Linux nodes (swap off, kernel modules, sysctl)
✅ Configured containerd with systemd cgroups
✅ Bootstrapped the cluster using kubeadm
✅ Enabled pod-to-pod networking with Flannel
✅ Secured the Kubernetes Dashboard using RBAC + token-based login

🌐 Real-world style Dashboard access
Instead of relying only on kubectl proxy, I exposed the Dashboard using:
- NGINX Ingress
- Host-based routing
- Custom domain: https://lnkd.in/dhEmC3bR
This mirrors how Kubernetes dashboards and services are accessed in enterprise environments.

📊 What I can do from the Dashboard
- Monitor nodes & workloads
- View pod logs and events
- Scale deployments
- Debug failures visually
- Observe CPU & memory usage

💡 Key takeaways
- Kubernetes requires layering: OS → Runtime → Cluster → Network → Access
- Ingress is the correct way to expose services by name
- The Dashboard must always be protected with RBAC
- Hands-on labs teach far more than theory alone

🚀 Next steps: TLS with cert-manager, PVC/NFS storage, monitoring & RBAC hardening.

#Kubernetes #DevOps #CloudNative #Linux #Ingress #RBAC #Containers #LearningByDoing
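For anyone wanting to reproduce the node prep and bootstrap, a condensed sketch of the usual kubeadm flow rather than my exact lab commands. Package installation and the containerd SystemdCgroup=true change are omitted for brevity, the pod CIDR is Flannel's default, and the Flannel manifest URL may move, so treat both as assumptions:

sudo swapoff -a                                   # kubelet refuses to run with swap on
echo -e "overlay\nbr_netfilter" | sudo tee /etc/modules-load.d/k8s.conf
sudo modprobe overlay && sudo modprobe br_netfilter
cat <<'EOF' | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system
# control plane node only: bootstrap, then install the Flannel CNI
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml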
𝗗𝗮𝘆 𝟯𝟳 𝗼𝗳 𝗺𝘆 𝗗𝗲𝘃𝗢𝗽𝘀 𝗝𝗼𝘂𝗿𝗻𝗲𝘆 💻
Managing data inside containers: copied a secure file from the host to a Docker container without altering it 🔐📦

𝗧𝗮𝘀𝗸: Copy File to Docker Container

𝗪𝗵𝗮𝘁 𝗜 𝗹𝗲𝗮𝗿𝗻𝗲𝗱 𝘁𝗼𝗱𝗮𝘆:
• How to transfer files between the host and a Docker container
• Usage of the `docker cp` command
• Difference between the host filesystem and the container filesystem
• Importance of maintaining file integrity during transfer
• Real-world use case of handling secure data in containers

𝗪𝗵𝗮𝘁 𝗜 𝗯𝘂𝗶𝗹𝘁 / 𝗽𝗿𝗮𝗰𝘁𝗶𝗰𝗲𝗱 (sketch below):
• Connected to Application Server 3 (`stapp03`)
• Verified the running container `ubuntu_latest`
• Copied the encrypted file using `docker cp /tmp/nautilus.txt.gpg ubuntu_latest:/opt/`
• Verified the file inside the container using `docker exec`
• Ensured the file remained unchanged during transfer

𝗖𝗵𝗮𝗹𝗹𝗲𝗻𝗴𝗲𝘀:
• Understanding the correct syntax for `docker cp`
• Ensuring the correct container path
• Verifying file presence inside the container

𝗙𝗶𝘅 / 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴:
• Learned how Docker handles file transfers
• Understood how to validate operations inside containers
• Gained clarity on managing sensitive data in containerized environments
• Realized how simple yet powerful Docker commands can be

𝗞𝗲𝘆 𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆:
Containers aren't just about running apps; managing data securely inside them is equally important. This felt like handling real-world secure data movement in DevOps 🚀

How do you usually handle file transfers in containers: `docker cp`, volumes, or bind mounts?

#Day37 #DevOps #Docker #Containerization #Linux #Automation #CloudComputing #AWS #DevOpsJourney #LearningInPublic #100DaysOfDevOps
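A minimal sketch of the transfer plus an integrity check; the ssh user is a placeholder, and comparing checksums is just one illustrative way to prove the file wasn't altered:

ssh <user>@stapp03                                      # Application Server 3
docker ps --filter name=ubuntu_latest                   # confirm the container is up
md5sum /tmp/nautilus.txt.gpg                            # checksum on the host
docker cp /tmp/nautilus.txt.gpg ubuntu_latest:/opt/     # host → container
docker exec ubuntu_latest md5sum /opt/nautilus.txt.gpg  # same checksum = integrity preserved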
Before working my way into DevOps tools, I focused on understanding what actually happens underneath. Deployments aren't just a button click.
- How applications are built and deployed
- How they're hosted (IIS / servers)
- How APIs behave when something breaks
- How databases respond under load
- How logs reveal the real issue

Working without a full DevOps toolchain pushed me to learn these fundamentals deeply and to understand the core of how systems actually run. Now when I look at CI/CD or Docker, I don't just see tools; I understand the problems they solve.

Still building, step by step.

#DevOps #Linux #SQL #Backend #SoftwareEngineering #Learning #TechJourney