Day 5 of #30DaysOfDevOps — Docker Basics

Docker is one of the most important tools in DevOps. It ensures your app runs the same way on your laptop, in staging, and in production. No more "it works on my machine."

1. Why Docker?
Docker packages your app and everything it needs into a single container that runs consistently anywhere.

Containers vs VMs:
- VMs include a full OS — heavy, slow to start
- Containers share the host OS kernel — lightweight, start in seconds

2. Core Concepts
Image — read-only template with your app and dependencies
Container — a running instance of an image
Dockerfile — instructions to build an image
Docker Hub — public registry to store and share images

3. Essential Commands
Run a container:
docker run -d -p 8080:80 nginx

List running containers:
docker ps

Stop and remove:
docker stop 3f2a1b
docker rm 3f2a1b

Shell into a running container:
docker exec -it 3f2a1b bash

4. Writing a Dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 4000
CMD ["node", "server.js"]

Build and run:
docker build -t my-app:v1.0 .
docker run -d -p 4000:4000 my-app:v1.0

5. Push to Docker Hub
docker tag my-app:v1.0 yourname/my-app:v1.0
docker login
docker push yourname/my-app:v1.0

6. Optimization Tips
Use alpine images — often several times smaller than full OS images
Add .dockerignore to exclude node_modules and .git
Copy package files before source code to maximize layer caching

7. Challenges for Today
1. Install Docker and verify with: docker run hello-world
2. Run an nginx container on port 8080 and open it in your browser.
3. Write a Dockerfile for a Python or Node.js app and build it.
4. Tag your image and push it to Docker Hub.
5. Shell into a running container and explore the filesystem.
6. Add a .dockerignore and observe the build context size difference.

Drop your Docker Hub image link in the comments.

#DevOps #Docker #Containers #Dockerfile #30DaysOfDevOps #LearningInPublic #DevOpsEngineer #CloudComputing
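A minimal .dockerignore makes the optimization tips concrete; this is a sketch for the Node.js image in section 4, with typical example entries (adjust for your project):

```
node_modules
.git
*.log
.env
```

Rebuild after adding it and compare the "transferring context" size reported by docker build; excluding node_modules alone usually shrinks the context dramatically.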
🐳 Top Docker Commands Every Developer Should Know

If you're working with Docker, mastering a few core commands can make your workflow faster, cleaner, and more efficient. Here are some essential Docker commands every developer should know:

🔹 1. Check Docker Version
docker --version

🔹 2. Pull an Image from Docker Hub
docker pull nginx

🔹 3. List Images
docker images

🔹 4. Run a Container
docker run -d -p 3000:3000 node-app

🔹 5. List Running Containers
docker ps

🔹 6. List All Containers (including stopped)
docker ps -a

🔹 7. Stop a Container
docker stop <container_id>

🔹 8. Remove a Container
docker rm <container_id>

🔹 9. Remove an Image
docker rmi <image_id>

🔹 10. View Logs
docker logs <container_id>

🔹 11. Execute Command Inside Container
docker exec -it <container_id> bash

🔹 12. Build an Image
docker build -t my-app .

🔹 13. Docker Compose Up
docker compose up -d

🔹 14. Docker Compose Down
docker compose down

(Older installs use the standalone docker-compose binary; the Compose v2 plugin is invoked as docker compose.)

💡 Pro Tip
You don't need to memorize everything — but knowing these commands can cover 80% of real-world Docker use cases. Mastering the Docker CLI is a big step toward becoming a DevOps-ready developer 🚀

#Docker #DevOps #Containerization #WebDevelopment #CloudComputing #CICD #SoftwareEngineering #BackendDevelopment #TechSkills #Programming
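Commands 13 and 14 assume a compose file exists. A minimal hypothetical compose.yaml for the node-app from command 4 might look like this (service name and port are illustrative):

```yaml
services:
  node-app:
    build: .            # build from the Dockerfile in this directory
    ports:
      - "3000:3000"     # host:container, matching the docker run example
    restart: unless-stopped
```

With this in place, docker compose up -d builds and starts the service, and docker compose down stops and removes it.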
📝 Claude Code Source Leaked via npm Source Maps: Lessons for Every DevOps Team

Anthropic accidentally shipped source maps in their npm package, exposing 512,000 lines of Claude Code source. Here is what went wrong and how to prevent it in your own CI/CD pipeline.

Read it here: https://lnkd.in/dUB_8YCy

#DevOps #Learning
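One way to catch this class of mistake is a CI guard that fails the build when source maps are present in the publish output. A minimal sketch, assuming the build emits to ./dist (the directory name and simulated file are illustrative, not from the article):

```shell
set -e
# Simulate a build output directory (stand-in for your real ./dist)
dist=$(mktemp -d)/dist
mkdir -p "$dist"
printf 'console.log("hi")\n' > "$dist/index.js"

# The actual guard: fail if any .map files would ship
if find "$dist" -name '*.map' | grep -q .; then
  echo "ERROR: source maps found in publish output" >&2
  exit 1
fi
echo "OK: no .map files under $dist"
```

Run it as a step before npm publish; pairing it with npm pack --dry-run also shows exactly which files the package would include.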
Dockerfiles (The Ultimate To-Do List)

So far, we've talked about the "Shipping Container" (Docker) and the "Recipe" (The Image). But how do we actually write that recipe? Enter the Dockerfile. 📄

If you're a DevOps pro, you know that manual setups are the enemy. If you have to tell a teammate, "First install this, then click that, then change this setting," something will go wrong.

The Analogy: The Ultimate To-Do List 📋
Think of a Dockerfile as a very strict, step-by-step "To-Do List" for a computer. Imagine you are hiring a chef to bake that cake we talked about yesterday. Instead of standing over them, you leave a note on the counter:

FROM: Start with this specific brand of flour (The Base OS).
COPY: Take these eggs from the fridge and put them on the table (Your Code).
RUN: Mix the ingredients for 5 minutes (Installing Dependencies).
CMD: Turn on the oven and start baking (Starting the App).

Why is this a game-changer?
No Human Error: The computer follows the list exactly the same way, every single time.
Version Control: Since it's just a text file, I can put it in GitHub. If the app breaks, I can look at the "list" and see exactly what changed.
Speed: I can hand this list to a cloud server, and it can "build" my environment in seconds.

In DevOps, we don't build servers; we write the instructions so the servers can build themselves. 🛠️

If you're learning Docker, what was the first "instruction" that tripped you up? For me, it was definitely understanding the difference between RUN and CMD!

I've actually put together a deep-dive guide on my GitHub! 🚀 I wanted to go beyond the basics, so I've documented:
✅ The core instructions (FROM, COPY, RUN, CMD) and more
✅ Common commands you'll use every day
👉 https://lnkd.in/gapUnQhU

#DevOps #Docker #InfrastructureAsCode #TechCommunity #LearningInPublic #CloudNative #Automation
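The four instructions in the analogy map onto a minimal real Dockerfile. Here is a sketch for a hypothetical Python app (file names are illustrative):

```dockerfile
FROM python:3.12-slim          # the base "flour"
WORKDIR /app
COPY requirements.txt .        # bring in the "ingredients" list
RUN pip install --no-cache-dir -r requirements.txt   # mix: install deps at build time
COPY . .                       # your code
CMD ["python", "app.py"]       # start "baking" when the container runs
```

On RUN vs CMD: RUN executes while the image is being built and bakes its result into a layer; CMD only records the command that runs when a container starts, which is why a Dockerfile has many RUNs but effectively one CMD.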
Building CI/CD with GitHub Actions: Why It's a Game Changer 🚀

Stop me if this sounds familiar: You spend hours coding a new feature, push it to production, and—boom—everything breaks because of a small environment mismatch or a forgotten test. We've all been there.

That's why a solid CI/CD pipeline isn't just a "nice-to-have" for DevOps enthusiasts anymore; it's the backbone of professional software engineering. Lately, I've been leaning heavily into GitHub Actions, and here's why it's winning:

Why GitHub Actions? 🛠️
• Native Integration: No need to manage external servers or third-party plugins. Your automation lives exactly where your code does.
• YAML-Based Workflow: Defining a pipeline is as simple as writing a .yml file. It's version-controlled, readable, and easy to tweak.
• The Marketplace: From deploying to AWS/Azure to running specialized security scans, someone has likely already built an "Action" for it.

The Real Value 💎
It's not just about "deploying fast." It's about:
1. Confidence: Running your test suites on every pull request means catching bugs before they hit the main branch.
2. Consistency: Eliminating "it works on my machine" syndrome.
3. Speed: Automating the repetitive stuff so we can focus on building what actually matters.

Whether you're working on a small React project or a massive Java microservice architecture, automating your workflow is the best gift you can give your future self.

What's your go-to tool for CI/CD? Are you Team GitHub Actions, or do you prefer Jenkins/GitLab? Let's chat in the comments! 👇

#SoftwareEngineering #DevOps #GitHubActions #CICD #Automation #WebDevelopment #CodingLife
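A minimal sketch of such a workflow, running tests on every pull request and push to main (the Node setup and commands are assumptions; adapt to your stack):

```yaml
# .github/workflows/ci.yml
name: CI
on:
  pull_request:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test
```

Because the file lives in the repo, pipeline changes go through the same pull-request review as code changes.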
🚀 Just finished the Docker course on Boot.dev! 🚀

I'm excited to share that I've learned the fundamentals of Docker—a key technology in modern DevOps and CI/CD pipelines. Docker makes it simple and fast to deploy new versions of code by packaging applications and their dependencies into preconfigured environments. This not only speeds up deployment, but also reduces overhead and eliminates the "it works on my machine" problem.

Docker is a core part of the CI/CD (Continuous Integration/Continuous Deployment) process, enabling teams to deliver software quickly and reliably. Here's a high-level overview of a typical CI/CD deployment process:

The Deployment Process:
1. The developer (you) writes some new code
2. The developer commits the code to Git
3. The developer pushes a new branch to GitHub
4. The developer opens a pull request to the main branch
5. A teammate reviews the PR and approves it (if it looks good)
6. The developer merges the pull request
7. Upon merging, an automated script, perhaps a GitHub Action, is started
8. The script builds the code (if it's a compiled language)
9. The script builds a new Docker image with the latest program
10. The script pushes the new image to Docker Hub
11. The server that runs the containers, perhaps a Kubernetes cluster, is told there is a new version
12. The k8s cluster pulls down the latest image
13. The k8s cluster shuts down old containers as it spins up new containers of the latest image

This process ensures that new features and fixes can be delivered to users quickly, safely, and consistently.

image credit: Boot.dev Docker course

#docker #cicd #devops #softwaredevelopment #bootdev #learning
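Steps 8 through 11 above often live in a single CI job. A hedged sketch of what that might look like in GitHub Actions (the image name, secret name, and deployment name are placeholders, not from the course):

```yaml
# Runs after a merge to main: build the image, push it, roll the cluster
deploy:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - run: docker build -t yourname/my-app:${{ github.sha }} .
    - run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u yourname --password-stdin
    - run: docker push yourname/my-app:${{ github.sha }}
    # Tell the cluster about the new version (step 11); k8s handles steps 12-13
    - run: kubectl set image deployment/my-app my-app=yourname/my-app:${{ github.sha }}
```

Kubernetes then performs the rolling update itself: pulling the new image and replacing old containers with new ones.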
GitHub Actions for CI/CD: Build, Test, and Deploy 🚀

Key takeaways 👇
⚙️ CI/CD fundamentals → automating integration, testing, delivery, and deployment workflows
📄 Writing workflows using YAML (.github/workflows) triggered by push & pull requests
🧩 Understanding workflows → jobs → steps → runners (hosted & self-hosted)
🔁 Using reusable actions like actions/checkout, setup-python, setup-node, setup-go
🧪 Implementing CI for multiple stacks → JavaScript (Node.js), Python (Django), Go
📊 Matrix strategy to test across multiple versions (Node.js, Python 3.11–3.14, etc.)
🔍 Code quality tools → Flake8, PyTest, revive (Go linter)
🐞 Debugging pipelines using logs, fixing dependency issues (like numpy)
📦 Managing artifacts and publishing packages (Maven, NPM, Docker via GitHub Packages)
🐳 Building & publishing Docker container images with workflow dependencies (needs, workflow_call)
🔐 Secure credential handling using secrets & environment variables
☁️ Cloud integrations → AWS deployments, service accounts, CloudFormation
🌐 Deploying static sites using GitHub Pages (Hugo, Jekyll, Gatsby)
🏗️ Infrastructure as Code with Terraform + workflow summaries for better visibility
🔄 Structuring pipelines with job dependencies (needs) for proper execution flow
🚦 Environment-based deployments (staging, production) with protection rules & approvals
⏸️ Manual approvals for production deployments to ensure safe releases
♻️ Scalable and reusable workflows for real-world CI/CD systems

#GitHubActions #DevOps #CICD #Automation #Docker #AWS #Terraform #LearningJourney
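The matrix strategy mentioned above looks like this in practice. A sketch that tests several Python versions in parallel (the version list and commands are examples):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.11", "3.12", "3.13"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -r requirements.txt
      - run: pytest
```

GitHub expands the matrix into one job per version, so a regression that only affects one Python release shows up as a single failing leg.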
🗓️ Day 28/100 — 100 Days of AWS & DevOps Challenge

Today's task: a developer has in-progress work on a feature branch, but one specific commit is ready and needs to go to master right now, without dragging the rest of the unfinished work along. This is exactly what git cherry-pick is for.

# Find the commit hash on the feature branch
$ git log feature --oneline
# abc5678 Update info.txt ← this one

# Switch to master and cherry-pick it
$ git checkout master
$ git cherry-pick abc5678

# Push
$ git push origin master

One commit. Surgically applied. Feature branch untouched.

1. Why not just merge the feature branch?
The feature branch has in-progress commits: code that isn't tested, isn't ready, and would break things on master. git merge feature brings ALL of it over. Cherry-pick takes only what's ready.

2. When this pattern matters in production:
A critical bug fix lands on a development branch. You can't merge the whole branch; there are half-finished features alongside the fix. You cherry-pick the fix onto master and onto any active release branches. This is how security patches get backported across multiple versions in open source projects. Same concept, same tool.

The command to find a commit by message when you don't have the hash handy:

$ git log --all --oneline --grep="Update info.txt"

Saves time when the branch has many commits and you're looking for one specific one.

Full breakdown on GitHub 👇
https://lnkd.in/gVHV9qPc

#DevOps #Git #VersionControl #CherryPick #GitOps #100DaysOfDevOps #KodeKloud #LearningInPublic #CloudEngineering #CICD #Hotfix
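The whole flow can be reproduced end to end in a throwaway repo. A sketch with illustrative file names and commit messages (requires git 2.28+ for init -b):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b master
git config user.email "demo@example.com"
git config user.name "Demo"

echo "base" > info.txt
git add info.txt
git commit -qm "Initial commit"

# Feature branch: one ready commit, one unfinished one
git checkout -qb feature
echo "ready change" >> info.txt
git commit -qam "Update info.txt"
echo "half-done" > wip.txt
git add wip.txt
git commit -qm "WIP: unfinished feature"

# Find the ready commit by message, then apply only it to master
hash=$(git log --all --grep="Update info.txt" --format=%h)
git checkout -q master
git cherry-pick "$hash"
git log --oneline
```

Afterwards master contains the "Update info.txt" change but not the WIP commit, and the feature branch is untouched.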
🚀 Most Developers Use Docker Daily…
But mastering the right commands makes everything faster and easier. Here's a Docker cheat sheet I wish I had earlier 👇

📦 Basic
docker run → Run a container
docker ps → List containers
docker stop → Stop container
docker rm → Remove container

🧱 Images
docker pull → Download image
docker build -t app . → Build image
docker images → List images
docker rmi → Remove image

⚙️ Debugging (Most Important)
docker exec -it <id> /bin/bash → Enter container
docker logs <id> → View logs
docker inspect <id> → Full details
docker stats → Resource usage

🌐 Networking & Volumes
docker network ls → List networks
docker volume ls → List volumes
docker network create → Create network

💡 Real DevOps Insight:
Docker is easy to start. But understanding:
• container lifecycle
• networking
• resource limits
• failure behavior
That's what levels you up.

If you found this useful, save it. Follow me for more insights on DevOps 🚀

#Docker #DevOps #CloudNative #SRE #SoftwareEngineering #TechTips #CloudEngineering
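The resource-limits point deserves an example. A sketch of capping a service's memory and CPU in a Compose file (the service name and values are illustrative; the same caps exist as --memory and --cpus flags on docker run):

```yaml
services:
  web:
    image: nginx
    deploy:
      resources:
        limits:
          cpus: "1.5"       # at most 1.5 CPU cores
          memory: 512M      # OOM-kill the container before it starves the host
```

Without limits, a leaking container can consume the whole host; with them, failure stays contained to one service.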
🚀 Day 2/5 of learning Docker Advanced

I used to think a Dockerfile is just a set of instructions…
👉 But it's actually a layered build system with caching. And this changed how I approach builds completely.

🧱 What happens during docker build?
Each instruction:
✔️ Creates a new layer
✔️ Gets cached (if unchanged)
So Docker doesn't rebuild everything every time.

❌ Mistake I used to make:
COPY . .
RUN npm install
👉 Any small code change = dependencies reinstall again.

Better approach:
COPY package.json .
RUN npm install
COPY . .
✔️ Dependency layer gets cached
✔️ Faster rebuilds
✔️ Efficient CI/CD pipelines

💡 Key realization: Docker build performance depends on layer ordering.
👉 Order your Dockerfile like:
1️⃣ Base image
2️⃣ System dependencies
3️⃣ App dependencies
4️⃣ Application code (last)

🔥 Small changes, big impact:
✔️ Use .dockerignore
✔️ Combine RUN commands
✔️ Avoid unnecessary packages
✔️ Choose lightweight base images

Now I don't just write Dockerfiles.
👉 I design them for performance.
Because: slow builds = slow pipelines = slow teams.

#Docker #DevOps #CI #Containers #LearningInPublic
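Putting the four-step ordering together, a sketch of a cache-friendly Node.js Dockerfile (the system package and entry file are illustrative choices):

```dockerfile
FROM node:20-alpine                  # 1. lightweight base image
RUN apk add --no-cache curl          # 2. system dependencies, in a single RUN layer
WORKDIR /app
COPY package*.json ./                # 3. dependency manifest first...
RUN npm ci                           #    ...so this layer stays cached until it changes
COPY . .                             # 4. application code last
CMD ["node", "server.js"]
```

With this ordering, editing application code invalidates only the final COPY layer; npm ci is skipped entirely on rebuild.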
Most CI/CD pipelines fail for the same reason — no clear stages.

After 4 years in DevOps, here's the multi-stage GitHub Actions pipeline I recommend to every engineer on my team:

Stage 1 → Test
Stage 2 → Build & tag Docker image
Stage 3 → Deploy to Staging
Stage 4 → Deploy to Production (with manual approval)

3 things that make this bulletproof:
1️⃣ Use needs: to chain jobs — if tests fail, nothing else runs
2️⃣ Tag images with github.sha — every build is fully traceable
3️⃣ Use GitHub Environments for prod — enforces human approval before anything goes live

You don't need a complex tool to do this. A single YAML file in .github/workflows/ is enough to build a production-grade pipeline.

Save this post for when you set yours up. What does your CI/CD stack look like? Drop it in the comments 👇

#DevOps #GitHubActions #CICD #Docker #Kubernetes #CloudNative #DevOpsEngineer #SoftwareEngineering
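The four-stage skeleton with needs: chaining and an environment gate might look like this; test and deploy commands are placeholders to replace with your own:

```yaml
name: pipeline
on:
  push:
    branches: [main]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm test           # placeholder test command
  build:
    needs: test                           # runs only if tests pass
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t my-app:${{ github.sha }} .   # sha tag = traceable build
  staging:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploy to staging"     # placeholder deploy step
  production:
    needs: staging
    environment: production               # GitHub Environment enforces manual approval
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploy to production"  # placeholder deploy step
```

If any job fails, everything downstream of its needs: edge is skipped, and the production job waits for a human approval configured on the "production" environment.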