🚀 Cut my Docker image from 1.01 GB → 142 MB (≈85% reduction) using multi-stage builds

Today I finally understood something practical that instantly improved my workflow — multi-stage Docker builds.

🔴 Before:
- Image size: 1.01 GB
- Slow builds & pushes
- Heavy deployments

🟢 After:
- Image size: 142 MB
- Faster CI/CD 🚀
- Cleaner, production-ready images

💡 What changed? Instead of shipping everything (build tools, dependencies, junk), I used:
✅ A separate build stage (with all build-time dependencies)
✅ A minimal runtime stage (only the required artifacts)

🧠 Example (Java + Spring Boot):

```dockerfile
# Stage 1: Build
FROM maven:3.9.6-eclipse-temurin-17 AS builder
WORKDIR /app
COPY . .
RUN mvn clean package -DskipTests

# Stage 2: Runtime (a JRE-only base image would shrink this even further)
FROM eclipse-temurin:17-jdk-alpine
WORKDIR /app
COPY --from=builder /app/target/*.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

🔥 Why this matters:
- Smaller images = faster deployments
- Smaller attack surface = better security
- Saves bandwidth in CI/CD pipelines
- Production-ready containers 🧩

⚡ Key learning: "Don't ship your build tools to production — ship only what you run."

Currently diving deeper into: Backend • Data Engineering • DevOps • AWS • Kubernetes

If you're working on similar things or optimizing systems, let's connect 🤝

#Docker #DevOps #Backend #Java #SpringBoot #Cloud #AWS #Kubernetes #DataEngineering #BuildInPublic
Optimize Docker Image Size with Multi-Stage Builds
More Relevant Posts
🚀 Exciting news for developers and enterprise teams! AWS Transform is now available in Kiro and VS Code!

As someone who uses Kiro daily for code assistance, architecture reviews, and rapid prototyping, this is a game-changer. Now you can kick off large-scale migrations and modernizations right from your IDE — no context switching, no manual handoffs.

Here's what makes this launch powerful:
🔧 Crush tech debt at scale — Java, Python, and Node.js version upgrades, AWS SDK migrations (boto2→boto3, Java SDK v1→v2, JS SDK v2→v3), and more
🔁 Run transformations across thousands of repositories at once
🌐 Seamless continuity — start a job in your IDE, track it in the web console, finish wherever it makes sense; job state and context are shared across every surface
🛠️ Build your own custom transformations — define your own playbooks beyond the AWS-managed ones

AWS Transform is compressing enterprise transformation timelines from years to months — and now it's available right where developers already work.

If you're using Kiro or VS Code, install the AWS Transform power (Kiro) or the AWS Transform extension (VS Code) and start transforming today!

🔗 https://lnkd.in/e8e-QRZD

#AWS #AWSTransform #Kiro #VSCode #CloudMigration #Modernization #GenAI #DevTools #TechDebt #AWSome
Every new project or team member used to trigger the same groan-inducing ritual: manually running ten distinct commands to set up their local development environment. Clone repos, install Node.js dependencies, configure Docker Compose for PostgreSQL, Redis, and Kafka, set environment variables, run migrations — a repetitive, error-prone gauntlet. This wasn't just tedious; it bottlenecked onboarding, introduced inconsistencies across machines, and wasted precious engineering hours.

My solution was a dedicated `init.sh` bash script. Leveraging AI, I rapidly scaffolded the initial script and refined the complex logic for different environment permutations. This master script now orchestrates the entire process: checking prerequisites, cloning all necessary repositories, installing `npm` dependencies for our Next.js and Node.js services, spinning up critical backend services like PostgreSQL and Redis via Docker Compose, applying `Prisma` migrations, and even seeding local databases.

What was once an hour-long, multi-step manual process is now a single `chmod +x init.sh && ./init.sh` command. We've slashed onboarding time for new engineers from half a day of tedious setup to under 15 minutes. This isn't just about saving time; it ensures consistency, reduces "it works on my machine" issues, and frees senior engineers from basic setup support tasks.

Investing in robust internal automation, even for seemingly mundane tasks like dev setup, is a force multiplier for productivity. It accelerates team velocity, improves developer experience, and lets engineers focus on building features, not fighting environments.

#ShellScripting #BashScript #Automation #DevOps #DeveloperExperience #EngineeringProductivity #TechLeadership #CTO #Founders #SoftwareDevelopment #NodeJS #Docker #DockerCompose #AWS #Backend #SystemDesign #InternalTools #AIAutomation #ProductivityHacks #EngineeringCulture #Scalability #TechStrategy #CodingBestPractices #MERNStack #NextJS
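I haven't seen the author's actual script, but the steps described above can be sketched roughly like this. Everything below — repository URLs, service names, the seed command — is an illustrative placeholder, not the real `init.sh`:

```shell
#!/usr/bin/env bash
# Hypothetical init.sh sketch -- repo URLs, service names, and commands
# below are placeholders to show the shape of the setup, not the real script.
set -euo pipefail

# Fail fast if a prerequisite tool is missing.
require() {
  command -v "$1" >/dev/null 2>&1 || { echo "Missing prerequisite: $1" >&2; return 1; }
}

setup() {
  require git && require docker && require node

  # Clone each service repository if not already present,
  # then install its Node.js dependencies.
  for repo in "$@"; do
    dir=$(basename "$repo" .git)
    [ -d "$dir" ] || git clone "$repo"
    (cd "$dir" && npm ci)
  done

  docker compose up -d postgres redis kafka  # backing services
  npx prisma migrate deploy                  # apply Prisma migrations
  npm run seed                               # seed the local database
}

# Usage (after adjusting the placeholders):
#   setup git@example.com:acme/web.git git@example.com:acme/api.git
```

The `require` guard is the part that prevents the classic half-finished setup: the script refuses to start until every tool it needs is installed.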
Tackling tech debt across hundreds of repositories can be challenging, especially when it comes to maintaining context between planning, execution, and tracking. The handoff tax between tools often disrupts momentum in large-scale modernization efforts.

AWS Transform now integrates directly with Kiro and VS Code, making the same agentic transformations and job states accessible from your IDE.
https://lnkd.in/gnG3rWzX

The operational advantage here is context continuity. You can initiate a Java upgrade or SDK migration in your editor, track progress in the web console, and seamlessly pick it back up anywhere. Pre-built playbooks for common patterns, such as migrating from boto2 to boto3 or from Java SDK v1 to v2, reduce the cognitive load of repetitive upgrades and enable running transformations across thousands of repositories without manual orchestration.

#AWS #DevOps #TechDebt #SolutionsArchitecture #DeveloperExperience
🚀 Milestone unlocked: automating REST API development — my API Code Generator is now live on AWS

As developers, we all know the drill: whenever we need a new REST API for a master table (like City or Country), we end up writing the same boilerplate — controllers, DTOs, mappers, queries, and validations — just to enable basic CRUD operations while maintaining project standards.

I kept thinking: there has to be a better way to automate this. So I built an API Code Generator 🛠️

How it works:
1️⃣ Define your table schema
2️⃣ It converts the schema into a smart JSON configuration
3️⃣ Configure validations, caching, dropdowns, and more
4️⃣ Click generate — download a fully structured backend module (.zip)

The result: the tool automatically generates 70–75% of repetitive backend code, allowing developers to:
• Focus on business logic
• Maintain clean architecture
• Follow company coding standards
• Reduce development time significantly

I've successfully deployed the backend on AWS (a first for me!) and the UI is now live. Would love your feedback 👇
🔗 https://lnkd.in/deux8RAU

#SoftwareDevelopment #Java #SpringBoot #AWS #Automation #Productivity #CodeGenerator #DeveloperTools #Project #RESTful #API #DynamicCode #Innovation
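The post doesn't show the generator's actual schema, but a "smart JSON configuration" for a City master table could plausibly look something like this (every field name and option here is purely illustrative):

```json
{
  "entity": "City",
  "table": "cities",
  "fields": [
    { "name": "name", "type": "String", "required": true, "maxLength": 100 },
    { "name": "countryId", "type": "Long", "dropdown": { "source": "Country" } }
  ],
  "caching": { "enabled": true, "ttlSeconds": 3600 },
  "generate": ["controller", "dto", "mapper", "validation"]
}
```

The appeal of this shape is that validations, caching, and dropdown wiring become declarative data rather than hand-written Java, so the generated module stays consistent across every master table.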
🚀 Built My Own AI Frontend Code Generation Platform — Spring Boot backend deployed on GKE

Over the past 4 months, I worked on an exciting project inspired by tools like Lovable and v0, which generate frontend applications using AI.

🎥 Full project walkthrough (25-min deep dive): https://lnkd.in/g4qVhEng

💡 What I built: a full-fledged AI-powered frontend generation platform that lets users generate, edit, and instantly preview UI code in real time.

🏗️ Architecture evolution
• Started as a monolithic application
• Evolved into a scalable microservices architecture

⚙️ Tech stack & features

Backend:
• Spring Boot (MVC, Security, Data JPA, Hibernate)
• RESTful APIs
• JWT authentication
• Stripe integration for subscription-based plans

AI capabilities:
• LLM integration
• Retrieval-Augmented Generation (RAG)
• Tool calling
• Token usage tracking

Microservices:
• Account Service (users & subscriptions)
• Workspace Service (projects & collaboration)
• Intelligence Service (AI code generation, chat, logs, events)
• Discovery Service (Eureka)
• API Gateway & Config Service
• Common library for shared logic

⚡ System design highlights
• Custom reverse proxy using Node.js + Redis
• Dynamic wildcard routing (*.app.domain.com)
• Real-time rendering of generated frontend code in the browser
• Kafka for inter-service communication
• Redis for caching and routing
• MinIO for storing AI-generated code and assets

☁️ DevOps & deployment
• Deployed on Google Kubernetes Engine (GKE)
• Dockerized microservices
• Fully automated CI/CD using GitHub Actions
• No manual deployments after setup

🌐 Live project: www.frontendai.in
Note: new user registration is currently disabled due to API cost constraints

📂 Repositories
Backend: https://lnkd.in/givfPj4N
Frontend: https://lnkd.in/gWCk-x3p

📈 Key learnings
• Microservices architecture and distributed systems
• AI integration in real-world applications
• Kubernetes and cloud deployment
• Building scalable, production-grade systems

🚀 This was an intense but highly rewarding journey — from idea to production deployment.

#SpringBoot #Microservices #Kubernetes #AI #LLM #RAG #SystemDesign #BackendDevelopment #DevOps #Kafka #Redis #Docker #GitHubActions #Java
Docker confused me for longer than I'd like to admit. Then I learned these 5 concepts and everything clicked:

**1. Image**
A snapshot of your application and everything it needs to run — OS, dependencies, code. Like a template. Read-only.

**2. Container**
A running instance of an image. Like spinning up a VM from a template, but in milliseconds and using far fewer resources.

**3. Dockerfile**
Instructions for building an image. "Start with Node 20, copy my code, install dependencies, set the start command."

**4. Volume**
Persistent storage attached to a container. Data inside a container disappears when the container is removed — volumes persist it.

**5. Docker Compose**
Defines and runs multi-container applications. Your app + database + cache — all started with one command: `docker-compose up`.

That's it. 5 concepts, 80% of what you'll use daily.

The value of Docker: "It works on my machine" becomes irrelevant. Your container runs identically everywhere.

Comment if you've been avoiding Docker — no judgment. We all have.

#Docker #DevOps #Developer #CloudComputing #TechFinSpecial
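Concepts 2, 4, and 5 come together in a single Compose file. A minimal sketch — the image tags, port, and volume name are illustrative, not prescriptive:

```yaml
# docker-compose.yml: app + database + cache, all started with one command
services:
  app:
    build: .                 # image built from the Dockerfile in this repo
    ports:
      - "3000:3000"
    depends_on: [db, cache]
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data  # volume: data survives container removal
  cache:
    image: redis:7
volumes:
  pgdata:
```

Running `docker-compose up` from this directory builds the app image, starts all three containers, and wires them onto a shared network so the app can reach `db` and `cache` by name.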
🚀 Update: DevOps Memory Assistant (building in public)

Quick progress update on my project 👇

After setting up the backend and database, I've now added 🔍 search functionality.

Now the tool can:
✅ Store DevOps issues (error, cause, fix)
✅ Retrieve past issues instantly via search

Example: facing "CrashLoopBackOff" again? Just search and get your previous solution instead of debugging from scratch.

Tech used:
- Go (backend)
- PostgreSQL (database)

Next I'm planning:
- AI-based suggestions for similar errors
- A simple UI (frontend)
- A CLI tool for faster usage

This project is helping me understand backend systems much more deeply. Would love feedback or suggestions 🙌

🔗 GitHub: https://lnkd.in/dPdtvmgv

#DevOps #Kubernetes #Golang #BuildInPublic #LearningInPublic
⚠️ A problem I keep seeing in backend projects…

Applications work fine in development… but in production:
❌ Random downtime
❌ Slow performance under load
❌ Messy deployments
❌ Fear of shipping code updates

---

💡 What's actually wrong? Most projects are built without thinking about:
- Deployment strategy
- Scalability
- System reliability
- Automation

---

✅ How I solve this:
🔹 Dockerized applications (same environment everywhere)
🔹 CI/CD pipelines (automated & safe deployments)
🔹 Proper server setup (Nginx + Gunicorn/Uvicorn)
🔹 Zero/minimum-downtime deployments

---

📈 Result:
✔️ Faster, stress-free deployments
✔️ More stable applications in production
✔️ Easy rollback when something breaks
✔️ Better performance & reliability

---

Now whenever I build or work on a backend system, I don't think only like a developer — I also think about automating the system and solving the real problems it will face in production.

---

If you're facing similar issues while scaling or deploying your app, happy to exchange ideas 🤝

#DevOps #Backend #Python #Django #FastAPI #AWS #GCP #Docker #ScalableSystems #Backups
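The Nginx + Gunicorn/Uvicorn piece of that setup typically looks something like this sketch — the domain and the backend port are placeholders you'd adjust for your own app:

```nginx
# /etc/nginx/conf.d/app.conf: reverse proxy in front of Gunicorn/Uvicorn
server {
    listen 80;
    server_name example.com;               # placeholder domain

    location / {
        proxy_pass http://127.0.0.1:8000;  # where Gunicorn/Uvicorn is bound
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Putting Nginx in front buys you TLS termination, static-file serving, and the ability to reload or swap app workers behind it without dropping connections.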
Every developer has been there. It's 11 PM. You push your code. The CI/CD pipeline fails. You open the logs: 500 lines of messy, cryptic output. You spend the next 45 minutes just figuring out what went wrong.

Now multiply that by a team of 10 engineers. Every day. 😩 That's hours of engineering time burned — not building features, not shipping products, just reading error logs.

I thought: what if a bot could do that for you? So I built one. 🛠️

When a pipeline fails, my bot wakes up, reads the error logs, understands what went wrong, and posts a diagnosis with a fix right there on your pull request. No dashboard to check. No tool to open. The answer shows up exactly where you're already looking.

✅ Root cause
✅ Explanation
✅ Fix
✅ Prevention

Done in seconds. Here's what's under the hood 👇

🔹 𝗚𝗶𝘁𝗛𝘂𝗯 𝗔𝗰𝘁𝗶𝗼𝗻𝘀 runs the full CI/CD pipeline — lint, test, build, deploy. When it fails, a webhook fires.
🔹 𝗔𝗪𝗦 𝗔𝗣𝗜 𝗚𝗮𝘁𝗲𝘄𝗮𝘆 catches that webhook and routes it to a Lambda function.
🔹 𝗗𝗼𝗰𝗸𝗲𝗿 — that Lambda runs as a container. Multi-stage build, optimised for fast cold starts. Same image locally and in production.
🔹 The container pulls failure logs from GitHub, cleans the noise (ANSI codes, timestamps, debug markers), and extracts only the lines that matter.
🔹 𝗔𝗪𝗦 𝗕𝗲𝗱𝗿𝗼𝗰𝗸 — a structured prompt with logs, code diff, and workflow context is sent to Claude. The AI returns a precise diagnosis.
🔹 The diagnosis gets posted as a 💬 PR comment through the GitHub API.
🔹 𝗧𝗲𝗿𝗿𝗮𝗳𝗼𝗿𝗺 — every piece of AWS infrastructure (Lambda, API Gateway, ECR, IAM) is provisioned as code. Reproducible. Version controlled. One command recreates the entire stack.
🔹 Two separate CI/CD pipelines: one for infra, one for app code. Push to main → everything deploys automatically. 🚀

No frontend. No database. Just clean backend automation that solves a real problem. The best developer tools are the ones you don't even notice, because they just work quietly in the background.
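The log-cleaning step is the unglamorous part that makes the LLM prompt work. The bot's real implementation isn't shown here, but the idea can be sketched in shell like this:

```shell
# Strip ANSI colour codes and leading ISO timestamps from CI log lines,
# so only the meaningful text reaches the LLM prompt.
ESC=$(printf '\033')
strip_noise() {
  sed -E -e "s/${ESC}\[[0-9;]*[A-Za-z]//g" \
         -e 's/^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9:.]+Z? ?//'
}

printf '\033[31mERROR\033[0m build failed\n' | strip_noise
# -> ERROR build failed
printf '2024-05-01T12:00:00Z npm ERR! missing script: build\n' | strip_noise
# -> npm ERR! missing script: build
```

Dropping this noise matters twice over: it cuts token spend per diagnosis, and it keeps the model's attention on the actual error lines rather than formatting artefacts.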
🤫 This project was also a personal milestone for me 🎯

I've been working across full-stack development, cloud infrastructure, and DevOps, and with this build I've extended that into AI and LLM engineering. Not theoretical ML, but practical, production-grade AI integration that solves an actual engineering problem.

☁️ Cloud + 🔧 DevOps + 🤖 AI — that's where I'm headed.

Check it out 👉 https://lnkd.in/enga94Ct

#DevOps #AWS #AI #LLM #CICD #Terraform #Docker #GitHubActions #CloudEngineering #BuildInPublic
OpenTelemetry just got simpler — and that's a big win.

If you've used OpenTelemetry, you know the struggle: too many env variables, scattered configs, and inconsistent setups. Now, with declarative configuration, things are changing:
- Define traces, metrics, and logs in a single YAML file
- Consistent config across languages
- Easier to manage, version, and share

Why this matters: this moves observability from "something we add later" ➡️ to "something we design upfront". Cleaner configs = better systems.

My takeaway: we're moving from config chaos → config clarity.

If you're into #OpenTelemetry #DevOps #Observability, this is worth exploring.
https://lnkd.in/gVdGUB4x
https://lnkd.in/gi73_jak
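For flavour, a declarative config file looks roughly like the sketch below. The schema comes from the opentelemetry-configuration project and is still evolving, so field names and the `file_format` version may differ from what your SDK release expects — treat this as a shape illustration, not a copy-paste config:

```yaml
# otel-config.yaml: traces, metrics, and logs declared in one place
file_format: "0.3"
tracer_provider:
  processors:
    - batch:
        exporter:
          otlp:
            endpoint: http://localhost:4318
meter_provider:
  readers:
    - periodic:
        exporter:
          otlp:
            endpoint: http://localhost:4318
logger_provider:
  processors:
    - batch:
        exporter:
          otlp:
            endpoint: http://localhost:4318
```

One file, checked into the repo and shared across every language's SDK, replaces a sprawl of `OTEL_*` environment variables per service.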