Cut Docker Image Size by 70% with Multi-Stage Builds

A 2 GB Docker image is a deployment bottleneck. I was building a GenAI API and the image size was massive. Every deploy took forever. Then I switched to multi-stage builds. Here is the exact snippet that cut the size by 70%:

# Stage 1: Build
FROM python:3.10-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --target=/app/deps -r requirements.txt

# Stage 2: Run
FROM python:3.10-alpine
WORKDIR /app
COPY --from=builder /app/deps /app/deps
COPY . .
ENV PYTHONPATH=/app/deps
CMD ["python", "app.py"]

The logic is simple:
• Stage 1 installs dependencies in a full build environment.
• Stage 2 copies only the artifacts needed to run. No build tools. No pip cache. Just the app.

Smaller images mean faster scaling and cheaper storage.

Are you still using single-stage builds for heavy apps?

#Docker #DevOps #Python #PlatformEngineering #ShreyasTech
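P.S. To verify the savings on your own project, build the image and check the SIZE column (the genai-api tag here is just a placeholder):

docker build -t genai-api .
docker images genai-api   # SIZE column shows the final image size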

A warning if you use Alpine: you may need to install gcc and musl-dev, because Alpine uses musl libc instead of glibc, so many prebuilt wheels won't install and packages with C extensions (like numpy or pandas) have to compile from source. For the same reason, compiled dependencies installed in the glibc-based slim builder above won't run on an Alpine runtime, so for anything beyond pure-Python packages, keep both stages on the same base. It's a trade-off between size and build complexity.
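If you do stay on Alpine, here is a minimal sketch of that trade-off, assuming both stages use the Alpine base so compiled artifacts match the runtime's musl libc (gcc and musl-dev come from the warning above; your specific dependencies may need additional packages):

# Stage 1: Build on Alpine so compiled wheels match the musl runtime
FROM python:3.10-alpine AS builder
WORKDIR /app
# Compiler toolchain needed only at build time; it never reaches the final image
RUN apk add --no-cache gcc musl-dev
COPY requirements.txt .
RUN pip install --target=/app/deps -r requirements.txt

# Stage 2: Run, with no compiler on board
FROM python:3.10-alpine
WORKDIR /app
COPY --from=builder /app/deps /app/deps
COPY . .
ENV PYTHONPATH=/app/deps
CMD ["python", "app.py"]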
