Day 13 - Your server costs $0 when nobody's using it. That's not a bug — that's serverless. 🚀

TechFromZero Series - LambdaFromZero

This isn't a Hello World. It's a real serverless image processing pipeline:
📐 Client → API Gateway → Lambda handler(event, context) → Pillow → base64 response

🔗 The full code (with step-by-step commits you can follow): https://lnkd.in/dTwhP4Ty

🧱 What I built (step by step):
1️⃣ Project scaffold — Python venv, Pillow, Flask, pytest
2️⃣ Sample image — generated programmatically with Pillow (no downloads)
3️⃣ First Lambda handler — the handler(event, context) contract
4️⃣ Image processor — resize with base64 encoding/decoding
5️⃣ More operations — thumbnail, grayscale, rotate, and blur
6️⃣ Wire handler to processor — event parsing, validation, error responses
7️⃣ Test events — JSON files that simulate real API Gateway requests
8️⃣ Local server — Flask app that does exactly what API Gateway does
9️⃣ S3 trigger handler — auto-process images on upload
🔟 Unit tests — 32 pytest tests for handler and processor
1️⃣1️⃣ SAM template — real AWS deployment config (no account needed to learn)
1️⃣2️⃣ Documentation — README with architecture, quick start, and step guide

💡 Every file has detailed comments explaining WHY, not just what. Written for any beginner who wants to learn AWS Lambda by reading real code — with full clarity on each step.

No AWS account needed. Everything runs locally. The Flask server simulates API Gateway, test events simulate S3 triggers, and pytest verifies it all works.

👉 If you're a beginner learning serverless, clone it and read the commits one by one. Each commit = one concept. Each file = one lesson. Built from scratch, so nothing is hidden.

🔥 This is Day 13 of a 50-day series. A new technology every day. Follow along!

🌐 See all days: https://lnkd.in/dhDN6Z3F

#TechFromZero #Day13 #AWSLambda #Serverless #Python #Pillow #LearnByDoing #OpenSource #BeginnerGuide #100DaysOfCode #CodingFromScratch #AWS #CloudComputing #FaaS
Serverless Image Processing with AWS Lambda and Python
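The handler(event, context) contract from step 3 can be sketched like this — a minimal, hedged version that assumes the API Gateway test-event shape ({"body": "{\"image\": \"<base64>\"}"}); the Pillow step is stubbed so only the contract (parse → validate → respond) is shown, not the repo's actual processor:

```python
import base64
import json

def handler(event, context):
    """Minimal sketch of an API Gateway Lambda handler (assumed event shape).

    Parses the JSON body, base64-decodes the image, and returns the
    standard API Gateway proxy response. A Pillow resize would replace
    the pass-through line below.
    """
    try:
        body = json.loads(event.get("body") or "{}")
        image_bytes = base64.b64decode(body["image"])
    except (KeyError, ValueError):
        return {"statusCode": 400,
                "body": json.dumps({"error": "missing or invalid 'image'"})}

    processed = image_bytes  # stand-in for a Pillow resize/grayscale/etc.
    return {
        "statusCode": 200,
        "body": json.dumps({"image": base64.b64encode(processed).decode("ascii")}),
    }
```

The same shape is what the repo's Flask server and JSON test events exercise locally: a dict in, a dict with statusCode and a JSON body out.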
Just published a new article on a hidden AWS Lambda cost issue I recently ran into. Old Lambda versions with SnapStart enabled were quietly adding to our bill, even though they were no longer being used. To fix this, I built a human-in-the-loop cleanup workflow using AWS Lambda Durable Functions.

Read here:
Medium: https://lnkd.in/gQe6HRbV
AWS Builder Center: https://lnkd.in/ge9g4qhK
GitHub: https://lnkd.in/g2gZ8Hbh

#AWS #Lambda #SnapStart #Python #Serverless #DurableFunctions
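The detection half of such a cleanup can be sketched as a pure filter over version metadata. The record shape below is an assumption for this sketch, loosely mirroring what Lambda's list_versions_by_function returns; the article's real workflow adds the human-approval step before anything is deleted:

```python
def billable_snapstart_versions(versions, aliased_arns):
    """Flag published versions that may still bill for SnapStart caching.

    `versions`: list of dicts with "Version", "FunctionArn", and an
    optional "SnapStart" config (assumed shape for this sketch).
    `aliased_arns`: ARNs still referenced by an alias — those are kept.
    """
    stale = []
    for v in versions:
        if v.get("Version") == "$LATEST":
            continue  # SnapStart applies to published versions only
        if v.get("SnapStart", {}).get("ApplyOn") != "PublishedVersions":
            continue  # SnapStart not enabled on this version
        if v["FunctionArn"] in aliased_arns:
            continue  # still routed to via an alias, keep it
        stale.append(v["Version"])
    return stale
```

A real script would feed this from boto3 and hand the result to the approval step rather than deleting directly.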
Sail (https://lnkd.in/daBS4Bm4) is an open-source, unified, distributed multimodal computation framework. Sail is available as a #Python package on #PyPI. You can install it alongside #PySpark in your Python environment. Sail is designed to be compatible with Spark 3.5.x, #Spark 4.x, and later versions. Existing PySpark code works out of the box once you connect your Spark client session to Sail over the Spark Connect protocol.
Every Python dev on AWS runs into this at some point: three Lambda functions, all sharing the same Pydantic models. You copy-paste once. Then twice. Then you spend a Tuesday figuring out why one function is missing a field.

Google it, and every article from 2019 gives the same answer: "Just use Lambda Layers!" But Lambda Layers are not a package manager. Yan Cui said it well back in 2021: "Lambda Layer is a poor substitute for existing package managers." No IDE autocomplete, no proper versioning, and if someone deletes a layer version, your deploys break until you fix the ARN.

There is a cleaner way: uv workspaces with local packages, bundled into self-contained ZIPs by CDK. Each function gets only what it needs. Normal Python packaging, no AWS-specific workarounds.

Full blog post with the CDK construct (demo GitHub repository link included):
👉 https://lnkd.in/d2A8AqAv

PS: One thing I want to mention regarding my blog posts in general: I don't want to write another "Hello World with AWS Lambda" tutorial. There are enough of those. What I find interesting are the edge cases — things that are barely documented but matter a lot once you run real workloads in production.

#AWS #Python #Serverless #CDK
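The workspace layout can be sketched like this (package and path names are made up for illustration; the blog post has the actual layout and the CDK construct):

```toml
# pyproject.toml at the repo root — declares the workspace
[tool.uv.workspace]
members = ["packages/*", "functions/*"]

# functions/orders/pyproject.toml — one Lambda function
[project]
name = "orders-function"
version = "0.1.0"
dependencies = ["shared-models"]  # the shared Pydantic models package

# resolve shared-models from the workspace, not from PyPI
[tool.uv.sources]
shared-models = { workspace = true }
```

With this in place the shared package is a normal editable dependency: IDE autocomplete and versioning work, and the bundler can resolve each function's closure into its own ZIP.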
Day 7 - I built a live Weather API in Python. It runs for free. Forever.

🚀TechFromZero Series - FastAPIFromZero

This isn't a Hello World. It's a real async REST API with a cloud database:
📐 OpenWeather → FastAPI (Render) → motor → MongoDB Atlas (free)

🌐 Try it live: https://lnkd.in/dfrggisJ

🔗 The full code (with step-by-step commits you can follow): https://lnkd.in/d8S2AmcP

🧱 What I built (step by step):
1️⃣ Project scaffold — venv, requirements.txt, .env.example
2️⃣ FastAPI app — CORS, lifespan hook, health check endpoint
3️⃣ Async MongoDB connection — motor client, shared connection pool
4️⃣ Pydantic schemas — auto-validate input, auto-generate Swagger docs
5️⃣ OpenWeatherMap client — async httpx fetch, clean error handling
6️⃣ Weather endpoints — POST /weather, GET /weather, filter by city, delete, aggregation stats
7️⃣ Render deploy config — render.yaml, $PORT, env vars as code
8️⃣ README — full beginner guide with architecture diagram

💡 Every file has detailed comments explaining WHY, not just what. Written for any beginner who wants to learn FastAPI by reading real code — with full clarity on each step.

👉 If you're a beginner learning FastAPI, clone it and read the commits one by one. Each commit = one concept. Each file = one lesson. Built from scratch, so nothing is hidden.

🔥 This is Day 7 of a 50-day series. A new technology every day. Follow along!

🌐 See all days: https://lnkd.in/dhDN6Z3F

#TechFromZero #Day7 #FastAPI #Python #MongoDB #MongoDBAtlas #Render #AsyncPython #LearnByDoing #OpenSource #BeginnerGuide #100DaysOfCode #CodingFromScratch
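The "auto-validate input" idea from step 4 can be sketched with stdlib dataclasses standing in for Pydantic (the repo uses Pydantic, which also generates the Swagger docs; the field names here are assumptions, not the repo's schema):

```python
from dataclasses import dataclass

@dataclass
class WeatherRequest:
    """Pydantic stand-in: reject bad input before it reaches the database."""
    city: str
    units: str = "metric"

    def __post_init__(self):
        # Pydantic does this declaratively; here it is spelled out.
        if not self.city or not self.city.strip():
            raise ValueError("city must be a non-empty string")
        if self.units not in ("metric", "imperial"):
            raise ValueError("units must be 'metric' or 'imperial'")
```

In the real app, FastAPI calls the Pydantic model for you on every POST /weather body and turns a validation failure into a 422 response automatically.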
I recently expanded a Packt open-source microservices codebase (Flask + Python) and turned it into a deeper learning project around distributed systems architecture.

Here's what I built on top of the original:
→ Gave each service its own isolated SQLite3 database (User, Product, Order) - no shared tables, no hidden coupling
→ Extended the Order Service schema to support order items with unit price snapshots, so historical accuracy is preserved even if prices change later
→ Designed schemas that can scale independently: each service owns its data contract and can swap to PostgreSQL with zero impact on the others
→ Containerized all 4 services with Docker Compose, each with its own volume mount for persistence

The thing that clicked for me while doing this: a service boundary is a change boundary. If two things always deploy together, change together, and break together, they're not two services. They're one service pretending to be two, with extra network hops and failure points in between.

I also learned why bad decomposition is worse than a monolith. Splitting by technical layer (DatabaseService / APIService / LogicService) adds distributed complexity with zero business value. The right split is always along business capabilities - what the business actually does, not how the code is organized.

Concepts I got hands-on with:
- Database per Service pattern
- Service-Oriented Architecture (SOA)
- Synchronous (REST) vs asynchronous (event-driven) service communication
- Bounded contexts from Domain-Driven Design
- Independent deployment pipelines per service

If you're getting started with microservices, this pattern reference from Chris Richardson is the best mental model I've found: https://lnkd.in/gKcyNcEj

Original repo from PacktPublishing: https://lnkd.in/gENvsSej
The version I worked on: https://lnkd.in/gf_HxcyX

Building this from scratch (well, expanding it from scratch) made distributed systems a lot less abstract. Highly recommend it as a learning project if you're coming from a Flask/FastAPI background.

#Microservices #Python #Flask #Docker #DistributedSystems #SystemDesign #BackendDevelopment #SoftwareEngineering #LearningInPublic
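The unit-price-snapshot idea can be sketched with stdlib sqlite3. Table and column names here are assumptions, not the repo's actual schema, and the product table is inlined for brevity (in the real project it lives in the Product Service's own database):

```python
import sqlite3

# In-memory stand-in for the Order Service's own database.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT, price REAL);
    CREATE TABLE order_items (
        order_id INTEGER,
        product_id INTEGER,
        quantity INTEGER,
        unit_price REAL  -- snapshot of the price at order time
    );
""")
db.execute("INSERT INTO products VALUES (1, 'widget', 9.99)")

# Place an order: copy the CURRENT price into the order row.
(price,) = db.execute("SELECT price FROM products WHERE id = 1").fetchone()
db.execute("INSERT INTO order_items VALUES (100, 1, 2, ?)", (price,))

# Later the product price changes...
db.execute("UPDATE products SET price = 14.99 WHERE id = 1")

# ...but the historical order still totals against the snapshot.
(total,) = db.execute(
    "SELECT quantity * unit_price FROM order_items WHERE order_id = 100"
).fetchone()
```

Joining order items back to the live product price would silently rewrite history; the denormalized unit_price column is the whole point.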
How many unused Lambda functions are running in your AWS account right now? I built a simple Python script that helped identify 53 wasteful functions within minutes, highlighting gaps in visibility and cost optimization in serverless setups. Take a look: https://lnkd.in/d9evji-B
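The core of such a script is a filter over invocation metrics. The input shape below is an assumption for the sketch; a real script would populate it from CloudWatch invocation metrics via boto3:

```python
from datetime import datetime, timedelta, timezone

def unused_functions(functions, idle_days=90, now=None):
    """Flag functions with no recorded invocation in `idle_days` days.

    `functions` maps function name -> datetime of last invocation, or
    None if no invocation was ever recorded (assumed shape for this
    sketch, not a boto3 response).
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=idle_days)
    return sorted(
        name for name, last in functions.items()
        if last is None or last < cutoff
    )
```

The filter is trivial; the value of the script is in collecting the data at all, which is exactly the visibility gap the post describes.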
🗓️ Day 47/100 — 100 Days of AWS & DevOps Challenge

Today: containerize a Python application. Write the Dockerfile, build the image, run the container.

Dockerfile:
FROM python:3.11-slim
WORKDIR /app
COPY src/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY src/ .
EXPOSE 8085
CMD ["python", "server.py"]

$ sudo docker build -t nautilus/python-app .
$ sudo docker run -d --name pythonapp_nautilus -p 8097:8085 nautilus/python-app
$ curl http://localhost:8097/

This Dockerfile looks simple, but the ordering of instructions is deliberate — and it's the difference between a build that takes 30 seconds and one that takes 5 minutes.

Why COPY requirements.txt comes before COPY src/:
Docker builds images layer by layer. Each instruction creates a cached layer. If the input to a layer hasn't changed, Docker reuses the cache and skips rebuilding it.

Layer 4: RUN pip install -r requirements.txt
→ Only rebuilds when requirements.txt changes
→ Cached on every code-only change ✅

Layer 5: COPY src/ .
→ Rebuilds on every code change
→ But layer 4 is already cached, so pip install is skipped ✅

If you copy all files first and then install dependencies, any single code change invalidates the pip install cache — and you reinstall all packages on every build. In active development that's constant unnecessary work. The ordering fix is two lines swapped. The impact is significant.

CMD ["python", "server.py"] — exec form matters:
CMD python server.py wraps the command in /bin/sh -c, making sh PID 1. When docker stop sends SIGTERM, it goes to sh — which may not forward it to Python. Your app doesn't shut down gracefully, and after the grace period docker stop follows up with SIGKILL. CMD ["python", "server.py"] makes Python PID 1 directly. SIGTERM arrives at Python. Graceful shutdown works. One bracket difference, significant operational consequence.

Full Dockerfile optimization guide + Q&A on GitHub 👇
https://lnkd.in/gzRh2E2k

#DevOps #Docker #Python #Containers #Dockerfile #100DaysOfDevOps #KodeKloud #LearningInPublic #CloudEngineering #CICD #Flask #FastAPI
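A server that actually benefits from receiving SIGTERM as PID 1 looks something like this — a minimal sketch, not the server.py from the repo; the raise_signal call at the end simulates what docker stop delivers:

```python
import signal

shutting_down = False

def handle_sigterm(signum, frame):
    """Runs when docker stop delivers SIGTERM to the Python process.

    Only reached if Python is PID 1 (exec-form CMD); with shell-form
    CMD the signal lands on /bin/sh and may never get here.
    """
    global shutting_down
    shutting_down = True
    # A real server would stop accepting requests, flush, and close here.

signal.signal(signal.SIGTERM, handle_sigterm)

# Simulate `docker stop` by raising SIGTERM in-process.
signal.raise_signal(signal.SIGTERM)
```

Without the handler (or with shell-form CMD swallowing the signal), the process dies only when SIGKILL arrives, skipping any cleanup.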
The next step after Karpathy's LLM wiki idea:

Karpathy's wiki works on knowledge that sits still. A page on how attention works is just as useful today as it was a year ago. The LLM reads sources, pulls out ideas, writes clean articles, and keeps them cross-linked. You never have to rebuild the context from scratch when you want to ask something.

But this breaks the moment you ask a question that spans multiple things at once. "Which authors moved from Google to Anthropic between 2022 and 2024, and what did they publish after the move?" A Markdown page can't answer that. The answer lives in the connections between people, companies, papers, and dates. A wiki can describe that pattern only if someone already wrote an article about it. A graph lets you ask it directly, ask ten variations of it, and get an answer every time without rebuilding anything.

FalkorDB is an open-source graph database built for exactly this kind of question. The idea underneath it is simple, and it's what makes the whole thing fast enough to be practical. Most graph databases store connections as chains of pointers and follow them one by one through memory. FalkorDB stores the entire graph as a grid of zeros and ones (a sparse matrix) where a 1 means "these two things are connected." Once your graph is a grid, walking through it becomes math: each hop is one sparse matrix multiplication, so a five-hop query is five multiplications. That sounds like a small change, but it lets the CPU do work in parallel and reuse decades of sparse linear algebra research that nobody had applied to graph queries before. In practice, this is the difference between a seven-hop question returning in 350ms and the same question timing out.

The wiki and the graph aren't competing. They sit at different layers. The wiki stores what something is. The graph stores how everything connects. Any work where the connections matter as much as the things being connected belongs in a graph.

FalkorDB also comes with vector search built in, which matters for GenAI work. You can find a relevant part of the graph, search for similar items inside it, and return the answer, all in one query. Most GraphRAG setups build this by hand across two separate databases. Here you get it in one.

You run it through Docker, query it with Cypher, and connect from Python, JavaScript, Rust, Java, Go, or any Redis client. Open source and multi-tenant by default, so one instance can host thousands of separate graphs without spinning up thousands of servers.

Link to the repo in the first comment.

Karpathy nailed the foundation. The next layer is here.

Share this with your network if you found this insightful ♻️ Follow me (Akshay Pachaar) for more insights and tutorials on AI and Machine Learning!
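The hops-as-multiplication idea, in a pure-Python sketch: FalkorDB does this over sparse matrices (via GraphBLAS); a small dense boolean matrix keeps the mechanics readable here:

```python
def hop(frontier, adj):
    """One hop: boolean vector-matrix product over the adjacency matrix.

    frontier[i] is True if node i is currently reached; adj[i][j] is True
    if there is an edge i -> j. Applying this k times answers "what is
    reachable in exactly k hops" — one multiplication per hop.
    """
    n = len(adj)
    return [
        any(frontier[i] and adj[i][j] for i in range(n))
        for j in range(n)
    ]

# Tiny graph: 0 -> 1 -> 2 -> 3
adj = [
    [False, True,  False, False],
    [False, False, True,  False],
    [False, False, False, True],
    [False, False, False, False],
]
start = [True, False, False, False]   # begin at node 0
two_hops = hop(hop(start, adj), adj)  # nodes exactly two hops from 0
```

The sparse-matrix version does the same thing but skips all the zeros, which is why multi-hop queries over large graphs stay fast.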
I reduced my API response time from 2.3s to 140ms. No Redis. No CDN. No caching layer. Just 4 changes to my Django REST Framework setup that most tutorials never mention.

N+1 queries everywhere. My serializer accessed post.author.name on every row. 100 posts = 101 database queries. One select_related('author') brought it down to 1. Response time: 2.3s to 800ms instantly.

Using ModelSerializer for read endpoints. ModelSerializer builds fields dynamically on every request. It's up to 377x slower than raw Python dicts. Switched read-only endpoints to serializers.Serializer with explicit fields. Another 40% gone.

No pagination on list endpoints. Returning the entire table. 10,000 rows. Every request. Added CursorPagination: constant-time queries regardless of dataset size. OFFSET-based pagination breaks at high page numbers. Cursor doesn't.

Fetching fields I never used. Serializer returned 15 fields. Frontend used 6. Added .only() and trimmed the serializer.

2.3s to 140ms. Same server. Same database. Same $12/month VPS. The bottleneck was never my infrastructure. It was my code.

Run queryset.explain(analyze=True) on your slowest endpoint. You'll probably find the same mistakes. Which of these have you tried?

#Django #Python #API #WebPerformance #BuildInPublic
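The N+1 pattern is easy to reproduce outside Django — a stdlib sqlite3 sketch with a query counter; select_related effectively replaces the per-row lookups with the single JOIN below:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE post (id INTEGER PRIMARY KEY, title TEXT, author_id INTEGER);
    INSERT INTO author VALUES (1, 'ada'), (2, 'grace');
    INSERT INTO post VALUES (1, 'p1', 1), (2, 'p2', 2), (3, 'p3', 1);
""")

queries = 0

def fetch(sql, *args):
    """Run one SQL statement and count it, like a Django query log."""
    global queries
    queries += 1
    return db.execute(sql, args).fetchall()

# N+1: one query for the posts, then one per post for its author.
posts = fetch("SELECT id, title, author_id FROM post ORDER BY id")
naive = [(title, fetch("SELECT name FROM author WHERE id = ?", aid)[0][0])
         for _, title, aid in posts]
n_plus_1 = queries  # 1 list query + 3 author queries

# select_related equivalent: one JOIN fetches everything at once.
queries = 0
joined = fetch("""
    SELECT post.title, author.name FROM post
    JOIN author ON author.id = post.author_id
    ORDER BY post.id
""")
one_query = queries
```

Same rows out either way; the difference is 4 round trips versus 1, and the gap grows linearly with the number of posts.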