Every Python dev on AWS runs into this at some point: three Lambda functions, all sharing the same Pydantic models. You copy-paste once. Then twice. Then you spend a Tuesday figuring out why one function is missing a field.

Google it, and every article from 2019 gives the same answer: "Just use Lambda Layers!" But Lambda Layers are not a package manager. Yan Cui said it well back in 2021: "Lambda Layer is a poor substitute for existing package managers." No IDE autocomplete, no proper versioning, and if someone deletes a layer version, your deploys break until you fix the ARN.

There is a cleaner way: uv workspaces with local packages, bundled into self-contained ZIPs by CDK. Each function gets only what it needs. Normal Python packaging, no AWS-specific workarounds.

Full blog post with the CDK construct (demo GitHub repository link included):
👉 https://lnkd.in/d2A8AqAv

PS: One thing I want to mention regarding my blog posts in general: I don't want to write another "Hello World with AWS Lambda" tutorial. There are enough of those. What I find interesting are the edge cases: things that are barely documented but matter a lot once you run real workloads in production.

#AWS #Python #Serverless #CDK
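The linked post has the actual construct; as a rough sketch of the packaging side (package and function names here are hypothetical, not from the repo), a uv workspace wires a shared models package into each function like this:

```toml
# functions/orders/pyproject.toml — one Lambda function's manifest
# (the workspace root would declare: [tool.uv.workspace] members = ["packages/*", "functions/*"])
[project]
name = "orders-fn"            # hypothetical function package
version = "0.1.0"
requires-python = ">=3.12"
dependencies = [
    "pydantic>=2",
    "shared-models",          # the local package holding the shared Pydantic models
]

[tool.uv.sources]
shared-models = { workspace = true }  # resolve from the workspace, not from PyPI
```

With `workspace = true`, `uv sync` installs the local package in place, so the IDE resolves imports normally and each function's lockfile only pulls in what that function declares.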
Andreas John’s Post
More Relevant Posts
-
An expert comparison of Flask and FastAPI for Python backends. Learn architectural trade-offs, deployment patterns with Docker and Kubernetes, performance tuning, and business impact for New Zealand projects.
-
🚨 Anthropic accidentally leaked their entire source code yesterday. How to deploy Anthropic Claude Opus 4.6 for free 😄

Download repo: https://lnkd.in/eHyiGhpU

What this repo contains
The project has two build paths:
→ a Python workspace under src/
→ a Rust port under rust/
So "build this code" can mean either: run the Python version, or compile the Rust version.

Option A: Build and run the Python workspace

1) Install Python. Make sure you have Python 3 installed. Check:
   python3 --version

2) Go into the repo folder:
   cd claw-code

3) Inspect the Python workspace. The README says the active Python code is in src/ and the tests are in tests/. You can confirm:
   ls
   ls src
   ls tests

4) Run the main Python entrypoint. The README shows these commands:
   python3 -m src.main summary
   python3 -m src.main manifest
   python3 -m src.main subsystems --limit 16
   These are the first commands to try because src.main is the CLI entrypoint.

5) Run the tests. Use the verification command from the README:
   python3 -m unittest discover -s tests -v
   That checks whether the Python workspace is functioning as expected.

6) Run additional inspection commands. The README also lists:
   python3 -m src.main parity-audit
   python3 -m src.main commands --limit 10
   python3 -m src.main tools --limit 10
   Use those after the basic commands work.

Option B: Build the Rust port

1) Install Rust. You need Rust and Cargo. Check:
   rustc --version
   cargo --version

2) Enter the Rust folder:
   cd claw-code/rust

3) Build in release mode. The README gives the exact build command:
   cargo build --release
   That is the official Rust build step for this repo.

4) Run the built binary. The README says there is a crates/claw-cli crate, so after building, the binary is likely at:
   ./target/release/claw-cli
   If that name does not exist, inspect the release folder:
   ls target/release
   The CLI crate is identified in the README as crates/claw-cli, which strongly suggests the executable will be named claw-cli.
Recommended build order
If you are new to the repo, do it in this order.

Path 1: easiest start
   git clone https://lnkd.in/eHyiGhpU
   cd claw-code
   python3 -m src.main summary
   python3 -m unittest discover -s tests -v

Path 2: compile the Rust port
   cd rust
   cargo build --release

This order makes sense because the README says the repo is now "Python-first," while the Rust workspace is the systems-language port.

Full step-by-step from scratch
   # 1) Clone
   git clone https://lnkd.in/eHyiGhpU
   cd claw-code
   # 2) Check Python
   python3 --version
   # 3) Try the Python CLI
   python3 -m src.main summary
   python3 -m src.main manifest
   python3 -m src.main subsystems --limit 16
   # 4) Run tests
   python3 -m unittest discover -s tests -v
   # 5) Optional: inspect parity and inventories
   python3 -m src.main parity-audit
   python3 -m src.main commands --limit 10
   python3 -m src.main tools --limit 10
   # 6) Build the Rust version
   cd rust
   cargo build --release
   # 7) Inspect compiled binaries
   ls target/release
-
Your Django app went from 200MB to 8GB RAM usage in three weeks. Memory leaks don't crash dramatically—they creep up slowly until your servers start swapping and alerts start screaming. This guide shows you how to profile Python applications in production using memory_profiler and tracemalloc without causing downtime or performance impact. Learn to catch circular references, global variable accumulation, and resource leaks before they kill your application. #Python #DevOps #PerformanceOptimization #Django Learn More: https://lnkd.in/eWe2bRhT
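The linked guide has the production details; as a minimal stdlib-only sketch of the tracemalloc half, diffing two snapshots shows exactly which source lines are accumulating memory (the growing list below simulates a leaky module-level cache):

```python
import tracemalloc

tracemalloc.start()

# Baseline snapshot before the suspect workload runs.
before = tracemalloc.take_snapshot()

# Simulate a leak: a cache that only ever grows.
cache = []
for i in range(10_000):
    cache.append("payload-%d" % i)

after = tracemalloc.take_snapshot()

# Diff the snapshots: which lines allocated the most *new* memory?
top = after.compare_to(before, "lineno")
for stat in top[:3]:
    print(stat)
```

In production you would take snapshots minutes apart (e.g. from a signal handler or admin endpoint) instead of around a loop, but the compare_to diff is the same.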
-
This is bigger than it looks.

First, understand the problem. You buy a powerful server with 10 CPU cores. You build a Python API. You deploy it. Python uses 1 core. The other 9 sit there, idle, doing nothing. You just paid for 10, got 1.

This wasn't a bug. It was a design decision from the 1990s called the GIL (Global Interpreter Lock): a rule that said only ONE thread runs at a time, no matter how many cores you have.

Why did it exist? It made Python safer and simpler to build back then. Memory management was easier when only one thing ran at a time. It was a smart tradeoff for 1991. For 2025? Not so much.

Since Python couldn't use multiple cores in one process, the workaround was:
→ Run 10 separate Python processes instead of 10 threads
→ Each process gets its own RAM, its own startup time, its own everything
→ 10 processes × 500MB RAM = 5GB just to use the machine you already paid for

It worked. But it was expensive, wasteful, and messy. Teams switched to Go or Node.js specifically because of this.

What actually changed?
🔹 Python 3.13 (October 2024) → free-threaded build introduced. Experimental.
🔹 Python 3.14 (2025) → free-threaded build officially supported. No longer experimental. Still optional.

Note: the GIL hasn't been deleted. It has been made OPTIONAL: you choose to disable it. This was a deliberate, careful decision; the Python team didn't want to break the entire ecosystem overnight.

FastAPI 0.136.0 now officially supports running on this free-threaded Python.

So what does this actually mean? Remember that 10-core machine? With free-threaded Python, FastAPI can now actually use those 10 cores, inside a single process, running threads in true parallel.

Real benchmark numbers:
→ 5 threads on standard Python (with GIL): same speed as 1 thread. No improvement.
→ 5 threads on free-threaded Python (no GIL): 4.8x faster.

In practical terms for your API:
→ Same traffic, fewer servers needed
→ Fewer servers = less RAM, less cost, less complexity
→ Response times improve under heavy load
→ Scaling becomes a choice, not a survival requirement

━━━ Who Should Pay Attention? ━━━
If you're building:
🔹 ML inference APIs: running a model on every request
🔹 Data processing endpoints: transforming, aggregating, scoring
🔹 Real-time pipelines: processing events as they arrive
🔹 Document parsing: PDFs, contracts, files at volume
🔹 Any API that actually computes something, not just fetches from a DB

One caution: the GIL was also acting as an invisible safety net, preventing two threads from accidentally touching the same data at the same time. Without it, if two threads modify the same variable simultaneously, you can get corrupted data or crashes. These bugs are hard to reproduce and painful to debug.

The gains are real, but they require intentional adoption. If you're building Python APIs, this release deserves more than a scroll. Read the changelog. Test it. The ceiling just got raised.

Thank you FastAPI
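You can check at runtime whether you are actually on a free-threaded interpreter before expecting any speedup. A small sketch (the 4.8x figure above is the post's benchmark, not this snippet's; on a standard GIL build this runs correctly but without parallelism):

```python
import sys
import sysconfig
from concurrent.futures import ThreadPoolExecutor

def cpu_work(n: int) -> int:
    # Pure-Python CPU-bound work: threads only run this in true
    # parallel on a free-threaded build.
    total = 0
    for i in range(n):
        total += i * i
    return total

# Py_GIL_DISABLED is set on free-threaded builds; sys._is_gil_enabled()
# (added in 3.13) reports the actual runtime state, since the GIL can be
# re-enabled even on a free-threaded build. Guarded for older versions.
free_threaded_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))
gil_enabled = getattr(sys, "_is_gil_enabled", lambda: True)()
print(f"free-threaded build: {free_threaded_build}, GIL enabled: {gil_enabled}")

with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(cpu_work, [200_000] * 5))
print(len(results))
```

The same code is correct on both builds; only the wall-clock time changes, which is exactly why the migration can be incremental.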
FastAPI 0.136.0 officially supports: ✨ free-threaded Python 🐍 ✨ (this announcement has no GIL puns) Thanks Sofie, 🍓 Patrick, Nathan, Jonathan 🙌 https://lnkd.in/dvaUFh2F
-
We recently received requests from clients around Python, Machine Learning, and Flask. Before diving into the more complex topics, we decided to start with a solid foundation: building a simple CRUD REST API using Flask and SQLite. No ORMs. No extra dependencies. Just Flask, Python's built-in sqlite3, and Pytest. The goal is straightforward: understand how a REST API actually works from the ground up, before adding layers of abstraction on top of it. It is the kind of foundation that makes everything else easier to learn and easier to debug. In this tutorial, we cover: → Structuring a Flask project with the application factory pattern → Managing database connections per request using Flask's g object → Building five CRUD endpoints with proper validation and error handling → Writing a complete Pytest suite with isolated test databases This is the first article in what will become a series. JWT authentication is coming up next. Read the full tutorial here: https://lnkd.in/g_DwkMDk
-
Wow. A developer used OpenAI Codex to rewrite the leaked Claude Code source code in Python, so that storing the code would not violate copyright. https://lnkd.in/gWkVdK6v
-
Just published a new article on a hidden AWS Lambda cost issue I recently ran into. Old Lambda versions with SnapStart enabled were quietly adding to our bill, even though they were no longer being used. To fix this, I built a human-in-the-loop cleanup workflow using AWS Lambda Durable Functions. Read here: Medium: https://lnkd.in/gQe6HRbV AWS Builder Center: https://lnkd.in/ge9g4qhK GitHub: https://lnkd.in/g2gZ8Hbh #AWS #Lambda #SnapStart #Python #Serverless #DurableFunctions
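The linked article has the full workflow; the detection step can be sketched as a pure function over the shape that boto3's lambda list_versions_by_function returns (the sample data below is illustrative, not from the article, and the keep-newest policy is my assumption):

```python
def snapstart_versions_to_clean(versions, keep_newest=1):
    """Return published version numbers with SnapStart enabled,
    excluding $LATEST and the `keep_newest` (>= 1) most recent ones.

    `versions` has the shape of boto3's
    client("lambda").list_versions_by_function(...)["Versions"].
    """
    published = [
        v for v in versions
        if v["Version"] != "$LATEST"
        # "ApplyOn" is "PublishedVersions" when SnapStart is on, "None" otherwise.
        and v.get("SnapStart", {}).get("ApplyOn", "None") != "None"
    ]
    published.sort(key=lambda v: int(v["Version"]))
    return [v["Version"] for v in published[:-keep_newest]]


# Illustrative response fragment: three old SnapStart versions, one without.
sample = [
    {"Version": "$LATEST"},
    {"Version": "1", "SnapStart": {"ApplyOn": "PublishedVersions"}},
    {"Version": "2", "SnapStart": {"ApplyOn": "PublishedVersions"}},
    {"Version": "3", "SnapStart": {"ApplyOn": "PublishedVersions"}},
    {"Version": "4", "SnapStart": {"ApplyOn": "None"}},
]
print(snapstart_versions_to_clean(sample))  # ['1', '2']
```

Keeping the filter pure makes it easy to unit test; the human-in-the-loop approval and the actual delete_function calls then sit around it in the durable workflow.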
-
🚀 From Learning Python to Understanding Backend Systems When I first started learning Python, my focus was mainly on syntax and basic problem-solving. But as I explored more, I realized that backend development is where logic meets real-world application. That’s when I began diving deeper into the Python backend ecosystem. Instead of just learning tools, I started understanding how backend systems actually work—how requests are processed, how data flows between the server and database, and how APIs connect everything together. 🔧 Tools & Technologies I’m Exploring: • Python for core logic • Django for structured and scalable applications • Flask / FastAPI for lightweight API development • Relational Databases for data management • REST APIs for communication between systems • Git & GitHub for version control • JWT for authentication • Basic backend security practices • Deployment fundamentals 💡 What Changed in My Approach: Earlier, I focused on “what to learn.” Now, I focus on “how things work.” This shift helped me: • Understand backend architecture more clearly • Write better and cleaner code • Think like a developer instead of just a learner I’m still at the beginning of this journey, but I’m consistently building, experimenting, and improving every day. The goal is simple — to become a backend developer who not only writes code, but understands the system behind it. Excited for what’s ahead 🚀 #Python #BackendDevelopment #Django #Flask #FastAPI #RESTAPI #LearningJourney #SoftwareDevelopment #snsinstitutions #snsdesignthinkers #designthinking
-
Day 10: Python Code Tools — When Language Fails, Logic Wins 🐍

Welcome to Day 10 of the CXAS 30-Day Challenge! 🚀 We've connected our agents to external APIs (Day 9), but what happens when you need to perform complex calculations or multi-step logic that doesn't require a database call?

The Problem: The "Calculator" Hallucination
LLMs are incredible at understanding context, but they are not calculators. They are probabilistic next-token predictors. If you ask an LLM to calculate a 15% discount on a $123.45 cart total with a weight-based shipping surcharge, it might give you an answer that looks right but is mathematically wrong. In an enterprise environment, "close enough" isn't good enough for billing.

The Solution: Python Code Tools
In CX Agent Studio, you can empower your agent with deterministic logic by writing custom Python functions directly in the console.

How it works: you define a function in a secure, server-side sandbox.
The LLM's role: the model shifts from calculator to orchestrator. It extracts the variables from the conversation (e.g., weight, location, loyalty tier), calls your Python tool, and receives an exact, guaranteed result.
Safety first: the code runs in a secure, isolated sandbox, ensuring enterprise-grade security while giving your agent "mathematical superpowers." 🚀

The Day 10 Challenge: The EcoShop Shipping Calculator
EcoShop needs a reliable way to quote shipping fees. The rules are too complex for a prompt:
→ Base fee: $5.00
→ Weight surcharge: +$2.00 per lb for every pound above 5 lbs
→ International: flat +$15.00 surcharge
→ Loyalty: Gold (20% off), Silver (10% off)

Your task: write the Python function for this logic. Focus on handling the weight surcharge correctly (including fractions of a pound) and applying the loyalty discount to the final total.

Stop asking your LLM to do math. Give it a tool instead.
🔗 Day 10 Resources
📖 Full Day 10 Lesson: https://lnkd.in/gGtfY2Au
✅ Day 9 Milestone Solution (OpenAPI): https://lnkd.in/g6hZbtGX
📩 Day 10 Challenge Deep Dive (Substack): https://lnkd.in/g6BM8ESp

Coming up tomorrow: we wrap up the week by looking at Advanced Tool Orchestration: how to manage multiple tools without confusing the model. See you on Day 11!

#AI #AgenticAI #GenerativeAI #GoogleCloud #Python #LLM #SoftwareEngineering #30DayChallenge #AIArchitect #DataScience #CXAS
-
My Python loop processed 5 reports in 2.5 seconds. After adding one decorator: 0.54 seconds. I changed zero call sites. Here's how it works.

A function that's slow because it does real work (database queries, aggregations) gets decorated with @app.direct_task. That's it. The caller doesn't change. Exception handling doesn't change. The return type doesn't change.

In development: set one environment variable and the decorator is invisible. Tests pass, and the function runs inline as it always did.

In production: start a worker. The same call now routes to a distributed runner. The caller still blocks and gets the value back directly. No .result. No futures. No refactoring.

For parallelism without touching the call site at all: add parallel_func to the decorator with a small helper that describes how to split the input. Pynenc dispatches one task per chunk, collects results, and returns the same type the caller expected.

Full write-up + runnable demo (no Docker, no Redis, runs in ~30 seconds): https://lnkd.in/eySD7h_H

What's the slowest loop in your codebase right now?

#python #backend #distributedsystems #opensource
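This isn't Pynenc's actual API (see the write-up for that); as a toy stdlib illustration of the underlying pattern, here is a decorator that routes the same blocking call either inline or through a worker pool depending on an environment variable, with no change at the call site (all names are mine):

```python
import os
from concurrent.futures import ThreadPoolExecutor
from functools import wraps

_pool = ThreadPoolExecutor(max_workers=4)  # stand-in for a distributed runner


def direct_task(func):
    """Run inline in 'dev' mode, route through a runner in 'prod' mode.

    Either way the caller just calls func(...) and gets the value back:
    no .result(), no futures, no changed call sites.
    """
    @wraps(func)
    def wrapper(*args, **kwargs):
        if os.environ.get("TASK_MODE", "dev") == "dev":
            return func(*args, **kwargs)  # inline: tests see the plain function
        # Routed, but still blocking: the caller's code is identical.
        return _pool.submit(func, *args, **kwargs).result()
    return wrapper


@direct_task
def build_report(n: int) -> int:
    return sum(i * i for i in range(n))


os.environ["TASK_MODE"] = "dev"
inline = build_report(1000)
os.environ["TASK_MODE"] = "prod"
routed = build_report(1000)
print(inline == routed)  # True: same value either way
```

A real system swaps the thread pool for a broker plus worker processes, but the contract the post describes is the same: the decorator owns the routing decision, and the call site never learns about it.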
AWS Architect | Blockchain & Web3 Enthusiast
1w · Do you know if the same approach can be applied using Poetry instead of uv?