The Python packaging world is… vast. One of the most delightful recent developments is uv, a blazing-fast package and environment manager from Astral. It's fast. It's simple. And when you pair it with Coiled, running Python scripts on the cloud becomes just as seamless.

With uv + Coiled, you can:
- Declare script-specific dependencies directly in your Python file (`uv add --script`)
- Specify runtime config (container, region, hardware) with inline `# COILED` comments

Then run it on the cloud with:

```shell
uvx coiled batch run uv run process.py
```

Prefer a CLI-only approach? You can also do all this in a single command from your terminal:

```shell
uvx coiled batch run \
  --region us-east-2 \
  --container ghcr.io/astral-sh/uv:debian-slim \
  uv run \
  --with "pandas pyarrow s3fs" \
  process.py
```

Compare that to something like AWS Lambda or AWS Batch, where you'd typically need to:
- Package your script and dependencies into a ZIP file or build a Docker image
- Configure IAM roles, triggers, and permissions
- Handle versioning, logging, or hardware constraints

With Coiled, there's:
- No YAML jungle
- No clicking around in the AWS console
- No K8s

Just Python on the cloud without the overhead. Check out this demo from James Bourbeau to learn more.
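As a sketch of what such a script can look like, here is a minimal, hypothetical `process.py`. The `# /// script` block is uv's standard inline-dependency format (PEP 723); the `# COILED` lines are assumed directives mirroring the CLI flags above, and the processing step is a toy stand-in:

```python
# /// script
# requires-python = ">=3.12"
# dependencies = []  # e.g. ["pandas", "pyarrow", "s3fs"] for real S3 work
# ///
#
# COILED region us-east-2
# COILED container ghcr.io/astral-sh/uv:debian-slim

import json


def summarize(records):
    """Toy stand-in for the real processing step: count records, list fields."""
    return {"count": len(records), "keys": sorted(records[0]) if records else []}


if __name__ == "__main__":
    data = [{"id": 1, "value": 10}, {"id": 2, "value": 20}]
    print(json.dumps(summarize(data)))
```

With the metadata inline, `uv run process.py` resolves the script's own dependencies, and Coiled can pick up the runtime hints without any separate config file.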
More Relevant Posts
If you have AWS Lambda functions running on Python 3.9, it's time to plan your upgrade. AWS is ending support for the Python 3.9 runtime in Lambda starting December 15, 2025, right after Python 3.9's official end of life (EOL) on October 30, 2025.

What you need to know about the timeline:
• Dec 15, 2025: Lambda will stop applying security patches and updates to Python 3.9. Functions will still run, but on an unsupported runtime, so keeping them secure becomes your sole responsibility.
• Feb 3, 2026: You will no longer be able to create new functions using Python 3.9.
• March 9, 2026: You will no longer be able to update existing functions using Python 3.9.

My recommendation? Don't wait until the deadlines. Upgrade your functions to the latest supported Python runtime (e.g., Python 3.12 or newer) as soon as possible, so they remain secure, supported, and performant.

You can check your impacted functions in the AWS Health Dashboard, or use the AWS CLI command below for a full list, including published versions:

```shell
aws lambda list-functions --region us-east-1 --output text --query "Functions[?Runtime=='python3.9'].FunctionArn"
```
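If you'd rather script the audit, the filter that the `--query` expression applies can be sketched in plain Python over the `Functions` list that `aws lambda list-functions` (or boto3) returns. A hypothetical stdlib-only helper:

```python
def find_functions_on_runtime(functions, runtime="python3.9"):
    """Return ARNs of functions pinned to the given runtime.

    `functions` is the "Functions" list from `aws lambda list-functions`:
    each entry is a dict with at least "FunctionArn" and "Runtime" keys.
    """
    return [f["FunctionArn"] for f in functions if f.get("Runtime") == runtime]


if __name__ == "__main__":
    # Sample shaped like the CLI/boto3 response (hypothetical ARNs)
    sample = [
        {"FunctionArn": "arn:aws:lambda:us-east-1:123:function:old", "Runtime": "python3.9"},
        {"FunctionArn": "arn:aws:lambda:us-east-1:123:function:new", "Runtime": "python3.12"},
    ]
    print(find_functions_on_runtime(sample))
```

The same helper works per region: call it on each region's listing and concatenate the results.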
🚀 Python 3.9 is now End-of-Life — An Upgrade You Can't Ignore 🐍

Python 3.9 officially reached end of life (EOL) in October 2025, meaning no further bug fixes, performance updates, or security patches. ⚠️ If your AWS Lambda or Jenkins pipelines still rely on Python 3.9, migrate now to avoid runtime issues.

📌 Why Migration Matters
1. AWS will deprecate the Python 3.9 Lambda runtime.
2. No more security updates means higher production risk.
3. Major libraries like pandas, SQLAlchemy, and NumPy have dropped support.

Migration Steps
1️⃣ Update the Lambda runtime to Python 3.13
2️⃣ Validate dependencies for compatibility
3️⃣ Update CI/CD pipelines (Jenkins)
4️⃣ Rebuild in lower environments before promoting to production

Why Python 3.13? Active support until 2029, better performance, memory efficiency, and async handling.

Read the full breakdown here:
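Step 2 above (validating compatibility) is easier when code fails fast on an old interpreter. A minimal stdlib guard, sketched here with a hypothetical `check_runtime` helper — the `(3, 13)` floor is just this post's target, adjust to yours:

```python
import sys

REQUIRED = (3, 13)  # target (major, minor) floor; change as needed


def check_runtime(version_info=None, required=REQUIRED):
    """Return True if the interpreter meets the required (major, minor) floor."""
    vi = sys.version_info if version_info is None else version_info
    return tuple(vi[:2]) >= required


if __name__ == "__main__":
    if not check_runtime():
        print(
            f"Warning: running {sys.version_info[0]}.{sys.version_info[1]}, "
            f"expected >= {REQUIRED[0]}.{REQUIRED[1]}"
        )
```

Dropping a guard like this into a Lambda handler or Jenkins bootstrap step surfaces a stale runtime immediately instead of through a subtle dependency failure later.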
Azure Blob Storage with Python SDK 💻🚨

The goal of the project is to connect 🛜 a Python API to Azure Blob Storage: load, read, display, and preprocess blobs.

Day 1
1. Create a Key Vault to store keys and secrets.
2. Create the main service principal, which allows our local Python API to communicate with Azure.

Environment variables:

```
AZURE_CLIENT_ID=<client_id>
AZURE_TENANT_ID=<tenant_id>
AZURE_CLIENT_SECRET=<secret>
AZURE_VAULT_URL=<key_vault_url>
AZURE_STORAGE_URL=<storage_account_url>
```

```python
from azure.identity import ClientSecretCredential
from azure.keyvault.secrets import SecretClient
from dotenv import load_dotenv
import os

load_dotenv()

client_id = os.environ["AZURE_CLIENT_ID"]
tenant_id = os.environ["AZURE_TENANT_ID"]
client_secret = os.environ["AZURE_CLIENT_SECRET"]
vault_url = os.environ["AZURE_VAULT_URL"]
secret_name = "your_secret"

# Create a credential for the service principal
credential = ClientSecretCredential(
    tenant_id=tenant_id,
    client_id=client_id,
    client_secret=client_secret,
)

# Create a secret client object
secret_client = SecretClient(vault_url=vault_url, credential=credential)

# Retrieve the secret value from Key Vault
secret = secret_client.get_secret(secret_name)
print("The secret value is: " + secret.value)
```

#Azure #BlobStorage #Python #DataEngineering
Mastering Docker Volumes with Python

One of the biggest advantages of Docker is data persistence. Even if your container is deleted, your data doesn't have to be lost!

Here's a simple workflow I built:
1️⃣ Create a Docker container with a volume attached.
2️⃣ Use a Python program inside the container to write data into a file stored in the mounted volume.
3️⃣ Delete the container.
4️⃣ Re-run a new container with the same volume, and voilà — your data is still there.

Docker commands:

```shell
# Create a container with a volume
docker run -it --name mycontainer -v myvolume:/data python:3.10 bash

# Run the Python script inside the container
python save_data.py

# Exit and remove the container
docker rm -f mycontainer

# Run a new container with the same volume
docker run -it -v myvolume:/data python:3.10 bash
cat /data/mydata.txt
```

You'll see your file and data still intact even though the container is gone. This is how Docker volumes ensure persistent storage across containers.

Key takeaway: containers are ephemeral, but volumes are persistent. Perfect for databases, logs, configs, or any data you don't want to lose.

Are you using Docker volumes in your projects yet? If yes, what's your go-to use case?

#Docker #Python #DevOps #Containerization #Volumes #DataPersistence
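The `save_data.py` referenced in step 2️⃣ could be as small as the sketch below (hypothetical; the path is parameterized so the same function works inside the container, where the volume mounts at `/data`):

```python
from pathlib import Path


def write_message(path, text):
    """Append one line to a file on the mounted volume and return its contents."""
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)  # create the directory if missing
    with p.open("a", encoding="utf-8") as f:
        f.write(text + "\n")
    return p.read_text(encoding="utf-8")


if __name__ == "__main__":
    # Inside the container you'd pass "/data/mydata.txt";
    # a temp dir is used here so the script also runs in a local dry run.
    import os
    import tempfile

    demo = os.path.join(tempfile.mkdtemp(), "mydata.txt")
    print(write_message(demo, "hello from a container"))
```

Because the file lives on the volume rather than the container's writable layer, the second container in the workflow above sees every line the first one wrote.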
Requests worked fine for years. Then async happened. Now 100 HTTP calls take 0.19 seconds instead of 10. The gap isn't small.

I've watched this shift happen in real time. HTTPX isn't just another Python library. It's solving problems that Requests simply can't.

The numbers tell the story:
• HTTPX: 100 concurrent requests in 0.19 seconds
• Requests: the same task takes over 10 seconds
• Even in sync mode, HTTPX runs nearly twice as fast

But speed isn't everything. HTTPX brings features that matter:
🚀 Native async/await support
🔗 HTTP/2 capabilities
⚡ Better connection pooling
🎯 Drop-in compatibility with Requests

HTTPX comes from Encode, the team behind Django REST framework. They know what modern Python applications need.

Requests still works great for simple tasks. But if you're building anything that handles multiple HTTP calls, HTTPX makes sense. One library. Both sync and async. Future-proof.

The performance difference in concurrent operations isn't marginal. It's an order of magnitude better.

What's holding you back from making the switch?

#Python #AsyncProgramming #Async #WebDevelopment

𝗦𝗼𝘂𝗿𝗰𝗲: https://lnkd.in/eV2KkxUR
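The concurrency pattern behind those numbers is ordinary asyncio fan-out: all 100 calls wait on the network at the same time, so total time is roughly one call's latency rather than the sum. A stdlib-only sketch with a simulated fetch (in real code, `fetch` would `await client.get(url)` on an `httpx.AsyncClient`):

```python
import asyncio
import time


async def fetch(i, delay=0.02):
    """Stand-in for one HTTP call; the sleep simulates network latency."""
    await asyncio.sleep(delay)
    return i


async def fetch_all(n):
    # All n "requests" run concurrently under one event loop
    return await asyncio.gather(*(fetch(i) for i in range(n)))


if __name__ == "__main__":
    start = time.perf_counter()
    results = asyncio.run(fetch_all(100))
    print(f"{len(results)} calls in {time.perf_counter() - start:.2f}s")
```

Run serially, 100 calls at 20 ms each would take about 2 seconds; gathered, they finish in roughly the time of one call — the same shape as the 0.19 s vs 10 s comparison above.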
Handling Large JSON Files in Python: Efficient Read, Write, and Update Strategies

Hello, I'm Maneshwar. I'm currently working on FreeDevTools, building one place for all dev tools, cheat codes, and TLDRs — a free, open-source hub where developers can quickly find and use tools without the hassle of searching all over the internet.

Working with JSON is common in data engineering, logging, and APIs. But what happens when your JSON file isn't a neat 2 KB config, but a monster with 14 lakh (1.4 million) lines?

If you try to load it in Python with json.load(), you'll likely run into memory errors. If you attempt a direct seek + write, you risk corrupting the structure. Large JSON files demand a different strategy. Let's explore the best ways to handle massive JSON files in Python.

- JSON is not append-friendly – it's usually one giant object/array, and changing a single element can shift the rest of the file.
- Memory consumption – parsing the entire file at once may exceed system memory.
- In-place edits are fragile – unless your update is exactly the same length in bytes, everything after it shifts.

https://lnkd.in/gBP9drGU
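One common workaround for the append problem described above: store big datasets as JSON Lines (one JSON object per line) instead of one giant array. Appends become trivial and reads can stream record by record with flat memory use. A minimal stdlib sketch (hypothetical helpers, not from the linked article):

```python
import json


def append_record(path, record):
    """Append one record as a single line, without touching the rest of the file."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


def stream_records(path):
    """Yield records one at a time; memory use stays flat however big the file is."""
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)
```

This trades the single-document structure for per-record independence, which is exactly what logging and data-pipeline workloads usually want; tools like `jq` and pandas (`pd.read_json(..., lines=True)`) understand the format directly.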
🎯 Learning to Automate and Optimise with Python!

Thrilled to share a recent project that showcases the power of Python automation and code efficiency! This week, I explored how small pieces of Python code can make both data handling and automation incredibly powerful.

📘 Project 1 - Smarter data handling with dictionary & list comprehensions.
📧 Project 2 - Automating birthday emails with SMTP, datetime & pandas.

I built an application that automates sending personalised birthday emails to contacts, combining several powerful Python tools. Here's what went into it:

1. Automation with SMTP, pandas, and datetime 📅📧
- Email automation: leveraged the smtplib module to securely connect to an email server and send personalised emails.
- Data handling: used pandas' pd.read_csv to efficiently read and process a CSV file of contact information (name, email, birthday).
- Real-time logic: integrated the datetime module to check whether a contact's birthday matches the current date, ensuring the right person gets the right email at the right time.

2. Boosting performance with Python comprehensions
A crucial part of this project involved optimizing data processing:
- Dictionary comprehensions: used to quickly and cleanly map data from the pandas DataFrame into a dictionary, making it instantly accessible for lookups.
- List comprehensions: employed in a separate exercise to generate lists efficiently, emphasizing Python's idiomatic approach to data manipulation (like the NATO Phonetic Alphabet project).

3. Deploying on the cloud ☁️
To ensure this ran reliably, I deployed the script on PythonAnywhere. This was key to:
- Maximizing usability: running the code on a cloud server ensures it executes daily without needing my local machine to be on.
- Practical application: gained hands-on experience deploying and scheduling a Python script in a real-world, always-on cloud environment.
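The core date-matching logic described in section 1 can be sketched with just the stdlib. This hypothetical `birthdays_today` helper stands in for the pandas/datetime check (real code would load contacts with `pd.read_csv` and send mail via `smtplib`):

```python
import datetime


def birthdays_today(contacts, today=None):
    """Return contacts whose (month, day) matches today's date.

    `contacts` mirrors rows of the CSV: dicts with name, email, month, day.
    """
    today = today or datetime.date.today()
    return [c for c in contacts if (c["month"], c["day"]) == (today.month, today.day)]


if __name__ == "__main__":
    contacts = [
        {"name": "Ada", "email": "ada@example.com", "month": 12, "day": 10},
        {"name": "Alan", "email": "alan@example.com", "month": 6, "day": 23},
    ]
    for c in birthdays_today(contacts, today=datetime.date(2025, 6, 23)):
        # The real version opens smtplib.SMTP(...) and sends a personalised letter
        print(f"Send birthday email to {c['name']} <{c['email']}>")
```

Passing `today` explicitly (instead of always calling `date.today()` inside) also makes the matching logic trivially testable, which matters for a job that only fires once a year per contact.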
This project was a great way to solidify my understanding of automating workflows and writing concise, performant Python code. Happy to answer any questions about the implementation! #Python #Automation #SMTP #Pandas #Datetime #PythonAnywhere #Coding #DataScience #SoftwareDevelopment #Efficiency
More from this author
- October updates: Filestores and sidecars now available for Coiled Batch, upcoming events, and how Double River scales equity trading simulations. (Coiled · 6mo)
- August updates: lightweight container support, marimo notebooks on the cloud, and how Guac scales demand forecasting to reduce food waste. (Coiled · 8mo)
- July updates: uv support, distributed GPU model training, and how KoBold Metals accelerates mineral discovery with Coiled. (Coiled · 9mo)