Nothing hurts user experience more than "We're sorry, the service is temporarily unavailable." Especially when it's a third-party API you don't control. Instead of hoping for the best, I wrap all external API calls with this resilient Python decorator. It automatically handles retries with exponential backoff, so your app survives temporary outages.

💪 Resilience: Survives network blips, API throttling, and temporary outages
⚡ Smart Backoff: Won't overwhelm the API with immediate retries
🔧 Reusable: One decorator protects all your API calls
📝 Logging: See exactly what's failing and when

Pro Tip: Combine this with circuit breakers for enterprise-grade resilience!

What's the flakiest API you've ever had to integrate with? War stories welcome in the comments! 🍿

#Python #API #Microservices #Reliability #SoftwareArchitecture #BackendDevelopment #WebDevelopment #Resilience
How to survive API outages with a resilient Python decorator
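A minimal sketch of such a decorator, using only the standard library. The parameter names (max_attempts, base_delay) and the exact backoff-with-jitter formula are my assumptions, not the post's actual code:

```python
import functools
import logging
import random
import time

logger = logging.getLogger(__name__)

def retry(max_attempts=4, base_delay=0.5, max_delay=8.0,
          exceptions=(ConnectionError, TimeoutError)):
    """Retry the wrapped call with exponential backoff and jitter."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions as exc:
                    if attempt == max_attempts:
                        raise  # out of retries: surface the error
                    # Exponential backoff: base, 2x, 4x, ... capped, plus jitter
                    # so a fleet of clients doesn't retry in lockstep.
                    delay = min(base_delay * 2 ** (attempt - 1), max_delay)
                    delay += random.uniform(0, delay / 2)
                    logger.warning("Attempt %d/%d failed (%s); retrying in %.2fs",
                                   attempt, max_attempts, exc, delay)
                    time.sleep(delay)
        return wrapper
    return decorator

@retry(max_attempts=3, base_delay=0.01)
def flaky_call(state={"calls": 0}):
    # Simulated flaky API: fails twice, then succeeds.
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("simulated network blip")
    return "ok"

print(flaky_call())  # succeeds on the third attempt
```

Catching only specific exception types matters: a retry on a 400 Bad Request just repeats the same mistake, so the decorator takes an `exceptions` tuple rather than catching everything.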
Stop debugging with print() or consoles and start tracing your code. For years, we've relied on logs. But in a busy production app, logs are a chaotic mess. Thousands of "User signed in" and "Payment failed" messages are jumbled together, making it impossible to follow one user's journey. When a user says "it failed," you have to ask "when?" and then grep through a sea of text.

I've been working with Logfire (from the creators of Pydantic & FastAPI), and it's a game-changer. The "a-ha!" moment is when you stop thinking in "logs" and start thinking in "traces" and "spans." The attached image shows it clearly:

❌ A Log says: "Email already exists." (Who? When? In what request? What happened next?)
✅ A Trace says: "This specific sign-up request (Trace ID: abc-123) failed." When you open it, you see the whole story:

POST /signup (Received)
check_email_exists (Took 55ms) [STATUS: FAILED]
verify_user_photo (NOT RUN)
create_user_in_db (NOT RUN)

I'm implementing this on my sign-up flow right now. Instead of just logging, I've wrapped each step (check_email_exists, verify_user_photo, etc.) in a logfire.span(). Now I can get counts, timings, and failure rates for every single "event" in my funnel. I can finally answer questions like: "What percentage of users fail photo verification?" "Is my database write step getting slower?"

It's like switching from a blurry photo to a full end-to-end video of every request. If you're still debugging with print() in production, you're flying blind.

#Logfire #Observability #Developer #Backend #Python #FastAPI #NodeJS #ExpressJS #DevOps
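To make the trace/span model concrete without installing anything, here is a toy tracer in pure standard-library Python. The Tracer class, its field layout, and the sign_up steps are all illustrative inventions; a real tool like Logfire records spans with its own API (e.g. wrapping each step in logfire.span()) and does far more:

```python
import time
import uuid
from contextlib import contextmanager

class Tracer:
    """Toy tracer: records (trace_id, name, duration_ms, status) per span."""
    def __init__(self):
        self.spans = []

    @contextmanager
    def span(self, trace_id, name):
        start = time.perf_counter()
        status = "OK"
        try:
            yield
        except Exception:
            status = "FAILED"
            raise
        finally:
            ms = (time.perf_counter() - start) * 1000
            self.spans.append((trace_id, name, ms, status))

tracer = Tracer()

def sign_up(email, existing):
    trace_id = str(uuid.uuid4())[:8]  # one ID ties every span in the request together
    with tracer.span(trace_id, "POST /signup"):
        with tracer.span(trace_id, "check_email_exists"):
            if email in existing:
                raise ValueError("Email already exists")
        with tracer.span(trace_id, "create_user_in_db"):  # NOT RUN if the check fails
            existing.add(email)

users = {"a@example.com"}
try:
    sign_up("a@example.com", users)  # duplicate email -> the trace shows exactly where it died
except ValueError:
    pass

for trace_id, name, ms, status in tracer.spans:
    print(f"[{trace_id}] {name}: {status} ({ms:.2f} ms)")
```

Note how the failed check marks both its own span and the enclosing request span as FAILED, while create_user_in_db never appears at all — that is the "NOT RUN" visibility a flat log line can't give you.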
Tech With Tim: Learn FastAPI With This ONE Project

Dive into a hands-on tutorial where you build a production-grade FastAPI app that lets users upload, fetch, and manage photos and videos using ImageKit's API and AI-powered DAM. You'll see the full architecture in action, from basic route setup to Pydantic models, error handling, and database connections (plus a Streamlit frontend demo). Along the way, you'll implement CRUD operations, secure endpoints with JWT authentication, handle image/video uploads, and learn real-world patterns (logging, query/path parameters, status codes). If you've already got your Python chops and want to level up with a no-fluff, project-based approach, this is your roadmap.

Watch on YouTube: https://lnkd.in/dfFA6XR5
Day 56 – Raising Your Own Exceptions

Sometimes it's you who needs to throw the red flag 🚩 Use raise to enforce your own rules.

def withdraw(amount):
    if amount <= 0:
        raise ValueError("Withdrawal amount must be positive!")
    print(f"Withdrawing ${amount}")

withdraw(-50)

🧠 Output: ValueError: Withdrawal amount must be positive!

💡 Proactive exceptions prevent bigger bugs later. This is how real-world apps maintain data integrity and user safety.

👉 Where would you use a custom exception in your code?

#Python #Debugging #CleanCode #DeveloperLife
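One answer to the closing question: a domain-specific exception class. The sketch below is hypothetical (InsufficientFundsError is my invention, not from the post); the point is that a custom class lets callers catch this failure specifically and inspect its fields, instead of string-matching a generic ValueError:

```python
class InsufficientFundsError(Exception):
    """Raised when a withdrawal exceeds the available balance."""
    def __init__(self, requested, available):
        super().__init__(f"Requested ${requested}, only ${available} available")
        self.requested = requested
        self.available = available

def withdraw(balance, amount):
    if amount <= 0:
        raise ValueError("Withdrawal amount must be positive!")
    if amount > balance:
        # Domain rule violated -> raise the domain exception, with context attached.
        raise InsufficientFundsError(amount, balance)
    return balance - amount

try:
    withdraw(100, 250)
except InsufficientFundsError as e:
    # Caller can react precisely: suggest the maximum allowed amount, etc.
    print(e)            # Requested $250, only $100 available
    print(e.available)  # 100
```

Generic exceptions say *something* went wrong; custom ones say *what* went wrong and carry the data needed to recover.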
𝗘𝗡𝗚𝗜𝗡𝗘𝗘𝗥𝗜𝗡𝗚 𝗨𝗣𝗗𝗔𝗧𝗘: 𝟮𝟬𝟮𝟱𝗖𝗪𝟰𝟮

This week was mostly about migration, with the last pieces of the cluster service in progress. All the local and remote storage functions, web handling, WebSocket handling, and ignition functions now run in the Rust core. The package binary size went down to 4.3MB, and memory consumption at full load is 11MB. Not bad, down from 50MB. Next week's focus will be finalizing the cluster migration, which will also reduce the Python code size. The whole server now runs on about 30 LoC — all the functions come from the Rust core, which provides better memory safety and performance.

The second achievement of the week is tuning the inference speed in our Mac Studio environment that we use for local inference. 67 tokens/s at consumer-grade power consumption is a really good result while running the 120B-parameter GPT-OSS model. For Qwen3-Coding-32B we reached over 90 tokens/s.

The last topic is the WebAssembly development framework, which will hold all the previously developed JavaScript functions. This way the code is better protected and more performant. The PoC project worked as expected, so we can migrate the functions when we reach the UX development phase.
Let's start the new week with some of the important results of the past week — for those who missed the engineering updates.

One is about AI inference. Most people and companies just run inference without putting effort into inference tuning, yet with some adjustments the inference speed can be multiplied. Those with hardware engineering capabilities (that's where the real tech companies start) can even use specialized hardware for this, such as building a KV cache on FPGA cards with ultra-low latency. Running on hardware the dumb way is neither sustainable nor efficient.

The second point worth the audience's attention is the WebASM development, where both security and performance gains are possible for browser-based applications. For example, we will transform our original JavaScript functions into a secure Rust-based package, to ensure the next level of safety in our platform.

Last but not least, the importance of a lean mindset in development: optimizing resources not only makes our application more performant but also improves scalability. That's why we always put effort into shortening and securing data pipelines — the more a piece of data is exposed and handed over, the greater the chance of unwanted exposure.

#TechUpdate #WebASM #InferenceTuning #AIInference #LeanCoding https://lnkd.in/dfcszSrP
Computers Aren't Smart

Most people think computers are intelligent. As a developer, I know better.

I was building a file sorter and writing tests when I noticed something odd: the tests kept failing on "tar.gz" files but worked perfectly for everything else. The problem? I was using file.suffix to get the extension. It worked great for single extensions like .pdf or .jpg, but completely failed for compound extensions like .tar.gz. Why? Because:

file.suffix returns .gz (only the last part)
file.suffixes returns ['.tar', '.gz'] (all parts)

The fix was simple: "".join(file.suffixes)

But here's the real lesson: the computer did exactly what it was told to do. It wasn't wrong. I just failed to communicate my intent properly. Without those tests, this bug would have been invisible: files silently miscategorized, no errors thrown, just wrong behavior lurking in production.

This is why testing matters: not because computers make mistakes, but because we do. Computers are fast, precise, and literal. But smart? Never. They'll happily execute our misunderstandings at lightning speed. Our job is to close the gap between what we think we're asking for and what we're actually asking for.

#SoftwareDevelopment #Testing #Python #CodingLessons #DeveloperLife #automation
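The behavior above is easy to verify with pathlib; the `full_extension` helper name is mine, but `Path.suffix` and `Path.suffixes` behave exactly as the post describes:

```python
from pathlib import Path

archive = Path("backup.tar.gz")
print(archive.suffix)    # .gz            -- only the last part
print(archive.suffixes)  # ['.tar', '.gz'] -- all parts, in order

def full_extension(path):
    """Join every suffix so compound extensions like .tar.gz survive."""
    return "".join(Path(path).suffixes)

print(full_extension("backup.tar.gz"))  # .tar.gz
print(full_extension("report.pdf"))     # .pdf
```

One caveat worth a test of its own: `suffixes` treats every dot as a separator, so a file like "v1.2.3" reports ['.2', '.3'] — another case where the computer is literal, not smart.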
A few weeks ago, a client complained that their dashboard took 10+ seconds to load data from an external API. The backend logs showed nothing unusual — requests completed successfully, just painfully slow.

After profiling the system, the real issue became clear: the app was making multiple sequential API calls, waiting for each one to finish before starting the next. So even though each API took ~1s to respond, ten requests meant 10 seconds total delay.

Here's how we fixed it:
1. Introduced concurrency: used async requests (via Python's aiohttp / Node's Promise.all) to send all API calls at once.
2. Added a caching layer: stored repeated responses temporarily to avoid redundant API calls.
3. Set timeouts + graceful fallback: if one API slowed down, it wouldn't block the entire page — users still got partial results fast.

Result: load time dropped from 10.4s → 1.3s, and user retention went up by 18%.

Lesson: performance isn't always about the server or database. Sometimes, it's about how and when you ask for data.

#WebPerformance #APIOptimization #AsyncProgramming #SoftwareEngineering #BackendDevelopment #FullStackDevelopment #SystemDesign #TechLeadership
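Step 1 (concurrency) is the one that bought the 10x. Here is a self-contained sketch of the sequential-vs-concurrent difference using asyncio with a simulated 0.1s API call in place of real aiohttp requests — the `fetch` stub and timings are illustrative, not the client's actual code:

```python
import asyncio
import time

async def fetch(api_id):
    # Stand-in for one external API call (~0.1s); in the real fix this
    # would be an aiohttp request.
    await asyncio.sleep(0.1)
    return f"result-{api_id}"

async def sequential():
    # The original bug: each call waits for the previous one -> ~1.0s total.
    return [await fetch(i) for i in range(10)]

async def concurrent():
    # The fix: launch all ten calls at once and await them together -> ~0.1s total.
    return await asyncio.gather(*(fetch(i) for i in range(10)))

start = time.perf_counter()
results = asyncio.run(concurrent())
elapsed = time.perf_counter() - start
print(f"{len(results)} calls in {elapsed:.2f}s")
```

The total for the concurrent version is roughly the latency of the *slowest* call, not the *sum* of all of them — which is exactly why ten ~1s calls collapsed from 10.4s to 1.3s.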
Here's my latest side project: a Flask app for time series forecasting using ARIMAX & SARIMAX! It lets you:
> Choose your own stock.
> Tweak model parameters easily.
> Visualize forecasts instantly.
> Get forecasted close prices in tabular form.
> Pull historical stock prices with yfinance.

Built with Flask and statsmodels for the backend and Bootstrap for the frontend.

Try it out yourself here: https://lnkd.in/g_evsFev
Check out the GitHub repo: https://lnkd.in/gbEyPDNi

#Python #MachineLearning #TimeSeriesForecasting #DataScience #Flask #Trading #Investing
Today, I solved an interesting LeetCode problem called "Max Sum of a Pair With Equal Sum of Digits" (https://lnkd.in/gfE37qjM). It looks simple — until you realize it's actually two problems cleverly disguised as one. Let's break it down:

Problem 1 — Compute the Digit Sum
For each number, we first need its digit sum — a classic number-manipulation subproblem.
51 → 5 + 1 = 6
42 → 4 + 2 = 6
This step groups numbers by a common mathematical property — their digit sum.

Problem 2 — Find the Maximum Pair Sum in Each Group
Once we group numbers by their digit sum, the next challenge is to find two numbers with the same sum of digits that produce the largest total. This becomes a hash-map optimization problem:
Use a map to store the largest number seen for each digit sum.
For every new number, check if we already have one with the same digit sum.
If yes → compute the potential pair sum → update the maximum.

Approach Summary
Digit Sum Calculation: O(log n) per number
Hash Map Lookup: O(1) per operation
Overall Time: O(n), efficient and clean

Here is the solution below.

#LeetCode #JavaScript #Coding
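The post's attached solution is in JavaScript; here is the same single-pass hash-map approach sketched in Python (to match the rest of this feed), following the two-problem breakdown above:

```python
def maximum_sum(nums):
    """Max nums[i] + nums[j] (i != j) with equal digit sums, else -1."""
    best_for_digit_sum = {}  # digit sum -> largest number seen so far
    answer = -1
    for n in nums:
        # Problem 1: digit sum, O(log n) per number.
        s = sum(int(d) for d in str(n))
        # Problem 2: pair with the best previous number in this group, O(1).
        if s in best_for_digit_sum:
            answer = max(answer, best_for_digit_sum[s] + n)
            best_for_digit_sum[s] = max(best_for_digit_sum[s], n)
        else:
            best_for_digit_sum[s] = n
    return answer

print(maximum_sum([51, 42]))            # 93  (both have digit sum 6)
print(maximum_sum([18, 43, 36, 13, 7])) # 54  (18 + 36, digit sum 9)
print(maximum_sum([10, 12, 19, 14]))    # -1  (no two numbers share a digit sum)
```

Keeping only the single largest number per digit sum is the key optimization: any future number pairs best with that maximum, so no group ever needs more than one stored value.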
Flask: Production-Ready API in 6 Lines

app = Flask(__name__)

@app.route('/predict', methods=['POST'])
@jwt_required()
def predict():
    return jsonify(model.predict(request.json['data']))

Add JWT + Gunicorn + Nginx = secure, scalable. Deployed a fraud detection API serving 10K RPM.

#Flask #Python #API #Backend