Finally back after a long break. Built an async worker job queue where heavy HTTP requests are processed in the background, so users don't have to wait for one request to finish and can fire off multiple requests at once. Async workers make this possible: when a request comes in, a worker picks the job up from the database (PostgreSQL, Redis, etc.) and processes it in the background while the user sees a pending status. I also added retry logic with exponential back-off, meaning a failing request is retried by the workers after exponentially increasing delays, at most 3 times; if it still hasn't completed, it is moved to a dead-letter queue, where its error message can be inspected manually in the database. Full code: https://lnkd.in/gCEV3C7j #Python #FastAPI #AsyncIO #BackendDevelopment #WebDevelopment
Async Worker Job Queue with Retry Logic in Python
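The full implementation is at the link above; below is a minimal self-contained sketch of the same flow, with an in-memory asyncio.Queue standing in for PostgreSQL/Redis and illustrative job dicts. All names and delays here are hypothetical, not the repo's actual code.

```python
import asyncio
import random

MAX_RETRIES = 3  # after the 3rd failed attempt the job moves to the dead-letter queue


async def process(job: dict) -> None:
    """Stand-in for the heavy work (e.g. a slow outbound HTTP call)."""
    if random.random() < 0.3:  # simulate an occasional upstream failure
        raise RuntimeError("upstream timed out")
    await asyncio.sleep(0.1)


async def worker(queue: asyncio.Queue, dead_letter: list) -> None:
    """Background worker: pick up a pending job, run it, retry with back-off."""
    while True:
        job = await queue.get()
        try:
            await process(job)
            job["status"] = "done"
        except Exception as exc:
            job["attempts"] += 1
            if job["attempts"] >= MAX_RETRIES:
                job["status"] = "dead"
                job["error"] = str(exc)  # kept so the failure can be inspected later
                dead_letter.append(job)
            else:
                # exponential back-off: 1s after the first failure, 2s after the second
                await asyncio.sleep(0.5 * 2 ** job["attempts"])
                await queue.put(job)  # re-enqueue for another try
        finally:
            queue.task_done()


async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue()
    dead_letter: list = []
    workers = [asyncio.create_task(worker(queue, dead_letter)) for _ in range(3)]
    for i in range(10):
        await queue.put({"id": i, "attempts": 0, "status": "pending"})
    await queue.join()  # callers would see "pending" until their job finishes
    for w in workers:
        w.cancel()
    print("dead-lettered:", dead_letter)


asyncio.run(main())
```

In a real deployment the queue and the dead-letter list would be tables or Redis structures, so jobs survive a process restart.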
More Relevant Posts
1.2s → 85ms. The LATERAL join 90% of devs have never written. Four nested Python loops. One slow endpoint. I replaced all of it with one Postgres LATERAL join. Most backend devs don't know it exists. Full breakdown (7 min read) → https://lnkd.in/gyPiyjsQ #PostgreSQL #Database #SQL #BackendEngineering #DataEngineering
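For anyone who hasn't met LATERAL: it lets a subquery reference columns from the row currently being joined, which is exactly what replaces a per-row Python loop. A hedged sketch of the classic "top N per group" shape, run via asyncpg; the users/orders schema and the DSN are made up for illustration and are not taken from the article.

```python
import asyncio
import asyncpg

# One round trip for "latest 3 orders per user", instead of looping over
# users in Python and issuing one orders query per user.
QUERY = """
SELECT u.id, u.name, o.id AS order_id, o.total
FROM users u
LEFT JOIN LATERAL (
    SELECT id, total
    FROM orders
    WHERE orders.user_id = u.id   -- the lateral part: the subquery may reference u
    ORDER BY created_at DESC
    LIMIT 3                       -- top 3 orders per user
) o ON TRUE;
"""


async def main() -> None:
    conn = await asyncpg.connect("postgresql://localhost/mydb")  # hypothetical DSN
    rows = await conn.fetch(QUERY)
    for r in rows:
        print(dict(r))
    await conn.close()


asyncio.run(main())
```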
myflames v1.2.0 — now with MariaDB support! myflames is an open-source tool that visualizes MySQL EXPLAIN ANALYZE output as interactive flame graphs, bar charts, treemaps, diagrams, and execution trees. No dependencies, pure Python. What's new in v1.2.0: → Full MariaDB 10.11 and 11.4 support → Auto-detects MySQL vs MariaDB JSON format — zero configuration → ANALYZE FORMAT=JSON, SHOW ANALYZE FOR, and SHOW EXPLAIN FOR → All 5 visualization types work with both databases → Works with any mariadb CLI flag combination (-e, -N, -r, -s) https://lnkd.in/d9yEY34y Enjoy! #mysql #mariadb #readyset #acedirector
Problems I solved over the past week, part 1. THE CASE 🟡: JWTs are supposed to reduce the complexity and increase the speed of recognising a user on each request, cutting down database transactions per request. I inject the logged-in user into each endpoint route via a dependency that fetches the user instance from my database after proper JWT validation. THE PROBLEM 🔴: I still have to hit my Postgres database on every request to get the user and check whether their account still exists, whether it is still active, whether their role is valid, whether their data has changed, etc. That nearly defeats the purpose of JWTs. THE SOLUTION 🟢: I implemented caching with Redis, not just storing user data in the cache but renewing and invalidating it when necessary, to avoid stale data that breaks the intended business logic. This speeds up user requests and strategically reduces the load on my Postgres database: it is only hit for writes and occasional reads to refresh the user cache, not on every single request. How am I confident this is reliable? Simple: tests. Lots and lots of boring, verbose tests. How would you go about this? Please share. #python #typescript #fastapi #react #fullstack #backend
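A minimal sketch of that cache-aside pattern with redis-py's asyncio client; the key format, TTL, and the Postgres-lookup stub are hypothetical, not the author's actual code.

```python
import json

import redis.asyncio as redis

r = redis.Redis()  # local Redis; point at your deployment in practice
USER_TTL = 900     # seconds before a cached user must be re-read from Postgres


async def fetch_user_from_db(user_id: str) -> dict:
    """Stand-in for the real Postgres lookup done after JWT validation."""
    return {"id": user_id, "active": True, "role": "member"}


async def get_current_user(user_id: str) -> dict:
    """Cache-aside: serve the user from Redis, fall back to Postgres on a miss."""
    cached = await r.get(f"user:{user_id}")
    if cached is not None:
        return json.loads(cached)
    user = await fetch_user_from_db(user_id)
    await r.set(f"user:{user_id}", json.dumps(user), ex=USER_TTL)
    return user


async def invalidate_user(user_id: str) -> None:
    """Call after any write that changes the user (deactivation, role change, ...)
    so stale data never outlives the write."""
    await r.delete(f"user:{user_id}")
```

The invalidate-on-write call is what keeps the business logic honest; the TTL is only a safety net for anything invalidation misses.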
Cache is faster than a database… so why not store everything in cache? 🤔 I've been asked this many times, and today I came across a perfect analogy: why not keep everything in your pockets instead of using a backpack? Because: - your pockets are small - things fall out - it's a mess after a point - your pants may give up on life But yeah… it's faster! That's exactly what cache is. Fast, but limited. Convenient, but not reliable. So we use it for what matters most, not for everything. #SystemDesign #backend #caching #redis #python
I'm often asked how to handle edge cases when building data layers with MongoDB and Python. Simple CRUD is great, but real-world apps need robust query patterns and clean architecture. Working in VS Code on this project, I focused on layering logic. Instead of calling the database directly from the application layer, I used a modular service pattern (like user_service.py calling db_utils.py). A few key practices I implemented: ✅ Robust Error Handling: Ensuring a clean return for cases like invalid ObjectIds, which prevents app crashes. ✅ Modular Query Logic: Abstracting queries into specific, reusable functions (e.g., get_users_by_college) makes the main logic much easier to read and test. ✅ Automated Postman-Free Testing: In my terminal, you can see I'm using curl and echo to script a "Full CRUD Test Cycle." This is a fast, reproducible way to verify APIs during development. What's your go-to pattern for structuring database interactions in your applications? Do you stick with raw queries, ORMs, or custom data access objects? Let me know in the comments! GitHub link - > https://lnkd.in/dASzkj7T #mongodb #python #development #dataservices #vscode #backend #programming #softwareengineering
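A rough sketch of the layering described, using PyMongo. The layer and function names come from the post; the connection string, collection, and function bodies are guesses for illustration.

```python
from bson import ObjectId
from bson.errors import InvalidId
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
db = client["app"]


# db_utils.py layer: id validation and raw collection access live here
def find_user_by_id(user_id: str) -> dict | None:
    """Return the user document, or None for a missing or malformed id.
    Catching InvalidId keeps a bad URL parameter from crashing the app."""
    try:
        oid = ObjectId(user_id)
    except InvalidId:
        return None
    return db.users.find_one({"_id": oid})


# user_service.py layer: named, reusable query functions the app calls
def get_users_by_college(college: str) -> list[dict]:
    return list(db.users.find({"college": college}))
```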
I got tired of constantly copying and pasting similar code snippets, so I asked Gemini and Claude to help me organize everything—and ended up publishing it as a crate. For integration tests, you often just need a simple “connection” to Redis or PostgreSQL. But in practice, it gets tedious: isolating databases, setting up schemas, and repeating the same boilerplate over and over. It’s something I’ve implemented many times across Rust, Python, Go, and Java, using different frameworks and databases. This time, I decided to wrap it into a convenient helper for Rust. I hope it can be useful for the community! #rust #diesel #sqlx #redis #valkey #testcontainers #docker #tests Crate: https://lnkd.in/dX4Nn76a Repo: https://lnkd.in/d3cG2ZRY
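The crate itself is Rust, but the boilerplate it wraps looks the same in every stack. For comparison, here is a minimal version of the pattern in Python with the testcontainers package; the image tag, table, and assertions are illustrative, not the crate's API.

```python
import sqlalchemy
from testcontainers.postgres import PostgresContainer


def test_users_roundtrip():
    # One throwaway Postgres per test: an isolated database, a fresh schema,
    # no shared state between runs - exactly the setup a helper can hide.
    with PostgresContainer("postgres:16") as pg:
        engine = sqlalchemy.create_engine(pg.get_connection_url())
        with engine.begin() as conn:
            conn.execute(sqlalchemy.text(
                "CREATE TABLE users (id serial PRIMARY KEY, name text)"))
            conn.execute(sqlalchemy.text(
                "INSERT INTO users (name) VALUES ('alice')"))
            name = conn.execute(
                sqlalchemy.text("SELECT name FROM users")).scalar_one()
        assert name == "alice"
```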
I just published a piece on something I keep seeing in Python APIs: using SQLAlchemy by default, even when it's not needed. After working more directly with PostgreSQL, I started questioning this habit, because the database is not just storage: it's a core part of performance and system behavior. In many APIs, especially simple or performance-critical ones, I've found that: - ORM adds unnecessary abstraction - raw SQL gives better control over query shape - PostgreSQL features are easier to leverage directly and in some cases it actually improves performance due to lower overhead. So I wrote about: ->> when ORM makes sense ->> when it becomes overengineering ->> and why I prefer asyncpg + raw SQL in many cases. Do you stick with ORM everywhere, or go raw SQL when performance matters? https://lnkd.in/dzZ7xvCS #python #postgresql #fastapi #backend #softwareengineering
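For readers who haven't tried it, this is roughly what the asyncpg-plus-raw-SQL style looks like in practice; the DSN and users table are hypothetical.

```python
import asyncio
from datetime import datetime, timedelta, timezone

import asyncpg


async def main() -> None:
    # A connection pool instead of an ORM session factory
    pool = await asyncpg.create_pool("postgresql://localhost/mydb")  # hypothetical DSN
    since = datetime.now(timezone.utc) - timedelta(days=7)
    async with pool.acquire() as conn:
        # $1-style parameters keep raw SQL safe from injection; no identity
        # map, no unit of work, just the query you wrote
        rows = await conn.fetch(
            "SELECT id, email FROM users WHERE created_at > $1 "
            "ORDER BY created_at DESC",
            since,
        )
    for r in rows:
        print(r["id"], r["email"])
    await pool.close()


asyncio.run(main())
```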
🚨 Faced an interesting SQL issue recently while working with AWS Batch and Python. Queries started taking longer, and parallel jobs were getting blocked. It turned out to be due to a small setting: autocommit = false in pymssql. Wrote a quick blog on how this caused table locking and how we fixed it 👇 https://lnkd.in/d2YTE7aP Would love to hear if anyone faced something similar! #SQLServer #Python #LearningInPublic
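The blog has the full story; the gist, sketched below with hypothetical connection details, is that pymssql defaults to autocommit off, so statements run inside an implicit transaction whose locks are held until an explicit commit.

```python
import pymssql

conn = pymssql.connect(
    server="db.example.com", user="batch_user",
    password="placeholder", database="jobs_db",
)  # hypothetical credentials

# With autocommit off (the pymssql default), every statement joins an implicit
# transaction, and locks taken by uncommitted work are held until commit() -
# which is how parallel batch jobs can end up blocking each other.
conn.autocommit(True)  # or call conn.commit() promptly after each unit of work

cursor = conn.cursor()
cursor.execute("SELECT COUNT(*) FROM jobs")
print(cursor.fetchone()[0])
conn.close()
```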
Going Async with Redis Streams. Working on my queue processor, I moved the system to an asynchronous approach and integrated Redis Streams as the backend. What I implemented: • Refactored core logic using async/await • Used Redis Streams (XADD, XREADGROUP) for message handling • Built a processor for fetch → process → acknowledge flow • Added auto-creation of Streams and Consumer Groups • Exposed a FastAPI endpoint for publishing messages One challenge was ensuring messages are properly acknowledged so they are not lost during processing. It was good to see the end-to-end flow working as expected. Next step is to improve reliability with retry logic and dead-letter queues. GitHub: https://lnkd.in/gNAHVquX #python #redis #asyncio #systemdesign #fastapi #backendengineering
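The repo has the real implementation; here is a compact synchronous sketch of the same XADD → XREADGROUP → XACK flow with redis-py. The stream, group, and consumer names are illustrative.

```python
import redis

r = redis.Redis()
STREAM, GROUP, CONSUMER = "jobs", "workers", "worker-1"  # illustrative names

# Auto-create the stream and consumer group if they don't exist yet
try:
    r.xgroup_create(STREAM, GROUP, id="0", mkstream=True)
except redis.ResponseError as e:
    if "BUSYGROUP" not in str(e):  # group already exists: that's fine
        raise

r.xadd(STREAM, {"task": "send_email", "to": "user@example.com"})

# fetch -> process -> acknowledge
for _stream, messages in r.xreadgroup(GROUP, CONSUMER, {STREAM: ">"},
                                      count=10, block=5000):
    for msg_id, fields in messages:
        print("processing", msg_id, fields)
        # Do the work first, ack only on success: a crash mid-processing
        # leaves the entry in the pending list for redelivery.
        r.xack(STREAM, GROUP, msg_id)
```

Acknowledging only after the work succeeds is what makes the pending-entries list useful as a recovery mechanism.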
PydanTable 1.17.0 has been released, and MongoDB is now officially part of the story. This release introduces an optional MongoDB execution engine, allowing work to remain on the MongoDB database side when supported. This means you can materialize data only when it is actually needed in the application, rather than pulling full result sets into Python first. Additionally, this version adds integration with Beanie, a popular Python ODM (object-document mapper) for MongoDB built on Pydantic. If your application already models MongoDB documents with Beanie, PydanTable can seamlessly integrate with that layer, ensuring your document models and typed, table-shaped workflow remain aligned without the need for a parallel schema. For more details, check out the documentation and release notes: - PyPI: https://lnkd.in/ez4NZMjT - Documentation: https://lnkd.in/eV4RTqZQ - Repository: https://lnkd.in/eVpjrcRX #Python #Pydantic #MongoDB #DataEngineering #OpenSource