Problems I solved over the past week. Part 1.

THE CASE 🟡: JWTs are supposed to reduce the complexity and increase the speed of recognising a user on each request, and to cut database transactions per request. I inject the logged-in user into each endpoint route via a dependency that fetches the user instance from my database after proper JWT validation.

THE PROBLEM 🔴: I still have to hit my Postgres database on every request to check that the user's account still exists, that it is still active, that their role is valid, and to pick up changes to their data. That nearly defeats the purpose of JWTs.

THE SOLUTION 🟢: I implemented caching with Redis: not just storing user data in the cache, but renewing and invalidating entries when necessary, so stale data can't break the intended business logic. This speeds up user requests and strategically reduces the load on my Postgres database, which is now only hit for writes and the occasional read to refresh the cache, not on every single user request.

How am I confident this is reliable? Simple: tests. Lots and lots of boring, verbose tests.

How would you go about this? Please share. #python #typescript #fastapi #react #fullstack #backend
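A minimal sketch of the cache-aside pattern described above, assuming a redis-py-style client; the class name, key format, and TTL are illustrative, not the author's actual code:

```python
# Sketch only: cache user records in Redis, fall back to the database,
# and invalidate on writes. `USER_TTL` and the key scheme are assumptions.
import json

USER_TTL = 300  # seconds; bounds how stale a cached user may be

def user_key(user_id: str) -> str:
    return f"user:{user_id}"

class CachedUserStore:
    def __init__(self, redis_client, db_fetch):
        self.redis = redis_client   # any client exposing get/setex/delete
        self.db_fetch = db_fetch    # callable: user_id -> dict | None

    def get_user(self, user_id: str):
        """Serve from cache; on a miss, hit the database and repopulate."""
        cached = self.redis.get(user_key(user_id))
        if cached is not None:
            return json.loads(cached)
        user = self.db_fetch(user_id)
        if user is not None:
            self.redis.setex(user_key(user_id), USER_TTL, json.dumps(user))
        return user

    def invalidate(self, user_id: str):
        """Call after any write that changes the account, role, or active flag."""
        self.redis.delete(user_key(user_id))
```

In a FastAPI app, `get_user` would sit behind the auth dependency after JWT validation, and `invalidate` would be called from every write path that mutates the user, so the cache never serves a deactivated account.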
Solving JWT Complexity with Redis Caching
More Relevant Posts
-
Going Async with Redis Streams

Working on my queue processor, I moved the system to an asynchronous approach and integrated Redis Streams as the backend.

What I implemented:
• Refactored core logic using async/await
• Used Redis Streams (XADD, XREADGROUP) for message handling
• Built a processor for the fetch → process → acknowledge flow
• Added auto-creation of streams and consumer groups
• Exposed a FastAPI endpoint for publishing messages

One challenge was ensuring messages are properly acknowledged so they are not lost during processing. It was good to see the end-to-end flow working as expected. Next step is to improve reliability with retry logic and dead-letter queues.

GitHub: https://lnkd.in/gNAHVquX

#python #redis #asyncio #systemdesign #fastapi #backendengineering
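The fetch → process → acknowledge flow above can be sketched roughly like this, assuming a redis-py client is passed in; the stream, group, and consumer names are made up for illustration:

```python
# Sketch of a Streams consumer loop. Unacked entries stay in the pending
# entries list (PEL), so a crashed worker's messages can be reclaimed.
STREAM, GROUP, CONSUMER = "jobs", "workers", "worker-1"

def ensure_group(r):
    """Auto-create the stream and consumer group (XGROUP CREATE ... MKSTREAM)."""
    try:
        r.xgroup_create(STREAM, GROUP, id="0", mkstream=True)
    except Exception as e:  # redis-py raises a ResponseError with BUSYGROUP if it exists
        if "BUSYGROUP" not in str(e):
            raise

def process_batch(entries):
    """Pure processing step: return the entry ids that were handled successfully."""
    acked = []
    for entry_id, fields in entries:
        # ... real work on `fields` would go here ...
        acked.append(entry_id)
    return acked

def run_once(r):
    """One fetch -> process -> ack cycle; returns how many messages were acked."""
    resp = r.xreadgroup(GROUP, CONSUMER, {STREAM: ">"}, count=10, block=1000)
    handled = 0
    for _stream, entries in resp or []:
        for entry_id in process_batch(entries):
            r.xack(STREAM, GROUP, entry_id)  # only now does the entry leave the PEL
            handled += 1
    return handled
```

Acking only after processing succeeds is exactly what prevents the message loss the post mentions: anything not acked is redelivered to another consumer.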
-
Finally back after a long break.

I built an async worker job queue where heavy HTTP requests are processed in the background, so users don't have to wait for one task to finish and can submit multiple requests at once. When a request comes in, it is recorded in a store (PostgreSQL, Redis, etc.); async workers pick it up and process it in the background while the user is shown a pending status.

I also added retry logic with exponential back-off: a failing request is retried by the workers after an exponentially growing delay, at most 3 times. If it still hasn't succeeded, it is sent to a dead-letter queue, where its error message can be inspected manually in the database.

Full code: https://lnkd.in/gCEV3C7j

#Python #FastAPI #AsyncIO #BackendDevelopment #WebDevelopment
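The retry policy described above (exponential back-off, at most 3 attempts, then dead-letter) fits in one small function; the 2-second base delay is an assumed value, not taken from the linked repo:

```python
# Sketch of the retry/dead-letter decision. A worker would call this after
# each failure to decide whether to reschedule the job or park it.
BASE_DELAY = 2.0   # seconds; illustrative base for the back-off
MAX_RETRIES = 3

def next_action(attempt: int):
    """For failure number `attempt`, return ('retry', delay_seconds)
    while attempts remain, else ('dead_letter', None)."""
    if attempt <= MAX_RETRIES:
        return ("retry", BASE_DELAY * (2 ** (attempt - 1)))  # 2s, 4s, 8s
    return ("dead_letter", None)
```

The worker would store the attempt count alongside the job row, so the back-off survives worker restarts.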
-
🚀 Beyond the Cache: Building a Durable Redis Clone from Scratch

I recently challenged myself to go "under the hood" of one of the most popular databases in the world. I'm excited to share that I've successfully built a Redis-compatible in-memory database from scratch using Python! While many use Redis as a simple black box, building it taught me the intricate balance between high-speed volatile storage and reliable disk persistence.

🛠️ The Architecture Breakdown

I designed the system around a single-threaded event loop to handle concurrent client connections without the overhead of heavy threading. Here's what's happening inside:

🔹 Networking & Protocol: Using Python's socket and select modules, I implemented a client connection handler that multiplexes requests from multiple telnet sessions over port 6379.

🔹 Command Processing: A central command handler validates and routes commands, ensuring they follow the logic expected by a standard Redis client.

🔹 Intelligent Storage: I built a data store that manages a key-value engine alongside an expiration manager. It uses a hybrid strategy (lazy + active expiration) to ensure expired keys don't sit in memory forever.

🔹 Persistence Manager: Implemented an AOF (append-only file) mechanism with configurable fsync policies (always, everysec, no) and background AOF rewriting to ensure data survives a crash.

💡 Key Takeaway: Building this from the ground up gave me a deep appreciation for non-blocking I/O and the complexity of ensuring data atomicity.

📂 Check out the project here: https://lnkd.in/gvbUjcji

#SoftwareEngineering #Redis #Python #BackendDevelopment #SystemsDesign #DatabaseInternal #LearningByBuilding #Persistence
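A toy version of the hybrid lazy + active expiration idea might look like this; the class and method names are mine, not from the project (real Redis likewise samples volatile keys in its active cycle):

```python
# Sketch of lazy + active expiration. Lazy: evict on access when a key's
# deadline has passed. Active: periodically sample keys with TTLs and evict
# expired ones, so untouched keys don't linger forever.
import random
import time

class ExpiringStore:
    def __init__(self, clock=time.monotonic):
        self.data = {}      # key -> value
        self.expires = {}   # key -> absolute deadline in seconds
        self.clock = clock  # injectable for testing

    def set(self, key, value, ttl=None):
        self.data[key] = value
        if ttl is not None:
            self.expires[key] = self.clock() + ttl
        else:
            self.expires.pop(key, None)  # plain SET clears any old TTL

    def get(self, key):
        """Lazy expiration: evict on access if the deadline has passed."""
        deadline = self.expires.get(key)
        if deadline is not None and self.clock() >= deadline:
            self._evict(key)
        return self.data.get(key)

    def active_sweep(self, sample_size=20):
        """Active expiration: sample volatile keys, evict the expired ones."""
        now = self.clock()
        keys = list(self.expires)
        for key in random.sample(keys, min(sample_size, len(keys))):
            if now >= self.expires[key]:
                self._evict(key)

    def _evict(self, key):
        self.data.pop(key, None)
        self.expires.pop(key, None)
```

The injected clock is what makes the two paths testable without sleeping; the real server would run `active_sweep` from its event loop between client reads.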
-
🗓️ Release Notes — April 27, 2026

🔎 Span attribute filtering across the stack
Pass attribute filters from the Python client, TypeScript client, REST API, or CLI. Type-aware (int/float/bool/str match their stored types). Filters are ANDed together. https://lnkd.in/ecJ-PQK9

🔐 Secrets settings page
Admins can add, replace, delete, and search encrypted LLM provider credentials (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.) directly in the UI—no REST calls. https://lnkd.in/eynXBPT2

🧪 Claude Opus 4.7 in the Playground
https://lnkd.in/gwuzYJTk

🧬 `trace_id` in experiment evaluators
Add a `trace_id` kwarg to any evaluator and Phoenix passes the originating trace ID for each run. Works sync/async, function- or class-based. Useful for trajectory evals. https://lnkd.in/e6_resgS

☁️ Azure Managed Identity for PostgreSQL
Connect Phoenix to Azure Database for PostgreSQL with Entra managed identity—no static DB password required. https://lnkd.in/euepx9-9

📝 CLI span notes
Add notes via `px span add-note <span-id> --text "..."`, and include notes with `--include-notes` on `px span list` / `px trace get`. https://lnkd.in/euzV22dy

📌 Full release notes
https://lnkd.in/eFvGJ_Cy
-
⚡ Connection Pooling in FastAPI with PostgreSQL (Why it matters)

When I started building APIs with FastAPI + PostgreSQL, I made a common mistake 👇
👉 Opening a new database connection for every request

It worked… until traffic increased 😅

❌ Problem:
Too many open connections
Slower response times
Database overload

💡 Solution: Connection Pooling
Instead of creating new connections every time, we reuse a pool of existing connections.

✅ Benefits:
✔ Faster API responses
✔ Better resource management
✔ Handles high traffic efficiently

🔧 Example (SQLAlchemy):

from sqlalchemy import create_engine

engine = create_engine(
    "postgresql://user:password@localhost/db",
    pool_size=10,
    max_overflow=20,
    pool_timeout=30,
)

💡 What I learned: If you're building production APIs with FastAPI, connection pooling is not optional — it's essential.

🚀 Next step: Combining this with async DB handling for even better performance

#FastAPI #PostgreSQL #Backend #Python #APIs #WebDevelopment
-
If all you want is a working Binance DepthCache, many setups are either too fragile, too limited, or too much work to get running. That was one of the reasons I built UBDCC. The idea was simple:

- install it fast
- start it locally
- create correctly synchronized DepthCaches
- replicate them for failover if needed
- query them over REST from whatever stack you use
- use the dashboard as an additional operational interface, not as a requirement

I wrote a practical quickstart here: https://lnkd.in/d3Q57ypu

#Binance #Python #AlgoTrading #SystemDesign #OpenSource
-
Part 1 was about the infra. This is Part 2: what I learned once the agents were actually running.

Honestly, the hardest bugs weren't in the model. They were in the plumbing around it.

Early on I had agents passing messages to each other. By the time a result reached the final node, nobody could tell where it came from or why. I ripped that out and replaced it with a single shared state object, a TypedDict that every agent reads from and writes to. That one change made debugging go from impossible to merely hard.

Memory was harder than I expected. I assumed I could just stuff everything into the context window and call it done. I ended up with three layers: in-context for the current task, Redis for session state that needed to survive across turns, and a vector DB for long-term retrieval. Each agent has a router that decides which layer to hit.

I also started treating prompts like code. Every agent has its own system prompt, versioned in Git, reviewed in PRs, tested before deploy. A prompt is just another file. I don't know why it took me this long to think about it that way.

The last thing, and maybe the most underrated, is the Postgres checkpointer. When an agent workflow fails at step 14 of 20, it doesn't restart from zero. It picks up at step 14. That alone has saved me more times than I can count.

If you want to talk through the architecture, DMs are open.

#AgenticAI #LangGraph #Python #AWS #AIEngineering #MLOps #SystemDesign #Terraform #Pinecone #Redis #LLMOps #RAG
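The single-shared-state idea can be sketched with a plain TypedDict; the field names and the provenance-tracking merge function are illustrative, not the LangGraph API itself:

```python
# Sketch: one shared state dict that every agent reads from and writes to,
# with a provenance list so you can always tell which agent wrote what.
from typing import TypedDict

class AgentState(TypedDict, total=False):
    task: str
    research_notes: list[str]
    draft: str
    provenance: list[str]   # ordered record of which agents touched the state

def apply_agent(state: AgentState, name: str, updates: AgentState) -> AgentState:
    """Merge one agent's output into the shared state, recording who wrote it."""
    merged: AgentState = {**state, **updates}
    merged["provenance"] = state.get("provenance", []) + [name]
    return merged
```

Because every hop goes through one merge point, a bad value at the final node is traceable by replaying the provenance list instead of spelunking through message passing.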
-
I hid a Lua script inside a DLL. Here's why.

My distributed rate limiter needed a Lua script to run atomic operations in Redis. The naive approach: hardcode it as a C# string literal.

private const string LuaScript = @"
    local current = redis.call('GET', KEYS[1])
    ...
";

It works. But it's terrible to maintain. No syntax highlighting. No IDE support. No separate version history. One big ugly string sitting in the middle of your C# class.

So I did something different.

I created a separate sliding-window.lua file. Proper syntax highlighting in the IDE. Its own version history in Git. Readable, editable, testable in isolation. Then I embedded it directly into the DLL as an assembly resource.

var assembly = Assembly.GetExecutingAssembly();
var resourceName = "RateLimiter.Redis.sliding-window.lua";
using var stream = assembly.GetManifestResourceStream(resourceName);
using var reader = new StreamReader(stream);
_luaScript = reader.ReadToEnd();

The script is read once in the constructor. Cached as a string for the lifetime of the RedisRateLimiter instance. Zero file dependencies at runtime. Zero performance overhead after startup.

The result:
→ Readable .lua file during development
→ Proper IDE support and syntax highlighting
→ Separate Git history for the script
→ Compiled into the assembly at build time
→ No loose files to manage at runtime
→ No file path issues across environments

Embedded resources are one of the most underused patterns in .NET. When you have a file that's part of your logic — not configuration — embedding it in the assembly is the right call.

Have you used embedded resources in your projects? What for? 👇

Part 7 of my rate limiter build series — follow for more.

#dotnet #csharp #redis #dotnetdeveloper #backend #aspnetcore #softwaredevelopment
-
Just implemented a Celery-based background processing layer for a Django real-time messaging system. This layer handles critical async operations that shouldn't block user requests:

• Syncing unread message counters from Redis → PostgreSQL
• Ensuring message state durability beyond the in-memory cache
• Retry-safe background execution with Celery task retries
• Foundation for offline message delivery (push notifications)
• Cleanup hooks for stale real-time states like "typing…" indicators

The key design idea: Redis handles speed, Celery ensures reliability, and the database remains the source of truth. This separation is what makes real-time systems scalable without losing consistency under load.

#Django #Celery #BackendEngineering #SystemDesign #Python #SoftwareEngineering #Scalability
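The Redis → PostgreSQL counter sync can be sketched as a pure function that a Celery task would wrap; names are illustrative, `getdel` assumes Redis ≥ 6.2 with decode_responses=True, and production code would iterate with SCAN rather than KEYS:

```python
# Sketch of the counter sync. The pure function takes the Redis client and a
# DB-write callable, so the Celery wiring (a task calling this with retries)
# stays separate from the logic.
def sync_unread_counters(redis_client, db_update):
    """Move per-user unread counters from Redis into the database.

    Each counter is claimed with an atomic read-and-clear (GETDEL) before the
    DB write. Note this is only a sketch: a hardened version would re-add the
    claimed count to Redis if db_update fails, so no increments are lost.
    """
    synced = {}
    for key in redis_client.keys("unread:*"):     # production: SCAN, not KEYS
        count = redis_client.getdel(key)          # atomic read-and-clear
        if count is None:
            continue                              # raced with another worker
        user_id = key.split(":", 1)[1]
        db_update(user_id, int(count))            # e.g. UPDATE ... SET unread = unread + %s
        synced[user_id] = int(count)
    return synced
```

The atomic claim is what keeps a Celery retry from double-counting: a re-run simply finds the already-cleared keys gone.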
-
I reduced my API response time from 2.3s to 140ms. No Redis. No CDN. No caching layer. Just 4 changes to my Django REST Framework setup that most tutorials never mention.

1. N+1 queries everywhere. My serializer accessed post.author.name on every row. 100 posts = 101 database queries. One select_related('author') brought it down to 1. Response time: 2.3s to 800ms instantly.

2. Using ModelSerializer for read endpoints. ModelSerializer builds fields dynamically on every request. It's up to 377x slower than raw Python dicts. Switched read-only endpoints to serializers.Serializer with explicit fields. Another 40% gone.

3. No pagination on list endpoints. Returning the entire table. 10,000 rows. Every request. Added CursorPagination: constant-time queries regardless of dataset size. OFFSET-based pagination breaks at high page numbers. Cursor doesn't.

4. Fetching fields I never used. The serializer returned 15 fields. The frontend used 6. Added .only() and trimmed the serializer.

2.3s to 140ms. Same server. Same database. Same $12/month VPS. The bottleneck was never my infrastructure. It was my code.

Run queryset.explain(analyze=True) on your slowest endpoint. You'll probably find the same mistakes. Which of these have you tried?

#Django #Python #API #WebPerformance #BuildInPublic
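Why cursor pagination stays flat while OFFSET degrades can be shown with a pure-Python model; the Django equivalents are noted in comments, and keep in mind that in the database it's the index on the cursor column, not a Python filter, that makes the cursor cheap:

```python
# Illustrative model of the two pagination strategies.
def offset_page(rows, page, size):
    # OFFSET-style: the database still walks and discards the skipped rows,
    # so cost grows with the page number. (DRF: PageNumberPagination)
    return rows[(page - 1) * size : page * size]

def cursor_page(rows, after_id, size):
    # Cursor-style: WHERE id > after_id ORDER BY id LIMIT size hits an index
    # and reads only `size` rows, whatever the table size.
    # (DRF: rest_framework.pagination.CursorPagination with ordering="id")
    return [r for r in rows if r["id"] > after_id][:size]
```

Both return the same pages on a stable dataset; the difference is that the cursor query's work is bounded by the page size, while OFFSET's work is bounded by how deep you paginate.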