𝗠𝗼𝘀𝘁 𝘀𝘆𝘀𝘁𝗲𝗺𝘀 𝗱𝗼𝗻’𝘁 𝗳𝗮𝗶𝗹 𝗯𝗲𝗰𝗮𝘂𝘀𝗲 𝗼𝗳 𝗰𝗼𝗱𝗲. 𝗧𝗵𝗲𝘆 𝗳𝗮𝗶𝗹 𝗯𝗲𝗰𝗮𝘂𝘀𝗲 𝗼𝗳 𝗵𝗼𝘄 𝘁𝗵𝗲𝘆 𝗵𝗮𝗻𝗱𝗹𝗲 𝗰𝗼𝗻𝗰𝘂𝗿𝗿𝗲𝗻𝗰𝘆.

When building backend systems, we usually evolve like this:
👉 Single thread
👉 Multi-threading
👉 And now… Virtual Threads

💡 Let’s simplify:

🧵 Thread (Single)
• One task at a time
• Simple, predictable
❌ Slow under load

🧵🧵 Multi-thread (Platform Threads)
• Parallel execution
• Better performance
❌ Heavy memory
❌ Context switching cost
❌ Limited scalability

⚡ Virtual Threads (Java 21)
• Lightweight (JVM managed)
• Thousands can run efficiently
✔ Great for I/O-heavy systems
✔ Simpler code (less async complexity)
✔ High scalability

🔥 What actually changes?
👉 Same workload
👉 Completely different scalability

In real systems:
• 1000 users → platform threads = resource pressure
• 1000 users → virtual threads = manageable

⚠️ Not a silver bullet
✔ Best for I/O-bound workloads
❌ Not ideal for CPU-heavy tasks

🧠 Key Insight
Concurrency model isn’t an implementation detail. It’s a scalability decision.

Curious:
👉 Have you tried virtual threads in production yet?

#Java #VirtualThreads #Concurrency #BackendEngineering #SystemDesign #Scalability #SoftwareEngineering
Java Virtual Threads for Scalability
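The claim above can be sketched with plain JDK 21 APIs. The snippet below (an illustrative sketch, not a benchmark) submits 10,000 blocking tasks to a virtual-thread-per-task executor; each sleep parks a cheap virtual thread instead of pinning an OS thread, which is why I/O-heavy workloads scale so differently:

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// 10,000 blocking tasks on virtual threads (Java 21+).
// Each task sleeps to simulate I/O; the JVM parks the virtual thread
// cheaply instead of holding 10,000 OS threads.
public class VirtualThreadDemo {
    public static void main(String[] args) {
        AtomicInteger completed = new AtomicInteger();
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(Duration.ofMillis(10)); // simulated I/O wait
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    completed.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        System.out.println("completed=" + completed.get());
    }
}
```

Note that `Executors.newVirtualThreadPerTaskExecutor()` and the auto-closing `ExecutorService` both require a recent JDK (21 for the final feature).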
More Relevant Posts
🔐 Optimistic vs Pessimistic Locking - When to Use What?

Concurrency control is a critical part of building reliable systems, especially in high-traffic, distributed applications. Two common strategies are optimistic locking and pessimistic locking, each with its own trade-offs.

👉 Optimistic Locking
Assumes conflicts are rare. Instead of locking data upfront, it allows multiple transactions to proceed and checks for conflicts before committing (usually via a version or timestamp).
✔️ High performance & scalability
✔️ No blocking of reads/writes
❗ Requires retry logic on conflict

👉 Pessimistic Locking
Assumes conflicts are likely. It locks the data at the start of a transaction to prevent others from modifying it.
✔️ Strong consistency
✔️ No retry overhead
❗ Can cause blocking, deadlocks, and reduced throughput

💡 When to use what?
Use optimistic locking in high-read, low-conflict systems (e.g., microservices, REST APIs).
Use pessimistic locking when data integrity is critical and conflicts are frequent (e.g., financial transactions).

🚀 In modern cloud-native systems, optimistic locking is often preferred due to better scalability, but the right choice always depends on your use case.

#SoftwareEngineering #Java #SpringBoot #Microservices #SystemDesign #BackendDevelopment
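To make the optimistic pattern concrete, here is a minimal in-memory sketch in Java. The names are illustrative, and `compareAndSet` stands in for the database's `UPDATE ... WHERE version = ?` check; a real implementation would use your ORM's version column instead:

```java
import java.util.concurrent.atomic.AtomicReference;

// Optimistic locking on an in-memory record: a write succeeds only if the
// version it read is still current; otherwise the caller retries.
public class OptimisticLockDemo {
    record Account(int version, long balance) {}

    static final AtomicReference<Account> STORE = new AtomicReference<>(new Account(0, 100));

    // Retry loop: on conflict, re-read and re-apply the update.
    static void deposit(long amount) {
        while (true) {
            Account current = STORE.get();
            Account updated = new Account(current.version() + 1, current.balance() + amount);
            // compareAndSet plays the role of "UPDATE ... WHERE version = ?"
            if (STORE.compareAndSet(current, updated)) return;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[8];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> { for (int j = 0; j < 1000; j++) deposit(1); });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        System.out.println(STORE.get()); // Account[version=8000, balance=8100]
    }
}
```

The retry loop is the cost the post mentions: under heavy conflict it spins, which is exactly when pessimistic locking starts to win.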
Why your backend suffers from race conditions (and you don't even see it) ⚠️

Node.js feels "safe": single-threaded, async/await - everything seems predictable. But race conditions don't disappear. They just become less obvious and much harder to detect. 🕵️♂️

A typical example: two parallel requests read the same value from the database, both modify it, and write it back. The result? Lost updates. No errors, no crashes - just silently corrupted data. 🤫

The tricky part is that concurrency doesn't live in threads - it lives in access to shared resources: databases, caches, queues, even external APIs. Meanwhile, your code still looks "clean" and correct during review. ✅

Worse, these issues often don't show up locally. Everything works fine until real traffic hits. Under load, your system starts behaving inconsistently, and debugging becomes a nightmare. 😵💫

I've only caught these bugs in production - when things started acting "randomly" and metrics didn't make sense. 📊

Solutions? Transactions, locks, idempotency, queues, and careful system design. But tools won't save you if you ignore the core issue.

If you don't have a clear strategy for handling concurrency, chances are you already have race conditions - you just haven't seen them yet. 👀
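The lost-update hazard is language-agnostic, so here is the same read-modify-write pattern sketched in Java (standing in for the Node.js-plus-database case), alongside one atomic fix:

```java
import java.util.concurrent.atomic.AtomicLong;

// Two workers increment the same counter 100,000 times each.
// The plain field does read -> add -> write in three steps, so concurrent
// updates can overwrite each other; the AtomicLong never loses one.
public class LostUpdateDemo {
    static long unsafeCounter = 0;                          // racy read-modify-write
    static final AtomicLong safeCounter = new AtomicLong(); // atomic update

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeCounter++;               // updates can be silently lost
                safeCounter.incrementAndGet(); // single atomic step
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start(); a.join(); b.join();
        System.out.println("unsafe=" + unsafeCounter + " safe=" + safeCounter.get());
    }
}
```

The `unsafe` result typically varies between runs and falls short of 200,000: no error, no crash, just silently wrong data, exactly as described above.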
Here’s the uncomfortable truth about concurrency bugs: the dangerous ones aren’t written by developers who don’t know the tools. They’re written by developers who think they do.

A developer who knows nothing about threading writes single-threaded code. Slow. But correct.

A developer who knows a little adds locks without a complete mental model. Concurrent. And silently wrong.

A developer with a complete mental model writes code that is concurrent and correct.

15 weeks. 15 blogs. Every Sunday. Built to get you to that third kind of developer.

Blog 0 is live 👇
Subscribe: https://lnkd.in/ghXTFsE6
Stop building simple CRUDs, start solving real-world problems!

I’m excited to share my latest backend project — GlobalPay, a production-minded payment engine built with Java 21 and Spring Boot 3.4.

Moving money is never just about updating a database record. It’s about handling the "hard parts" of distributed systems. In this project, I focused on:

🔹 Concurrency Safety: Protecting balance integrity using Pessimistic Locking to prevent race conditions during simultaneous transfers.
🔹 Idempotency: Implementing a two-layer protection (Redis + Database) to ensure Exactly-Once processing and prevent double-spending.
🔹 Reliability: Achieving high test coverage (80% overall, 100% on business logic) using Testcontainers for real-world integration testing.
🔹 Observability: Setting up a full monitoring stack with Prometheus and Grafana to track JVM health and system performance.

This project is a robust MVP payment core designed with a production-first mindset, solving challenges that real fintech teams face every day.

Check out the full journey:
💻 GitHub (Code & Docs): https://lnkd.in/deMRijck
📽 YouTube (7 min. demo): https://lnkd.in/dxwkrzeW

I’m always open to technical discussions, feedback, or networking with fellow engineers. Let’s connect! 🤝

#Java #SpringBoot #Backend #Fintech #SystemDesign #SoftwareEngineering #Redis #PostgreSQL #OpenSource
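As a hedged illustration of the concurrency-safety bullet (an in-memory sketch, not the project's actual JPA pessimistic-locking implementation), here are per-account locks acquired in a fixed id order, which prevents both lost updates and deadlock between opposite transfers:

```java
import java.util.concurrent.locks.ReentrantLock;

// Lock-per-account transfers. Locks are always taken in ascending id order,
// so transfer(a, b) and transfer(b, a) can never deadlock each other.
public class TransferDemo {
    static class Account {
        final int id;
        long balance;
        final ReentrantLock lock = new ReentrantLock();
        Account(int id, long balance) { this.id = id; this.balance = balance; }
    }

    static void transfer(Account from, Account to, long amount) {
        Account first = from.id < to.id ? from : to;   // fixed global lock order
        Account second = from.id < to.id ? to : from;
        first.lock.lock();
        try {
            second.lock.lock();
            try {
                if (from.balance >= amount) {
                    from.balance -= amount;
                    to.balance += amount;
                }
            } finally { second.lock.unlock(); }
        } finally { first.lock.unlock(); }
    }

    public static void main(String[] args) throws InterruptedException {
        Account a = new Account(1, 1000), b = new Account(2, 1000);
        Thread t1 = new Thread(() -> { for (int i = 0; i < 500; i++) transfer(a, b, 1); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 500; i++) transfer(b, a, 1); });
        t1.start(); t2.start(); t1.join(); t2.join();
        System.out.println("total=" + (a.balance + b.balance)); // money is conserved
    }
}
```

A database's `SELECT ... FOR UPDATE` gives the same exclusion across processes, which is why the project reaches for pessimistic row locks rather than JVM locks.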
🏗️ I built 10+ microservices that deliberately never talk to each other.

Sounds counterintuitive. Here's why it's the smartest thing I did at LiveFx Hub.

When we started building the trading platform, the temptation was to have services share a database — it's faster to build, easier to query. We didn't. Here's what we did instead:

→ Database-per-service architecture
Every service owns its data. No shared schema. No one service can corrupt another's state.

→ Redis Pub/Sub for communication
Services emit events. Other services listen. Zero direct API chaining.

→ Fault isolation by design
If the IB commission service goes down — trading keeps running. Users never notice.

The result?
✅ ~70% reduction in system downtime
✅ Independent deployments with zero fear
✅ Each service scaled exactly as needed

Most outages in distributed systems aren't technical failures. They're architecture failures. Design for independence. Build for resilience.

🔧 What's your approach to inter-service communication — REST, events, or something else? Drop it below 👇

#Microservices #SystemDesign #DistributedSystems #BackendEngineering #SoftwareArchitecture #Redis #Python #Fintech #TradingTech #BuildInPublic
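The decoupling idea can be shown with a toy in-process event bus (illustrative names, standing in for Redis Pub/Sub): publishers never know who listens, so a dead or missing subscriber cannot break them.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Minimal topic-based event bus: subscribe registers a handler,
// publish fans the payload out to whoever is listening (possibly no one).
public class EventBus {
    private final Map<String, List<Consumer<String>>> subscribers = new ConcurrentHashMap<>();

    public void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(handler);
    }

    public void publish(String topic, String payload) {
        subscribers.getOrDefault(topic, List.of()).forEach(h -> h.accept(payload));
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        StringBuilder audit = new StringBuilder();
        bus.subscribe("trade.executed", p -> audit.append("audit:").append(p));
        bus.publish("trade.executed", "T-42"); // delivered to the audit listener
        bus.publish("commission.due", "C-7");  // no listener: publisher unaffected
        System.out.println(audit);
    }
}
```

With Redis Pub/Sub the same shape holds across processes, plus the fault isolation described above: the publisher's call succeeds whether or not any subscriber is alive.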
🚀 **Mastering REST API Concurrency – A Must for Scalable Systems**

Handling multiple requests at the same time isn’t just a backend concern — it’s a **core requirement for building reliable and high-performance APIs**.

In real-world applications, concurrency can lead to serious challenges like:
⚠️ Lost updates
⚠️ Race conditions
⚠️ Data inconsistency
⚠️ System overload

💡 The solution? Smart concurrency control strategies:
🔹 **Optimistic Locking (ETag / If-Match)** – Prevent overwriting changes
🔹 **Pessimistic Locking** – Lock resources during updates
🔹 **Versioning** – Track changes with versions
🔹 **Idempotent APIs** – Ensure safe retries (PUT, DELETE)

📊 Also, understanding key HTTP status codes is crucial:
✔️ 200 OK – Success
✔️ 409 Conflict – Version mismatch
✔️ 412 Precondition Failed – ETag mismatch
✔️ 429 Too Many Requests – Rate limit exceeded
✔️ 503 Service Unavailable – Server overload

✅ **Best Practices to Follow:**
• Implement ETag-based updates
• Use proper version control
• Design idempotent APIs
• Apply rate limiting & throttling
• Monitor and log concurrency issues

👉 Building APIs isn’t just about endpoints — it’s about **consistency, reliability, and performance at scale**.

#RESTAPI #BackendDevelopment #SystemDesign #Microservices #Concurrency #SoftwareEngineering #APIDesign #TechLeadership #Java #python #Nodejs
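A framework-free sketch of the ETag / If-Match flow described above, with the status codes returned as plain ints (illustrative, not a real HTTP stack):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Conditional update: the client echoes the ETag it read via If-Match;
// the server rejects the write with 412 if the resource has moved on.
public class EtagUpdateDemo {
    record Resource(String body, int version) {}
    static final Map<String, Resource> DB = new ConcurrentHashMap<>();

    // Returns an HTTP-style status: 200 on success, 412 on ETag mismatch.
    static synchronized int update(String id, String ifMatch, String newBody) {
        Resource current = DB.get(id);
        String etag = "\"" + current.version() + "\"";
        if (!etag.equals(ifMatch)) return 412;            // 412 Precondition Failed
        DB.put(id, new Resource(newBody, current.version() + 1));
        return 200;                                       // 200 OK
    }

    public static void main(String[] args) {
        DB.put("doc", new Resource("v0", 0));
        System.out.println(update("doc", "\"0\"", "v1")); // matched: write accepted
        System.out.println(update("doc", "\"0\"", "v2")); // stale ETag: rejected
    }
}
```

The `synchronized` keyword makes the compare-and-write atomic within one process; a database does the same job with a versioned `UPDATE` across processes.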
Most of us build backend projects where the database is just… there. Call → query → response.

But one question changed how I look at backend systems: what actually happens when 100s of requests need the database at the same time?

Opening a new connection per request feels simple — until you realize it’s one of the most expensive operations in a system. That’s where concepts like "connection pooling" come in. Not as a library feature, but as a design decision:
• reuse instead of recreate
• limit instead of overload
• coordinate instead of collide

I explored this by building a small "BANKING STYLE SYSTEM" where operations like deposit, withdrawal, and transfer run inside real transactions — while sharing a limited pool of connections under concurrent load.

Thinking in terms of:
• bounded resources
• thread safety
• wait vs fail strategies
• transaction boundaries
completely shifts how you design backend systems.

It also explains why production systems rely on tools like HikariCP — not because they’re convenient, but because they solve hard problems around concurrency and resource management.

Lately, I’ve been exploring these ideas by building small systems around them, and it made one thing clear: good backend engineering is less about writing endpoints, and more about managing what happens under load.

Curious — what backend concept changed the way you think about system design?

GitHub - https://lnkd.in/gBU6dkwY

#BackendDevelopment #SystemDesign #Concurrency #Java
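The "reuse, limit, coordinate" idea can be sketched with a bounded blocking queue. This is an illustrative toy, not HikariCP; real pools add connection validation, leak detection, and metrics:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Bounded "connection" pool: connections are created once and reused;
// acquire waits up to a timeout when the pool is exhausted ("wait vs fail").
public class PoolDemo {
    private final BlockingQueue<String> pool;

    PoolDemo(int size) {
        pool = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) pool.add("conn-" + i); // create once, reuse forever
    }

    String acquire(long timeoutMs) throws InterruptedException {
        String conn = pool.poll(timeoutMs, TimeUnit.MILLISECONDS);
        if (conn == null) throw new IllegalStateException("pool exhausted"); // fail strategy
        return conn;
    }

    void release(String conn) { pool.add(conn); }

    public static void main(String[] args) throws InterruptedException {
        PoolDemo pool = new PoolDemo(2);
        String c1 = pool.acquire(100);
        String c2 = pool.acquire(100); // pool now empty
        pool.release(c1);              // freed connection becomes reusable
        System.out.println(pool.acquire(100));
    }
}
```

The bound is the whole point: it converts "unlimited connections until the database falls over" into a deliberate wait-or-fail decision at the application edge.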
While fixing a backend system recently, I noticed something interesting: an API that should’ve been fast was consistently slow.

Instead of jumping to solutions, I started with logs. I broke down the response time and found that ~60% of the latency was coming from database queries.

That immediately told me: this isn’t a caching or scaling problem. It’s a query problem. So I dug deeper:
• Logged query execution time
• Ran EXPLAIN ANALYZE
• Found inefficient scans and missing indexes

The database was doing way more work than needed.

Fix:
• Optimized the query
• Added proper indexing
• Reduced unnecessary data fetching

Result:
🚀 Significant drop in response time
🚀 Better system efficiency without adding extra infrastructure

That experience reinforced something I now always follow: don’t optimize blindly. Measure → Identify → Fix the real bottleneck.

My approach now is simple:
• Analyze logs first
• Break latency into DB / API / Processing
• Fix the actual bottleneck, not symptoms

A lot of times, we jump to Redis, async, or scaling… but the real problem is often much simpler and deeper.

I’ve broken down this entire process step-by-step in a short video 👇
🎥 https://lnkd.in/d2aaj64b

Curious: how do you usually approach backend performance issues?

#backend #systemdesign #softwareengineering #performance #backenddevelopment #programming #developers #coding #fastapi #scalability
How I Find Backend Bottlenecks (Real Optimization Process 🔥)
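The "break latency into stages" habit from the post can be sketched by timing each stage separately rather than only the total (stage work is simulated with sleeps; the durations are illustrative):

```java
// Time each stage of a request separately instead of only the total,
// so the dominant stage is obvious before any optimization starts.
public class LatencyBreakdown {
    static long timedMs(Runnable stage) {
        long start = System.nanoTime();
        stage.run();
        return (System.nanoTime() - start) / 1_000_000; // elapsed milliseconds
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        long db = timedMs(() -> sleep(60));        // simulated DB query
        long logic = timedMs(() -> sleep(20));     // simulated processing
        long serialize = timedMs(() -> sleep(5));  // simulated response building
        System.out.println("db is the biggest stage: " + (db > logic && logic > serialize));
    }
}
```

In a real service the `Runnable`s would wrap the actual query, business logic, and serialization, and the timings would land in logs or metrics rather than stdout.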
Why idempotency saved us from duplicate financial transactions 💳

One thing you learn quickly in real-time payment systems: retries are not optional, and duplicates are dangerous.

Recently I ran into a scenario where a payment API was being retried due to intermittent network timeouts. From the client side, it looked simple: “Request failed → retry.” But on the backend, that retry could have resulted in duplicate transactions — which is unacceptable in financial systems.

🔍 What actually happens under the hood:
1️⃣ Client sends payment request
2️⃣ Backend successfully processes the payment
3️⃣ Response times out due to network issues
4️⃣ Client retries the same request
5️⃣ Backend receives it again

👉 Now the system cannot tell: is this a new payment or a retry of the same one? Without protection → 💥 duplicate charge

💡 What is idempotency?
Idempotency means: “No matter how many times you send the same request, the result remains the same.”

🛠️ What we implemented:
• Introduced idempotency keys (unique per request)
• Stored request state in Redis (PROCESSING / SUCCESS / FAILED)
• Checked existing state before processing
• Returned cached response for duplicate retries

🔁 How it works in practice:
• First request → processed normally
• Retry with same key → no reprocessing
• Same response returned
👉 Client thinks the retry succeeded
👉 System avoids a duplicate transaction

🚀 Real impact:
• Prevented duplicate financial transactions
• Improved reliability during retries
• Ensured consistent system behavior

⚠️ The hidden complexity:
Idempotency is NOT just “check if the request exists.” It involves:
• Handling partial failures (what if the payment succeeds but the state isn’t saved?)
• Managing TTL (how long to keep keys?)
• Handling concurrency (same request at the same time)
• Maintaining consistency across services

🧠 Key takeaway:
In distributed systems, failures are guaranteed and retries are inevitable. So your system must be safe to retry.

📌 My biggest learning:
The difference between a working system and a production-ready system is how it behaves when things go wrong.

Curious — how are you handling idempotency in your systems?

#SystemDesign #DistributedSystems #BackendEngineering #Payments #Microservices #Java #SpringBoot #TechInsights
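An in-memory sketch of the key-based replay described above, with a `ConcurrentHashMap` standing in for Redis. `putIfAbsent` makes the first-writer-wins check atomic; the simplification here is that a concurrent duplicate would receive the raw `PROCESSING` marker, which a real system must turn into "wait or poll":

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Idempotency keys: the side effect runs exactly once per key;
// retries get the stored result instead of a second charge.
public class IdempotencyDemo {
    static final Map<String, String> RESULTS = new ConcurrentHashMap<>();
    static int chargesExecuted = 0;

    static String charge(String idempotencyKey, long amount) {
        // Atomically claim the key; a non-null return means someone already did.
        String existing = RESULTS.putIfAbsent(idempotencyKey, "PROCESSING");
        if (existing != null) return existing;   // duplicate: replay stored state
        chargesExecuted++;                       // the real side effect, exactly once
        String result = "CHARGED:" + amount;
        RESULTS.put(idempotencyKey, result);     // PROCESSING -> SUCCESS
        return result;
    }

    public static void main(String[] args) {
        System.out.println(charge("key-1", 500)); // first attempt: processes
        System.out.println(charge("key-1", 500)); // retry: replayed, no new charge
        System.out.println("executions=" + chargesExecuted);
    }
}
```

Swapping the map for Redis `SET key value NX` with a TTL gives the same claim semantics across instances, plus the key-expiry question the post raises.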
𝗥𝗮𝘁𝗲 𝗟𝗶𝗺𝗶𝘁𝗶𝗻𝗴 𝘀𝗼𝘂𝗻𝗱𝘀 𝘀𝗶𝗺𝗽𝗹𝗲 𝘂𝗻𝘁𝗶𝗹 𝘆𝗼𝘂 𝗵𝗮𝘃𝗲 𝟱𝟬𝟬 𝗻𝗼𝗱𝗲𝘀. 🌐

When you're building a side project, a simple middleware is enough to stop a bot. But when you're building for 𝗚𝗹𝗼𝗯𝗮𝗹 𝗖𝗮𝗽𝗮𝗯𝗶𝗹𝗶𝘁𝘆 𝗖𝗲𝗻𝘁𝗲𝗿 (𝗚𝗖𝗖) level scale, "simple" breaks.

Here is how the architectural thinking evolves:

🌱 𝗝𝘂𝗻𝗶𝗼𝗿 𝗟𝗲𝘃𝗲𝗹
"𝘐'𝘭𝘭 𝘫𝘶𝘴𝘵 𝘢𝘥𝘥 𝘢 𝘤𝘰𝘶𝘯𝘵𝘦𝘳 𝘪𝘯 𝘮𝘦𝘮𝘰𝘳𝘺."
(Result: Fails immediately. Each node has its own state; you have no global truth.)

⚖️ 𝗠𝗶𝗱-𝗟𝗲𝘃𝗲𝗹
"𝘐'𝘭𝘭 𝘶𝘴𝘦 𝘙𝘦𝘥𝘪𝘴 𝘸𝘪𝘵𝘩 𝘢 𝘣𝘢𝘴𝘪𝘤 𝘒𝘦𝘺-𝘝𝘢𝘭𝘶𝘦 𝘪𝘯𝘤𝘳𝘦𝘮𝘦𝘯𝘵."
(Result: Better, but prone to Race Conditions. Two simultaneous requests can read the same counter before either updates it.)

🛠️ 𝗦𝘁𝗮𝗳𝗳 𝗟𝗲𝘃𝗲𝗹
"𝘞𝘦 𝘯𝘦𝘦𝘥 𝘢 𝘎𝘦𝘯𝘦𝘳𝘪𝘤 𝘊𝘦𝘭𝘭 𝘙𝘢𝘵𝘦 𝘈𝘭𝘨𝘰𝘳𝘪𝘵𝘩𝘮 (𝘎𝘊𝘙𝘈) 𝘰𝘳 𝘢 𝘓𝘦𝘢𝘬𝘺 𝘉𝘶𝘤𝘬𝘦𝘵 𝘪𝘮𝘱𝘭𝘦𝘮𝘦𝘯𝘵𝘢𝘵𝘪𝘰𝘯 𝘶𝘴𝘪𝘯𝘨 𝘓𝘶𝘢 𝘴𝘤𝘳𝘪𝘱𝘵𝘴 𝘪𝘯 𝘙𝘦𝘥𝘪𝘴."

𝗪𝗵𝘆 𝘁𝗵𝗲 "𝗦𝘁𝗮𝗳𝗳" 𝗮𝗽𝗽𝗿𝗼𝗮𝗰𝗵 𝘄𝗶𝗻𝘀: 🏆
By moving the logic into a Redis Lua script, you ensure that the check-and-update happens in a single atomic step. No race conditions. No over-limit leaks. Just precise, distributed control.

𝗧𝗵𝗲 𝗟𝗲𝘀𝘀𝗼𝗻: 💡
System design isn't about knowing the tool; it's about knowing where the tool's "guarantees" end and where your architecture must take over.

💬 𝗪𝗵𝗶𝗰𝗵 𝗼𝗻𝗲 𝗱𝗼 𝘆𝗼𝘂 𝗽𝗿𝗲𝗳𝗲𝗿 𝗳𝗼𝗿 𝗧𝗶𝗲𝗿-𝟭 𝗔𝗣𝗜𝘀? Are you in 𝘛𝘦𝘢𝘮 𝘛𝘰𝘬𝘦𝘯 𝘉𝘶𝘤𝘬𝘦𝘵 or 𝘛𝘦𝘢𝘮 𝘍𝘪𝘹𝘦𝘥 𝘞𝘪𝘯𝘥𝘰𝘸? Let’s debate in the comments. 👇

#SystemDesign #Redis #SoftwareArchitecture #Scalability #BackendEngineering #GCC #PuneTech
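For intuition, here is a single-node token bucket sketch in Java. `synchronized` provides the same atomic check-and-update within one process that the Lua script provides across Redis clients (capacity and refill rate are illustrative):

```java
// Token bucket: a burst of `capacity` requests passes immediately,
// after which requests are admitted at the steady refill rate.
public class TokenBucket {
    private final long capacity;
    private final double refillPerNano;
    private double tokens;
    private long lastRefill;

    TokenBucket(long capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = refillPerSecond / 1_000_000_000.0;
        this.tokens = capacity;
        this.lastRefill = System.nanoTime();
    }

    // Refill-then-spend in one atomic step; this is the property the
    // Lua-script approach preserves under 500 nodes.
    synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
        if (tokens >= 1) { tokens -= 1; return true; }
        return false;
    }

    public static void main(String[] args) {
        TokenBucket bucket = new TokenBucket(3, 1); // burst of 3, then 1 token/sec
        int allowed = 0;
        for (int i = 0; i < 10; i++) if (bucket.tryAcquire()) allowed++;
        System.out.println("allowed=" + allowed); // only the burst passes
    }
}
```

A fixed window is simpler but admits 2x the limit at window boundaries; the bucket's continuous refill is what smooths that edge.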
One thing I find interesting: Virtual threads make synchronous code feel scalable again. That’s a big shift compared to callback-heavy or reactive approaches.