Spring Developers Adopt @TransactionalEventListener for Reliable Post-Commit Actions

📌 Spring developers are embracing @TransactionalEventListener to ensure critical post-commit actions, such as sending emails or updating caches, run only after the database transaction has successfully committed. This prevents inconsistent states and strengthens data integrity in complex workflows. By decoupling side effects from core business logic, teams gain both reliability and performance, especially when paired with async processing for long-running tasks.

🔗 Read more: https://lnkd.in/dUFppiHb

#Springframework #Databasetransaction #Postcommitaction
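In code, the pattern looks roughly like this. A minimal sketch, assuming hypothetical `Order`, `OrderPlacedEvent`, and listener names that are not from the article:

```java
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.transaction.event.TransactionPhase;
import org.springframework.transaction.event.TransactionalEventListener;

@Service
class OrderService {
    private final ApplicationEventPublisher publisher;

    OrderService(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    @Transactional
    public void placeOrder(Order order) {
        // ... persist the order via a repository (omitted) ...
        publisher.publishEvent(new OrderPlacedEvent(order.getId()));
        // If this transaction rolls back, the listener below never fires.
    }
}

@Component
class OrderNotifier {
    // AFTER_COMMIT is the default phase: the handler runs only once the
    // surrounding transaction has committed successfully.
    @Async
    @TransactionalEventListener(phase = TransactionPhase.AFTER_COMMIT)
    public void onOrderPlaced(OrderPlacedEvent event) {
        // send the email / refresh the cache here
    }
}
```

Pairing the listener with @Async, as the article suggests for long-running work, keeps the commit path fast; it additionally requires @EnableAsync on a configuration class.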
⚠️ Day 13 — The Lost Update Bug

Two users update the same data… One update silently disappears. No errors. No crashes. Still wrong.

### The Setup
Two requests hit your API at the same time:
👉 User A updates profile
👉 User B updates profile
Both read the same old value.

### The Problem
The flow looks like this:
1. Read current data
2. Modify
3. Save

But:
- A reads value = 100
- B reads value = 100
- A updates → 120
- B updates → 130
👉 Final value = 130
👉 A’s update is lost ❌

### ❌ The Mistake
Assuming single-user access in a multi-user system.
👉 Concurrency issues are invisible… until production.

### ✅ The Fix: Optimistic Locking
Use versioning to detect conflicts:
👉 Add a version field
👉 Check it before updating
👉 If the version changed → retry

### 🔥 Production-Grade Approach
- Use @Version (JPA)
- Implement a retry mechanism
- Use pessimistic locking (if needed)
- Design APIs to handle conflicts gracefully

### 🧠 Senior Insight
Concurrency bugs don’t fail loudly. They corrupt data silently.

### 🎯 The Lesson
In multi-user systems:
👉 Conflicts are inevitable
👉 Data loss is preventable

If multiple users can update the same data…
👉 You need concurrency control.

#BackendDevelopment #Java #SpringBoot #Microservices #SystemDesign #Concurrency #DistributedSystems #Database
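The version check above can be simulated in plain Java. No JPA here; the class and field names are illustrative, not from the post, but the logic is what a @Version column enforces at UPDATE time:

```java
// Plain-Java simulation of the optimistic-locking fix described above.
class VersionedRecord {
    int value;
    int version = 1;

    VersionedRecord(int value) {
        this.value = value;
    }

    // The update succeeds only if the caller still holds the version it read.
    synchronized boolean update(int expectedVersion, int newValue) {
        if (expectedVersion != version) {
            return false; // conflict detected: re-read and retry
        }
        value = newValue;
        version++;
        return true;
    }
}

public class LostUpdateDemo {
    public static void main(String[] args) {
        VersionedRecord record = new VersionedRecord(100);
        int versionReadByA = record.version; // A reads value=100, version=1
        int versionReadByB = record.version; // B reads the same snapshot

        System.out.println(record.update(versionReadByA, 120)); // true: A wins
        System.out.println(record.update(versionReadByB, 130)); // false: B must retry
        System.out.println(record.value); // 120: A's update is NOT lost
    }
}
```

B's rejected update is the retry branch: re-read the record (now at version 2), reapply the change, and try again.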
Spent 25 minutes wondering why my @Transactional was not rolling back on exception.

The service method threw an exception, but the data was still saved to the database. Checked the logs, and the transaction was committing anyway.

The problem was that I was catching the exception inside the method:

```java
@Transactional
public void saveUser(User user) {
    try {
        userRepository.save(user);
        throw new RuntimeException("Something went wrong");
    } catch (Exception e) {
        log.error("Error saving user", e);
    }
}
```

Spring only rolls back when the exception propagates out of the method (and, by default, only for unchecked exceptions; checked exceptions don't trigger rollback unless you set rollbackFor). If you catch it inside, Spring thinks everything is fine and commits.

The fix was letting the exception propagate, or using TransactionAspectSupport to mark rollback manually:

```java
@Transactional
public void saveUser(User user) {
    try {
        userRepository.save(user);
        throw new RuntimeException("Something went wrong");
    } catch (Exception e) {
        TransactionAspectSupport.currentTransactionStatus().setRollbackOnly();
        log.error("Error saving user", e);
    }
}
```

Small detail, but it can cause serious data integrity issues.

What transaction gotcha has caught you before?

#Java #SpringBoot #Transactions #Debugging #BackendDevelopment
🚨 Two Users Paid… But Only One Order Was Created

Sounds like a bug, right? It wasn’t.

💥 The situation:
Two users hit the same endpoint at almost the same time. Both:
- Checked product availability
- Saw “In Stock”
- Placed the order

❌ What went wrong?
👉 Both transactions read the same data
👉 Both assumed it was safe to proceed
Result: 💥 Inconsistent data

⚡ The real problem:
Not the API. Not the database.
👉 The isolation level.

🔹 What I learned:
Different isolation levels change how transactions behave:
- Read Uncommitted → allows dirty reads
- Read Committed → safer, but still allows non-repeatable reads
- Repeatable Read → prevents inconsistent reads
- Serializable → strict, but slower

🧠 The key realization:
👉 “Working logic” is not enough in concurrent systems.
You need to control:
- how data is read
- when it is locked
- how conflicts are handled

💡 What I’m focusing on:
Designing backend systems that handle concurrency, consistency, and real-world race conditions.

👉 Would you prioritize strict consistency or performance under high load?

#BackendDeveloper #JavaDeveloper #SpringBoot #Database #Transactions #SystemDesign #SoftwareEngineering #TechHiring
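In Spring, the isolation level can be raised for exactly this check-then-insert flow. A hedged sketch, where the service and repository methods are assumptions for illustration, not a real API:

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Service
class OrderService {
    // SERIALIZABLE makes the availability check and the insert behave as if
    // transactions ran one at a time, at the cost of throughput. The common
    // alternative is keeping READ_COMMITTED and locking the stock row instead.
    @Transactional(isolation = Isolation.SERIALIZABLE)
    public void placeOrder(String productId) {
        if (inventoryRepository.isInStock(productId)) {
            orderRepository.createOrder(productId);
            inventoryRepository.decrementStock(productId);
        }
    }
}
```

Under SERIALIZABLE, the second transaction either waits or fails with a serialization error and must retry, so both users can no longer succeed on the last unit of stock.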
🚀 New Video: Why @Transactional is Important in Spring Boot

What happens if:
✔ Employee is saved
❌ IdCard fails
👉 You get inconsistent data.

This is where @Transactional saves you.

💡 Simple idea: Either everything succeeds… or nothing is saved.

In this video, I show:
✔ The real problem (partial data save)
✔ How rollback works
✔ Why transactions are critical in real systems

🎥 Watch here: https://lnkd.in/dN3Duxnj

#SpringBoot #JPA #Java #BackendDevelopment #Hibernate
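The all-or-nothing behavior described here comes down to one annotated method. A minimal sketch; the entity names follow the post's example, but the repositories and method are assumed for illustration:

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
class OnboardingService {
    private final EmployeeRepository employeeRepository;
    private final IdCardRepository idCardRepository;

    OnboardingService(EmployeeRepository employeeRepository,
                      IdCardRepository idCardRepository) {
        this.employeeRepository = employeeRepository;
        this.idCardRepository = idCardRepository;
    }

    // Both saves run in one transaction: if the IdCard insert throws,
    // the Employee insert is rolled back as well.
    @Transactional
    public void onboard(Employee employee, IdCard idCard) {
        employeeRepository.save(employee);
        idCardRepository.save(idCard);
    }
}
```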
Saga eliminated the blocking problem. But a different failure mode remained — one that is harder to detect and equally damaging.

⚠️ The Dual-Write Problem
Two operations. No atomic guarantee between them.
🔹 Database write succeeds — yard entry and shipment updated.
🔹 Event publish fails — Kafka unavailable, network timeout.
🔹 System state says the order exists.
🔹 Billing, tracking, analytics — never notified.
No exception. No alert. Silent inconsistency at scale.

🔥 Real Production Scenario
During a Kafka broker restart in yard services:
🔹 Shipment updates persisting to the DB successfully.
🔹 Event publishing silently failing throughout.
🔹 30 minutes of yard activity — zero events delivered downstream.
🔹 Billing and tracking operating on completely stale state.
🔹 Manual reconciliation required — hours of engineering effort lost.
The fix was not retry logic. It was eliminating the dual write entirely.

🟡 The Solution — Outbox Pattern
Treat the event as part of the database transaction — not a separate operation.
🔹 Yard entry + shipment update + event written to OUTBOX — one transaction.
🔹 Background processor reads OUTBOX and publishes to Kafka.
🔹 If the transaction commits — the event is guaranteed to exist.
🔹 If the transaction rolls back — the event never existed.
No dual write. No silent data loss. No distributed transaction needed.

💡 Key Takeaways
👉 Dual writes are a hidden reliability risk in every event-driven system.
👉 Outbox makes the event part of the transaction — atomicity guaranteed.
👉 Kafka unavailability does not cause data loss — events wait safely in the OUTBOX.
👉 Consumers must be idempotent — at-least-once delivery is the contract.
👉 A slight publishing delay is acceptable — losing an event is not.

Saga handles business flow. Outbox handles event reliability. Together, they address two distinct failure modes that no single pattern solves alone.
Next → Final comparison: 2PC vs Saga vs Outbox — when to use which #SpringBoot #Microservices #Kafka #SystemDesign #BackendDevelopment #DistributedSystems #Java #SoftwareEngineering
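The core outbox idea can be sketched with an in-memory stand-in. A real implementation writes to an OUTBOX table in the same database transaction; all class and method names here are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

public class OutboxStore {
    final List<String> shipments = new ArrayList<>();
    final List<String> outbox = new ArrayList<>();

    // The state change and the event are recorded in one atomic step,
    // standing in for "UPDATE shipment + INSERT INTO outbox" in one commit.
    public synchronized void saveShipmentWithEvent(String shipmentId) {
        shipments.add(shipmentId);
        outbox.add("ShipmentUpdated:" + shipmentId);
    }

    // Background processor: drain pending events and hand them to the broker.
    // If Kafka is down, rows simply stay in the outbox until the next pass.
    public synchronized List<String> drainOutbox() {
        List<String> batch = new ArrayList<>(outbox);
        outbox.clear();
        return batch;
    }

    public static void main(String[] args) {
        OutboxStore store = new OutboxStore();
        store.saveShipmentWithEvent("SHP-1");
        System.out.println(store.drainOutbox()); // [ShipmentUpdated:SHP-1]
    }
}
```

Because the processor may publish a row, crash, and publish it again after restart, delivery is at-least-once, which is why the post insists consumers must be idempotent.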
One of the most overlooked performance killers in backend systems: Excessive Logging

Many applications have clean architecture, optimized queries, and scalable infrastructure — yet still suffer from performance loss because of excessive logging in frequently executed flows.

Common examples:
• Logging inside loops processing thousands of records
• Debug logs with expensive string construction
• Serializing large objects only for logging
• Writing too many synchronous logs under load

Simple view of request processing time:
Business Logic = 120 ms
Database = 80 ms
Logging Overhead = 95 ms
Total = 295 ms

Better approach:
• Use parameterized logging (log.info("User {}", id))
• Avoid logs inside heavy loops
• Use async logging where appropriate
• Keep DEBUG logs disabled in production
• Log signals, not noise

Lesson: Sometimes the system is slow not because of the database or business logic — but because we are logging too much.

Good logging helps production. Bad logging becomes production load.

#Java #SpringBoot #BackendDevelopment #Performance #Logging #SeniorDeveloper #SoftwareEngineering
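The "expensive string construction" point can be demonstrated with plain java.util.logging; SLF4J's parameterized logging achieves the same deferral, but this self-contained demo (all names illustrative) avoids the extra dependency:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LoggingCostDemo {
    private static final Logger log = Logger.getLogger(LoggingCostDemo.class.getName());

    static int expensiveCalls = 0;

    // Stand-in for serializing a large object just to build a log message.
    static String expensiveDump() {
        expensiveCalls++;
        return "...huge serialized payload...";
    }

    public static void main(String[] args) {
        log.setLevel(Level.INFO); // FINE (debug) is disabled, as in production

        // Eager: the expensive message is built even though the log is dropped.
        log.fine("state=" + expensiveDump());

        // Lazy: the Supplier runs only if FINE is actually enabled,
        // so the expensive work is skipped entirely here.
        log.fine(() -> "state=" + expensiveDump());

        System.out.println("expensive calls: " + expensiveCalls);
    }
}
```

Only the eager call pays the cost; the lazy one is free when the level is disabled, which is exactly what `log.info("User {}", id)` buys you in SLF4J.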
I didn’t expect duplicate data to become this tricky.

Recently, while working on a backend feature, I noticed something off: the same data was getting stored multiple times in the database. When I tried to fetch it, I was getting duplicate records.

At first, I thought it was just a one-time issue. But after checking further, it turned out to be happening consistently in certain cases.

The root cause was multiple requests hitting the same flow, with no proper checks or validations in place to prevent duplicate inserts.

To fix this, I made a few changes:
- Added validation before inserting data
- Introduced unique constraints at the database level
- Handled edge cases where repeated requests could happen

After that, the duplicates stopped, and the data became more reliable.

It was a good reminder for me: relying only on application logic is not enough. Both the application and the database should enforce rules where it matters.

Sometimes, clean data is not just about writing correct code. It’s about designing the system to prevent mistakes.

#Java #BackendDevelopment #Database #SystemDesign #SpringBoot #LearningInPublic
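The "let the database enforce it" idea in miniature: an atomic put-if-absent behaves like an INSERT guarded by a unique constraint. In real code this would be a UNIQUE index plus handling the constraint-violation exception; the names below are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DuplicateSafeStore {
    // Stands in for a table whose businessKey column has a UNIQUE constraint.
    private final Map<String, String> rows = new ConcurrentHashMap<>();

    // putIfAbsent is atomic, so two concurrent requests with the same key
    // cannot both succeed, exactly what the unique constraint guarantees.
    public boolean insert(String businessKey, String payload) {
        return rows.putIfAbsent(businessKey, payload) == null;
    }

    public static void main(String[] args) {
        DuplicateSafeStore store = new DuplicateSafeStore();
        System.out.println(store.insert("order-123", "first")); // true
        System.out.println(store.insert("order-123", "retry")); // false
    }
}
```

An application-level "check then insert" without this atomicity is itself a race, which is why the post's fix needed the database constraint, not just validation.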
🚀 Excited to share my latest project: Fraud Transaction Alert & Monitoring System!

I’ve been working on a full-stack solution to tackle real-time financial security. This project focuses on detecting and flagging suspicious transactions to keep digital payments safer.

Key Technical Highlights:
🔹 Backend: Built with Java and Spring Boot using a scalable Service-ServiceImpl architecture.
🔹 Data Handling: Implemented DTOs (Data Transfer Objects) for secure and efficient data mapping.
🔹 Database: Managed with MySQL and Spring Data JPA for seamless persistence.
🔹 API Testing: Verified all endpoints thoroughly using Postman.

This project was a great way to deepen my understanding of backend logic and REST API design. Check out the full code and documentation on my GitHub!

Project Link: https://lnkd.in/gxqbr5Cg

#Java #SpringBoot #BackendDevelopment #FullStack #SoftwareEngineering #ProjectShowcase #LearningByDoing
1. ThreadLocal: The "Private Locker" Strategy

When you use ThreadLocal, each thread gets its own independent copy of a variable. There is no shared data, so there is no contention.

The Use Case: Imagine handling multiple money transfer requests.
Request 1: Customer C101, Txn ID: TXN1001
Request 2: Customer C202, Txn ID: TXN2001

We use ThreadLocal to store the Transaction ID so that every log or service call within that thread knows which transaction it’s working on, without passing it as a method parameter everywhere.

```java
public class RequestContext {
    private static final ThreadLocal<String> txnId = new ThreadLocal<>();

    public static void setTxnId(String id) { txnId.set(id); }
    public static String getTxnId() { return txnId.get(); }
    public static void clear() { txnId.remove(); } // Always clean up!
}
```

Why not synchronize here? Because Thread 1 doesn't care about Thread 2's ID. We need isolation, not locking.

2. Synchronization: The "Gatekeeper" Strategy

We use synchronized when threads must access the exact same piece of data (like a bank balance). If two threads try to debit the same account at the exact same time, you’ll end up with incorrect data without a lock.

```java
public synchronized void debit(int amount) {
    if (balance >= amount) {
        balance -= amount;
    }
}
```

Why not use ThreadLocal here? If each thread had its own "copy" of the balance, the actual account would never be updated globally. We need consistency, which requires a lock.

Key Takeaway:
- Use ThreadLocal when you want to avoid synchronization overhead for data that is specific to a thread's execution context (e.g., User IDs, DB Connections, Transaction IDs).
- Use synchronized when threads must modify the same shared resource and you need to ensure data integrity.

#Java #BackendDevelopment #SoftwareEngineering #MultiThreading #Concurrency #JavaPerformance #CodingTips #Programming #SystemDesign
3 AM. Production is burning.

That one query everyone ignored is now taking 47 seconds, and our checkout flow is dead. The executive wants answers. Users are complaining. I'm the one holding the pager.

This exact scenario happened to me twice, at two different companies. Both times, it was preventable. Both times, these 15 patterns saved us. 47 seconds became 12.7 seconds. 73% improvement.

The brutal truth? Most of these optimizations should have been caught in code review. But when you're shipping fast and technical debt is piling up, query performance gets pushed to 'later.'

Save this list. You'll need it when your turn comes. Because it will come. Every engineer gets their 3 AM query debugging baptism eventually.

#viral #trending #trend #sql #database #performance #optimization #engineering #debugging #backend #postgresql #mysql #queryoptimization #databasetuning #softwareengineering #production #scalability #techdebt