Deep-diving into the Elasticsearch Core: Addressing Field Cap Inconsistencies

I’ve just submitted a Pull Request (#146105) to Elasticsearch to address a nuanced bug (#109797) in the Field Capabilities API.

The Problem: when an alias points to multiple indices, the Field Capabilities API can return inconsistent results. Parent object fields leak into type-filtered responses, and an "unmapped" state from one index can overshadow a valid mapping in another.

The Fix (currently under review): I’m proposing an update to the mapping coordination logic to strictly enforce type parameters. The goal is to ensure the API response is atomic: if you ask for a keyword, you get only keywords, with no leaky parent objects, and the field type remains consistent across every index behind the alias.

Why this is a fun challenge:
- Distributed Coordination: merging responses from multiple shards requires absolute precision.
- Scale: in a system used by millions, a "small" API inconsistency can have massive ripple effects on data integrity.

It’s been a great experience digging into TransportFieldCapsAction and seeing how the Elastic team manages such a complex Java codebase. Looking forward to the review process!

Check out the PR here: https://lnkd.in/g2AnUPH6

Special thanks to the Elastic team for the great codebase.

#Java #Elasticsearch #OpenSource #Backend #DistributedSystems #SoftwareEngineering #BuildInPublic
🚨 The N+1 Query Problem — The Silent Killer of Backend Performance

Most developers don’t notice it… until production latency explodes.

👉 What is it?
You run 1 query to fetch data… then N additional queries inside a loop to fetch related data.
Total queries = 1 + N

💥 Why it’s dangerous
• Latency grows linearly with data size
• DB connections get exhausted
• Throughput drops under load
• Becomes a major bottleneck at scale

🧠 Real example
Fetching 100 users → triggers 101 queries (1 for users + 100 for their orders).
Sounds small… until traffic hits.

⚠️ Root cause
Lazy loading in ORMs (like Hibernate), and accessing relations inside loops without thinking about query execution.

✅ How to fix it
• Use JOIN FETCH (fetch in one query)
• Use batch fetching (IN queries)
• Prefer DTO projections for heavy reads
• Monitor queries using logs / tracing tools

🚀 Staff-level insight
N+1 is not just a DB issue. It appears in distributed systems too:
API Gateway → 1 service call, then → N downstream service calls.
Same problem. Bigger impact.

💡 Takeaway
N+1 is a design problem, not just a query problem. If you don’t control your data access pattern, your system will control your latency.

#SystemDesign #Backend #Performance #Java #SpringBoot #Databases #Scalability #Tech
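The 1 + N arithmetic is easy to see in a framework-free sketch. Below is plain Java with a hypothetical in-memory "database" that only counts queries; it is not Hibernate, just an illustration of why the batched IN-query path issues 2 queries instead of 101:

```java
import java.util.*;
import java.util.stream.*;

public class NPlusOneDemo {
    // Simulated database that counts how many queries it receives.
    static int queryCount = 0;

    static List<Long> fetchUserIds(int n) {                 // 1 query for users
        queryCount++;
        return LongStream.rangeClosed(1, n).boxed().collect(Collectors.toList());
    }

    static List<String> fetchOrdersForUser(long userId) {   // 1 query per user
        queryCount++;
        return List.of("order-" + userId);
    }

    static Map<Long, List<String>> fetchOrdersForUsers(List<Long> ids) { // 1 IN query
        queryCount++;
        return ids.stream().collect(Collectors.toMap(id -> id, id -> List.of("order-" + id)));
    }

    // N+1 pattern: one query for users, then one per user inside the loop.
    static int naiveQueryCount(int users) {
        queryCount = 0;
        for (long id : fetchUserIds(users)) fetchOrdersForUser(id);
        return queryCount;
    }

    // Batched pattern: one query for users, one IN query for all their orders.
    static int batchedQueryCount(int users) {
        queryCount = 0;
        fetchOrdersForUsers(fetchUserIds(users));
        return queryCount;
    }

    public static void main(String[] args) {
        System.out.println("naive:   " + naiveQueryCount(100));   // 101
        System.out.println("batched: " + batchedQueryCount(100)); // 2
    }
}
```

The same counting argument applies to the gateway case: N downstream calls per request scale exactly like N child queries per row.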
Here's a quick look at some open source data compression work out of Dynatrace Research. This post provides a lucid overview of FM-Index, a data structure that combines powerful compression with fast search, and introduces Dynatrace's open source Java implementation of it. https://lnkd.in/gZB4WeUb
🚀 Deep Internal Flow of a REST API Call in Spring Boot

🧭 1. Entry Point — The Gatekeeper
DispatcherServlet is the front controller. Every HTTP request must pass through this single door.
FLOW: Client → Tomcat (Embedded Server) → DispatcherServlet

🗺️ 2. Handler Mapping — Finding the Target
DispatcherServlet asks: “Who can handle this request?” It consults RequestMappingHandlerMapping, which scans:
* @RestController
* @RequestMapping
FLOW: DispatcherServlet → HandlerMapping → Controller Method Found

⚙️ 3. Handler Adapter — Executing the Method
Once the method is found, Spring doesn’t call it directly. It uses RequestMappingHandlerAdapter. Why? Because it handles:
* Parameter binding
* Validation
* Conversion
FLOW: HandlerMapping → HandlerAdapter → Controller Method Invocation

🧭 4. Request Flow (Forward):
Controller → Service Layer (business logic) → Repository Layer → Database

🔄 5. Response Processing — The Return Journey
Now the response travels back upward:
Repository → Service → Controller → DispatcherServlet → Tomcat → Client

————————————————
⚡ Hidden Magic (Senior-Level Insights)

🧵 Thread Handling
* Each request runs on a separate thread from Tomcat’s pool

🔒 Transaction Management
* Managed via @Transactional
* Proxy-based AOP behind the scenes

🎯 Dependency Injection
* Beans wired by the Spring IoC container

🧠 AOP (Cross-Cutting)
* Logging, security, transactions wrapped around methods

⚡ Performance Layers
* Caching (Spring Cache)
* Connection pooling (HikariCP)

————————————————
🧠 The Real Insight
At junior level I thought: 👉 “API call hits controller”
At senior level I observe: 👉 “A chain of abstractions collaborates through well-defined contracts under the orchestration of DispatcherServlet”

#Java #SpringBoot #RestApi #FullStack #Developer #AI #ML #Foundations #Security
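The HandlerMapping → HandlerAdapter hand-off described above can be mimicked in a few lines of plain Java. This is a toy front controller, not Spring’s real API — the route table and handler functions are illustrative stand-ins for RequestMappingHandlerMapping and controller methods:

```java
import java.util.*;
import java.util.function.Function;

public class FrontControllerDemo {
    // "HandlerMapping": route -> handler, like RequestMappingHandlerMapping's registry.
    static final Map<String, Function<String, String>> handlers = new HashMap<>();

    static {
        handlers.put("GET /users", id -> "user-" + id);   // a "controller method"
        handlers.put("GET /orders", id -> "order-" + id);
    }

    // "DispatcherServlet": the single entry point that looks up, then invokes.
    static String dispatch(String route, String param) {
        Function<String, String> handler = handlers.get(route); // HandlerMapping step
        if (handler == null) return "404";
        return handler.apply(param);                            // HandlerAdapter step
    }

    public static void main(String[] args) {
        System.out.println(dispatch("GET /users", "42"));  // user-42
        System.out.println(dispatch("GET /pets", "1"));    // 404
    }
}
```

The real DispatcherServlet adds parameter binding, validation, and message conversion around that second step, but the lookup-then-invoke shape is the same.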
Another day, another `StaleObjectStateException`. But this time… let’s look at it from an Architect’s lens.

In distributed systems, failures are loud. But **data conflicts are silent… until they aren’t.** This exception is not noise. It’s a **signal of competing realities** inside your system.

---

🔍 What actually happened?

Two independent flows believed: 👉 “I have the latest state.” Only one was right.

Hibernate, through **optimistic locking**, enforces a simple rule:

> *“You can only update what you truly own — version included.”*

When that assumption breaks → **`StaleObjectStateException` is thrown.**

---

🏗️ **Architectural Insight**

This is not just about ORM. This is about **concurrency design decisions**:

* Are we allowing **parallel writes** on the same aggregate?
* Do we understand **transaction boundaries** clearly?
* Are we mixing **sync + async updates** without coordination?
* Is our system **eventually consistent… or accidentally inconsistent?**

---

⚠️ **Where systems go wrong**

* Treating DB rows as isolated records instead of **domain aggregates**
* Ignoring **versioning strategy**
* Designing APIs without **idempotency**
* Overusing long-running transactions
* Missing a **conflict resolution strategy**

---

🛠️ **Architect-level thinking**

✔️ Model strong **aggregate boundaries**
✔️ Use `@Version` not as an annotation, but as a **contract**
✔️ Embrace **retry + reconciliation patterns**
✔️ Prefer **event-driven updates** where contention is high
✔️ Design APIs to be **idempotent and conflict-aware**

---

💭 The real lesson

`StaleObjectStateException` is not a Hibernate problem. It’s a **design conversation your system is forcing you to have.**

Ignore it → you get silent data corruption.
Respect it → you build resilient, concurrent systems.

Concurrency is not an edge case. It is the system.

#Architecture #SystemDesign #Concurrency #Java #Hibernate #DistributedSystems
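A minimal sketch of the optimistic-locking rule in plain Java. No Hibernate here: the `Row` record and `update` method are hypothetical stand-ins for a versioned entity and an `UPDATE ... WHERE version = ?` statement, showing how the second of two "competing realities" loses:

```java
import java.util.concurrent.atomic.AtomicReference;

public class OptimisticLockDemo {
    // Snapshot of a row: the value plus its @Version-style counter.
    record Row(String value, long version) {}

    static final AtomicReference<Row> table = new AtomicReference<>(new Row("initial", 0));

    static void reset() { table.set(new Row("initial", 0)); }

    /** Update succeeds only if the caller still holds the latest version. */
    static boolean update(String newValue, long expectedVersion) {
        Row current = table.get();
        if (current.version() != expectedVersion) {
            return false; // Hibernate would throw StaleObjectStateException here
        }
        return table.compareAndSet(current, new Row(newValue, expectedVersion + 1));
    }

    public static void main(String[] args) {
        reset();
        long v = table.get().version();        // both "flows" read version 0
        boolean first  = update("flow-A", v);  // wins; version becomes 1
        boolean second = update("flow-B", v);  // stale: still expects version 0
        System.out.println(first + " " + second); // true false
    }
}
```

The losing flow then has to do exactly what the post recommends: retry against the fresh state, or reconcile.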
Day 28. I fixed the N+1 problem. Or at least… I thought I did.

I had this:

@Entity
public class User {
    @OneToMany(mappedBy = "user", fetch = FetchType.LAZY)
    private List<Order> orders;
}

And I was careful. I wasn't using EAGER. I avoided obvious mistakes. Still… something felt off. The API was slow. Query count was high.

That's when I checked the logs. And saw this:
→ 1 query to fetch users
→ N queries to fetch orders

Again. Even after "fixing" it.

Here's what was actually happening. I was mapping entities to DTOs like this:

users.stream()
    .map(user -> new UserDTO(
        user.getId(),
        user.getName(),
        user.getOrders().size() // 👈 triggers lazy load per user
    ))
    .toList();

Looks harmless. But user.getOrders() triggers lazy loading, inside a loop, causing N+1 again.

That's when it clicked. N+1 isn't just about fetch type. It's about when and where you access relationships.

So I changed it. (see implementation below 👇)

What I learned:
→ LAZY doesn't mean safe
→ DTO mapping can silently trigger queries
→ N+1 often hides in transformation layers

The hard truth:
→ You think you fixed it
→ But it comes back in a different place

Writing queries is easy. Controlling when data is accessed is what makes your backend scalable.

Have you ever fixed N+1… and then seen it come back somewhere else? 👇 Drop your experience

#SpringBoot #Java #Hibernate #BackendDevelopment #Performance #JavaDeveloper
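One framework-free way to sketch that kind of fix: fetch every user's order count up front in a single grouped query (stubbed below; in real JPA this would be one GROUP BY query or a fetch join), then map DTOs from the resulting map so nothing lazy-loads inside the loop. All names here are illustrative:

```java
import java.util.*;
import java.util.stream.*;

public class DtoMappingDemo {
    record User(long id, String name) {}
    record UserDTO(long id, String name, int orderCount) {}

    static int queryCount = 0;

    // One grouped query: SELECT user_id, COUNT(*) FROM orders GROUP BY user_id
    // (stubbed: every user gets 3 orders for demonstration).
    static Map<Long, Integer> fetchOrderCounts(List<Long> userIds) {
        queryCount++;
        return userIds.stream().collect(Collectors.toMap(id -> id, id -> 3));
    }

    static List<UserDTO> toDtos(List<User> users) {
        queryCount = 0;
        Map<Long, Integer> counts = fetchOrderCounts(
                users.stream().map(User::id).toList());
        // No lazy loading inside the loop: the map already holds every count.
        return users.stream()
                .map(u -> new UserDTO(u.id(), u.name(), counts.getOrDefault(u.id(), 0)))
                .toList();
    }

    public static void main(String[] args) {
        List<User> users = List.of(new User(1, "Ada"), new User(2, "Linus"));
        System.out.println(toDtos(users).size() + " DTOs, " + queryCount + " extra query");
    }
}
```

The shape of the fix matters more than the stub: relationship access happens once, in bulk, before the transformation layer runs.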
Dead Letter Queue (When Messages Keep Failing Silently)

Built: a background system processing messages from a queue (orders, emails, events).

Problem I faced:
Everything worked fine… until some messages started failing. Then:
- The same message kept retrying
- Logs kept growing
- The queue got slower
- Some messages were never processed successfully
Worse part? Failures were getting buried in retries.

What was really happening:
Messages were failing repeatedly with no exit path. Every retry pushed them back into the queue, and they kept coming back again and again. The system was stuck in a loop.

How I fixed it:
Introduced a Dead Letter Queue (DLQ). Instead of retrying forever:
- Set a max retry limit
- After the limit → move the message to the DLQ
- Logged and monitored failed messages
- Added manual or automated reprocessing

Now:
- The queue stays clean
- Failures are isolated
- No infinite retry loops

What I learned:
Not every message should be retried forever. Some failures need attention — not repetition.

Simple mental model:
Think of a DLQ like a “quarantine zone”:
- Healthy messages → processed normally
- Problematic messages → isolated for inspection

Carousel breakdown:
Slide 1 → Messages failing repeatedly
Slide 2 → Infinite retries
Slide 3 → Queue slowdown
Slide 4 → Introduce DLQ
Slide 5 → Move failed messages
Slide 6 → Inspect & reprocess

Question: in your system, what happens to messages that keep failing… do they stop somewhere, or retry forever?

#Java #SpringBoot #Programming #SoftwareDevelopment #Cloud #AI #Coding #Learning #Tech #Technology #WebDevelopment #Microservices #API #Database #SpringFramework #Hibernate #MySQL #BackendDevelopment #CareerGrowth #ProfessionalDevelopment #RDBMS #PostgreSQL #backend
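The retry-limit-plus-quarantine flow can be sketched in plain Java. The queue, handler, and message shape below are illustrative; a real broker such as RabbitMQ or SQS implements the same idea with redelivery counts and a configured DLQ:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class DlqDemo {
    record Message(String body, int attempts) {}

    static final int MAX_RETRIES = 3;

    /** Drains the queue; messages failing MAX_RETRIES times go to the DLQ. */
    static List<String> process(ArrayDeque<Message> queue,
                                Predicate<String> handler,
                                List<String> dlq) {
        List<String> processed = new ArrayList<>();
        while (!queue.isEmpty()) {
            Message m = queue.poll();
            if (handler.test(m.body())) {
                processed.add(m.body());                        // success path
            } else if (m.attempts() + 1 >= MAX_RETRIES) {
                dlq.add(m.body());                              // quarantine: exit path
            } else {
                queue.add(new Message(m.body(), m.attempts() + 1)); // bounded retry
            }
        }
        return processed;
    }

    public static void main(String[] args) {
        ArrayDeque<Message> queue = new ArrayDeque<>(List.of(
                new Message("ok-1", 0), new Message("poison", 0), new Message("ok-2", 0)));
        List<String> dlq = new ArrayList<>();
        List<String> done = process(queue, body -> !body.equals("poison"), dlq);
        System.out.println("processed=" + done + " dlq=" + dlq);
    }
}
```

The key design point is the `else if`: without that branch, the poison message loops forever, which is exactly the failure mode described above.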
The N+1 Query Problem — A Silent Performance Killer

In one of my recent backend discussions, we revisited a classic issue that often goes unnoticed during development but can severely impact performance in production: the N+1 Query Problem.

What is the N+1 Problem?
It occurs when your application executes:
- 1 query to fetch a list of records (N items)
- then N additional queries to fetch related data for each record
Total = 1 + N queries

Example Scenario:
You fetch a list of 100 users, and for each user, you fetch their orders separately. That results in 101 database queries instead of just 1 or 2 optimized queries.

Why is it Dangerous?
1. Increased database load
2. Slower response time
3. Poor scalability under high traffic
4. Hard to detect in small datasets, but disastrous at scale

How to Overcome It?
1. Use JOIN FETCH (eager loading): fetch related entities in a single query using JOINs.
2. Batch Fetching: load related data in chunks instead of one-by-one queries.
3. Entity Graphs (JPA): define which relationships should be fetched together dynamically.
4. Use DTO Projections: fetch only required fields instead of entire objects.
5. Caching Strategy: leverage the second-level cache to reduce repeated DB hits.
6. Monitor SQL Logs: always keep an eye on generated queries during development.

Pro Tip: the N+1 problem is not a bug — it’s a design inefficiency. It often comes from default lazy loading behavior in ORMs like Hibernate.

Interview Insight: a good engineer doesn’t just make code work — they make it scale efficiently.

#Java #SpringBoot #Hibernate #BackendDevelopment #PerformanceOptimization #Microservices #InterviewPrep
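Batch fetching (point 2) amounts to turning N per-row lookups into roughly N / batchSize IN queries. A small, framework-free sketch of that arithmetic — Hibernate's @BatchSize does this internally; the partition helper below is purely illustrative:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchFetchDemo {
    /** Splits ids into IN-clause sized chunks; each chunk costs one query. */
    static List<List<Long>> partition(List<Long> ids, int batchSize) {
        List<List<Long>> chunks = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += batchSize) {
            chunks.add(ids.subList(i, Math.min(i + batchSize, ids.size())));
        }
        return chunks;
    }

    static int queriesNeeded(int rows, int batchSize) {
        List<Long> ids = new ArrayList<>();
        for (long i = 0; i < rows; i++) ids.add(i);
        // 1 query for the parent rows + one IN query per chunk of children.
        return 1 + partition(ids, batchSize).size();
    }

    public static void main(String[] args) {
        System.out.println(queriesNeeded(100, 1));   // 101: the classic N+1
        System.out.println(queriesNeeded(100, 50));  // 3: 1 parent + 2 batched IN queries
    }
}
```

A batch size of 1 degenerates back into the classic N+1, which is why the batch size is the knob worth tuning.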
🚀 Excited to share something I’ve been working on!

I’ve built a production-ready logging library for Python designed to be powerful, flexible, and super easy to use.

🔧 Key Highlights:
- ⚡ Zero/low configuration setup
- 🧩 Fully configurable via YAML, environment variables, or function parameters
- 🎨 Clean & structured logging (console + file)
- 🔄 Log streaming support for real-time processing
- 🔐 Built-in sensitive data masking
- 📦 Supports Kafka streaming with Avro serialization

💡 Why Avro? Avro enables compact binary serialization, ensures schema evolution, and provides high performance for streaming pipelines, making logs more efficient and scalable in distributed systems.

Whether you're building microservices or large-scale systems, this library is designed to fit seamlessly into production environments.

🔗 Check it out here: https://lnkd.in/g2YzGdgd
🔗 Source Code: https://lnkd.in/gJZhFJP9

Would love to hear your feedback and suggestions!

#Python #Logging #BackendDevelopment #Kafka #DistributedSystems #OpenSource #SoftwareEngineering
EAGER fetching is silently killing your performance.

This one annotation can slow down your entire application. And most developers don’t realize it.

During a code scan, I found this:

@ManyToMany(fetch = FetchType.EAGER)
private Set<Specialty> specialties;

Looks harmless.

🚨 What’s the problem?
👉 FetchType.EAGER forces Hibernate to:
- always load the relation
- perform JOINs automatically
- fetch the entire collection
👉 even when you don’t need it

💥 Real impact
Imagine loading 100 Vet entities, each with multiple specialties.
👉 Hibernate will load vets + JOIN specialties every time, even for endpoints that don’t use specialties.

📊 What we typically see
- unnecessary JOIN queries
- increased response time
- larger result sets
- higher memory usage
👉 all caused by one annotation

⚠️ Why this is dangerous
- invisible in code reviews
- works fine with small datasets
- explodes under real production data

✅ Fix

@ManyToMany(fetch = FetchType.LAZY)
private Set<Specialty> specialties;

👉 and load only when needed:

@EntityGraph(attributePaths = {"specialties"})
Optional<Vet> findById(Long id);

🧠 Takeaway
EAGER loading is convenient. Until your data grows.

🔍 Bonus
I built a tool that detects this automatically: 👉 https://joptimize.io
It highlights:
- bad fetch strategies
- N+1 queries
- hidden performance bottlenecks

Are you sure your entities aren’t loading more data than needed?

#JavaDev #SpringBoot #Hibernate #JavaPerformance #Backend #SoftwareEngineering
Day 26. I stopped using @Data in my JPA entities.

Not because it doesn't work. Because it was breaking things I didn’t understand.

I used to write this:

// ❌ Looks clean — hides real problems
@Data
@Entity
public class User {
    @Id
    private Long id;

    @OneToMany(mappedBy = "user")
    private List<Order> orders;
}

Looks clean. Less code. Everything auto-generated. Until it didn’t.

Here’s what actually happens:
→ toString() triggers lazy loading
→ Infinite recursion in bidirectional relationships
→ Unexpected database queries
→ Hard-to-debug logs

That’s when it clicked. @Data is not made for entities. It generates:
→ equals()
→ hashCode()
→ toString()
And those methods don’t play well with JPA proxies and relationships.

So I changed it. (see implementation below 👇)

What I learned:
→ Lombok is powerful — but not always safe
→ Entities are not simple POJOs
→ Generated methods can silently break your system

The hard truth:
→ @Data works in tutorials
→ It fails in real systems
→ Most developers don’t notice until production

Writing less code is easy. Writing safe code is what makes you a backend developer.

Are you still using @Data in entities? 👇

#SpringBoot #Java #Hibernate #BackendDevelopment #CleanCode #JavaDeveloper
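The equals()/hashCode() hazard is reproducible without Lombok or JPA. The hand-written methods below stand in for what @Data generates (every field participates), and show a HashSet losing track of an entity the moment the database assigns its id:

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

public class EntityHashDemo {
    static class User {
        Long id;               // assigned by the database on persist
        String email;

        User(String email) { this.email = email; }

        // What @Data effectively generates: all fields participate.
        @Override public boolean equals(Object o) {
            return o instanceof User u
                    && Objects.equals(id, u.id)
                    && Objects.equals(email, u.email);
        }
        @Override public int hashCode() {
            return Objects.hash(id, email);
        }
    }

    static boolean lostAfterIdAssigned() {
        Set<User> seen = new HashSet<>();
        User u = new User("a@b.c");
        seen.add(u);               // hashed while id == null
        u.id = 42L;                // "persist" assigns the id; the hash changes
        return !seen.contains(u);  // the set can no longer find its own element
    }

    public static void main(String[] args) {
        System.out.println("lost after persist: " + lostAfterIdAssigned());
    }
}
```

This is why common advice is to base entity equality on the identifier only (or on a stable business key) rather than on every field, and to exclude lazy collections from toString().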