🚀 Day 16/100: Spring Boot From Zero to Production

Topic: Custom Logging

We covered the basics in the last post. Now let's talk about production-grade custom logging. In production, logs aren't for humans; they are for log aggregators like ELK, Splunk, or Datadog.

Structured Logging (JSON): Plain-text logs are hard to search. Spring Boot (3.4+) now supports structured logging out of the box.
-> JSON lets you filter by specific fields (e.g., userId or traceId) without complex regex.
-> Simply set logging.structured.format.console=json in your properties. No extra libraries required!

Custom XML Configuration: When you need log rotation or different patterns for different environments, use logback-spring.xml.
-> Use <springProfile name="prod"> to keep your production logs concise while dev stays verbose.
-> Send logs to the console, files, and a remote socket simultaneously.

Contextual Logging (MDC): Ever tried to find the logs for one specific user request in a sea of data? Mapped Diagnostic Context (MDC) is your best friend.
-> Store a correlation ID in the MDC at the start of a request.
-> Every log line triggered by that request will automatically include that ID, making debugging a breeze.

Performance matters: in high-traffic apps, logging can become a bottleneck.
-> Use an AsyncAppender in your Logback config. It moves logging work to a separate thread so your main logic stays fast.
-> Avoid string concatenation: use placeholders like log.info("User {} logged in", username) so the message is only built when the log level is actually enabled.

Feel free to add anything in the comments below.

#Java #SpringBoot #SoftwareDevelopment #100DaysOfCode #Backend
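To make the XML side concrete, here is a minimal logback-spring.xml sketch combining the profile switch, an MDC field, and an async appender as described above. The correlationId key, queue size, and pattern are illustrative choices, not requirements:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <!-- Console pattern includes the MDC field via %X{correlationId} -->
    <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} [%X{correlationId}] - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- Async wrapper: hands log events to a background thread -->
    <appender name="ASYNC" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="CONSOLE"/>
        <queueSize>512</queueSize>
    </appender>

    <!-- Concise in prod, verbose everywhere else -->
    <springProfile name="prod">
        <root level="INFO">
            <appender-ref ref="ASYNC"/>
        </root>
    </springProfile>
    <springProfile name="!prod">
        <root level="DEBUG">
            <appender-ref ref="CONSOLE"/>
        </root>
    </springProfile>
</configuration>
```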
Spring Boot Custom Logging for Production with ELK and Logback
Lately, my teammates and I have been diving deep into the inner workings of operating systems: process states, concurrency, and thread scheduling. Instead of just reading about these concepts, we decided to put the theory into practice by building a system that actually relies on them.

I'm incredibly proud to share that our team just finished engineering a fault-tolerant Distributed File System (DFS) from scratch using Java! 🚀

Instead of relying on heavy abstractions or frameworks like Spring Boot, we wanted to understand the raw mechanics of how enterprise storage systems (like HDFS or AWS S3) manage data, network traffic, and hardware failures.

Here is what we built under the hood:

⚡ Custom TCP Protocol: Bypassed REST entirely for internal node communication, using raw TCP sockets for low-latency binary streaming.

🧠 Concurrent Memory Safety: Designed the Master Node around a ConcurrentHashMap and thread pools to handle asynchronous requests with constant-time lookups and no shared-state corruption.

🔄 Auto-Recovery & Fault Tolerance: Engineered a replication algorithm (factor of 3) with background heartbeat daemons. If a Data Node process is terminated mid-operation, the Master detects the failure and self-heals the download using backup replicas.

📊 Real-Time Visual Dashboard: Built a decoupled, asynchronous JavaScript/HTML/CSS frontend to map file chunks and monitor live node health.

Building this collaboratively forced us to navigate complex systems engineering challenges together, from breaking network socket buffer deadlocks to managing disk I/O with Java NIO. It was an incredible way to bridge the gap between OS theory and real-world distributed architecture, and I couldn't have asked for a better team to build this with! 🤝

A huge shoutout to Harkeerat Singh and Niharika Berry for the late-night debugging sessions and brilliant code contributions.
If you want to see the code or run a "Chaos Monkey" test on our cluster yourself, check out the repository here: https://lnkd.in/g7pJP7Zf #SoftwareEngineering #Java #DistributedSystems #ComputerScience #Networking #BackendEngineering #Teamwork #SystemsDesign
I've been spending a lot of time recently looking under the hood of operating systems: process states, concurrency, and thread scheduling. Instead of just reading about these concepts, I wanted to put the theory into practice by building a system that actually relies on them.

I just finished engineering a fault-tolerant Distributed File System (DFS) from scratch using Java! 🚀

Instead of relying on heavy abstractions or frameworks like Spring Boot, I wanted to understand the raw mechanics of how enterprise storage systems (like HDFS or AWS S3) manage data, network traffic, and hardware failures.

Here is what I built under the hood:

⚡ Custom TCP Protocol: Bypassed REST entirely for internal node communication, using raw TCP sockets for low-latency binary streaming.

🧠 Concurrent Memory Safety: Designed the Master Node around a ConcurrentHashMap and thread pools to handle asynchronous requests with constant-time lookups and no shared-state corruption.

🔄 Auto-Recovery & Fault Tolerance: Engineered a replication algorithm (factor of 3) with background heartbeat daemons. If a Data Node process is terminated mid-operation, the Master detects the failure and self-heals the download using backup replicas.

📊 Real-Time Visual Dashboard: Built a decoupled, asynchronous JavaScript/HTML/CSS frontend to map file chunks and monitor live node health.

Building this forced me to navigate complex systems engineering challenges, from breaking network socket buffer deadlocks to managing disk I/O with Java NIO. It was an incredible way to bridge the gap between OS theory and real-world distributed architecture.

If you want to see the code or run a "Chaos Monkey" test on the cluster yourself, check out the repository here: https://lnkd.in/gdsA2Hwm

#SoftwareEngineering #Java #DistributedSystems #ComputerScience #Networking #WebDevelopment #BackendEngineering
🚀 Day 6 – HashMap vs ConcurrentHashMap (When Thread Safety Matters)

Today I explored the difference between HashMap and ConcurrentHashMap.

We often use HashMap like this:

Map<String, Integer> map = new HashMap<>();

👉 But here's the catch: HashMap is not thread-safe.

In a multi-threaded environment:
- Multiple threads modifying it can lead to data inconsistency
- It can even cause infinite loops during resizing (rare but critical)

So what's the alternative?

Map<String, Integer> map = new ConcurrentHashMap<>();

👉 ConcurrentHashMap is designed for safe concurrent access.

💡 Key differences:
✔ HashMap
- No synchronization
- Faster in single-threaded scenarios
✔ ConcurrentHashMap
- Uses fine-grained locking (segment locks before Java 8; CAS plus per-bucket synchronization since Java 8)
- Allows multiple threads to read and write safely

⚠️ Insight: Instead of locking the whole map, it locks only the part being updated, which gives much better performance than a fully synchronized map.

💡 Real-world use: Whenever multiple threads access shared data (caching, session data), ConcurrentHashMap is the safer choice.

#Java #BackendDevelopment #Concurrency #JavaInternals #LearningInPublic
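A tiny self-contained demo of the difference: eight threads hammer one counter through ConcurrentHashMap.merge(), which is atomic, so the total is exact. With a plain HashMap the same code could lose updates or corrupt the map.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentMapDemo {

    static int run() throws InterruptedException {
        Map<String, Integer> hits = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(8);

        // 8 threads each bump the same counter 10,000 times.
        for (int t = 0; t < 8; t++) {
            pool.submit(() -> {
                for (int i = 0; i < 10_000; i++) {
                    // merge() is atomic on ConcurrentHashMap; no external lock needed
                    hits.merge("page", 1, Integer::sum);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return hits.get("page");
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // 80000 every time; a HashMap would give an unreliable count
    }
}
```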
🚀 Deep Internal Flow of a REST API Call in Spring Boot

🧭 1. Entry Point: The Gatekeeper
DispatcherServlet is the front controller. Every HTTP request passes through this single door.
FLOW: Client → Tomcat (Embedded Server) → DispatcherServlet

🗺️ 2. Handler Mapping: Finding the Target
DispatcherServlet asks: "Who can handle this request?" It consults RequestMappingHandlerMapping, which scans:
* @RestController
* @RequestMapping
FLOW: DispatcherServlet → HandlerMapping → Controller Method Found

⚙️ 3. Handler Adapter: Executing the Method
Once the method is found, Spring doesn't call it directly. It uses RequestMappingHandlerAdapter. Why? Because it handles:
* Parameter binding
* Validation
* Conversion
FLOW: HandlerMapping → HandlerAdapter → Controller Method Invocation

🧭 4. Request Flow (Forward):
Controller → Service Layer (business logic) → Repository Layer → Database

🔄 5. Response Processing: The Return Journey
The response travels back upward:
Repository → Service → Controller → DispatcherServlet → Tomcat → Client

⚡ Hidden Magic (Senior-Level Insights)
🧵 Thread Handling: each request runs on a separate thread from Tomcat's pool
🔒 Transaction Management: managed via @Transactional, proxy-based AOP behind the scenes
🎯 Dependency Injection: beans wired by the Spring IoC container
🧠 AOP (Cross-Cutting): logging, security, and transactions wrapped around methods
⚡ Performance Layers: caching (Spring Cache), connection pooling (HikariCP)

🧠 The Real Insight
As a junior I thought: 👉 "The API call hits the controller."
As a senior I observe: 👉 "A chain of abstractions collaborates through well-defined contracts, orchestrated by DispatcherServlet."

#Java #SpringBoot #RestApi #FullStack #Developer #AI #ML #Foundations #Security
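Spring's real classes are far richer, but the front-controller pattern itself fits in a few lines. This toy sketch (my own names, not Spring code) shows one entry point consulting a handler mapping and delegating, loosely mirroring DispatcherServlet → HandlerMapping → HandlerAdapter:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy front controller: one entry point, a registry of handlers, one dispatch path.
public class MiniDispatcher {
    private final Map<String, Function<String, String>> handlerMapping = new HashMap<>();

    void register(String path, Function<String, String> handler) {
        handlerMapping.put(path, handler);
    }

    String dispatch(String path, String body) {
        // "Who can handle this request?"
        Function<String, String> handler = handlerMapping.get(path);
        if (handler == null) return "404 Not Found";
        // The "adapter" step: actually invoking the matched handler
        return handler.apply(body);
    }

    public static void main(String[] args) {
        MiniDispatcher d = new MiniDispatcher();
        d.register("/orders", id -> "order " + id + " created");
        System.out.println(d.dispatch("/orders", "42"));  // order 42 created
        System.out.println(d.dispatch("/missing", ""));   // 404 Not Found
    }
}
```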
How a Simple Query Optimization Improved API Performance by 60%

We often jump to scaling systems with caching, load balancers, etc. But sometimes the bottleneck is much simpler: bad queries.

In one of my projects, API response time was consistently high.

🔍 Root cause:
- Complex joins
- Missing indexes
- Inefficient filtering

💡 What we did:
✅ Added proper indexes on frequently queried columns
✅ Refactored heavy joins
✅ Reduced unnecessary data fetching

🔥 Result:
👉 ~60% reduction in API response time (no infrastructure changes required)

⚙️ Example:
Before: full table scan → slow
After: indexed lookup → fast

📌 Lesson: Before scaling your system, make sure your database is not the bottleneck.

#Java #SpringBoot #Microservices #SystemDesign #BackendEngineering #SoftwareArchitecture
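The before/after can be mimicked in plain Java: a full scan walks every row, while a hash-based index answers in one lookup, which is essentially what a database index buys you. The Row shape and data below are made up for illustration:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class IndexDemo {
    record Row(long id, String email) {}

    // "Full table scan": walk every row until a match.
    static Row scanByEmail(List<Row> table, String email) {
        for (Row r : table) {
            if (r.email().equals(email)) return r;
        }
        return null;
    }

    // "Indexed lookup": build the index once, then each query is a single hash probe.
    static Map<String, Row> buildEmailIndex(List<Row> table) {
        Map<String, Row> idx = new HashMap<>();
        for (Row r : table) idx.put(r.email(), r);
        return idx;
    }

    public static void main(String[] args) {
        List<Row> table = new ArrayList<>();
        for (long i = 0; i < 100_000; i++) table.add(new Row(i, "user" + i + "@example.com"));

        Row viaScan = scanByEmail(table, "user99999@example.com");          // touches every row
        Row viaIndex = buildEmailIndex(table).get("user99999@example.com"); // one hash lookup
        System.out.println(viaScan.id() + " " + viaIndex.id());
    }
}
```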
Had one of those "everything looks fine… but it's not" production moments recently.

An API that usually responds in ~120ms suddenly started taking 2–3 seconds. No errors. No crashes. Just… slow. At first glance, nothing obvious: CPU was okay, memory wasn't maxed out, the service was up. But digging deeper turned into a good reminder of how real-world slowness actually happens 👇

Started with threads. The Tomcat thread pool was almost full. Not completely exhausted, but close enough that new requests were waiting. So the service wasn't doing more work; it was just taking longer to start doing the work.

Then the DB. One query that used to take ~20ms was now taking ~150ms. Why? Data had grown, and the index wasn't helping the way we expected. And of course… there was a hidden N+1 query in one flow. It didn't matter in testing. It hurt in production.

Then downstream calls. This API was calling 2 other services. Individually fast (~50–80ms), but together they added up. And when one of them slowed slightly, everything stacked. No timeout issues. Just latency compounding quietly.

The interesting part? None of these were "major bugs". It was:
– a slightly slower DB
– slightly busy threads
– a slightly delayed downstream service
All happening together.

And that's when it hits you: we don't usually design systems to fail; we design them assuming things will stay fast. But in reality, systems degrade, they don't break.

What helped: we stopped guessing and looked at:
– thread metrics
– DB query timings
– per-service latency
We fixed the biggest contributor first (the DB query and fetch strategy), and suddenly everything else started looking normal again.

Big takeaway for me: performance issues in microservices are rarely dramatic. They're gradual, layered, and easy to miss until users feel them. And debugging them is less about "what's broken?" and more about "where is the time actually going?"

#Java #SpringBoot #Microservices #ProductionIssues #BackendEngineering #SystemDesign
Are you still loading everything into memory?

Let me ask you something. How many times have you seen this in a codebase?

repository.findAll() ☠️

Looks harmless, right? Until it isn't. That single line can:
• Pull millions of records into memory
• Fill your Hibernate persistence context
• Trigger massive GC pressure
• And eventually… crash your application

This is not a performance issue. This is an architectural flaw.

Most systems don't fail because of complexity. They fail because of unbounded data processing: not controlling how much data the system loads, processes, or returns. And memory has limits.

So what's the right approach? Stop thinking "How do I get all the data?" and start thinking "How do I guarantee I NEVER load too much?"

The solution is simple (but often ignored):
• Use pagination for APIs
• Use streaming for large exports
• Process data in controlled chunks
• Return DTOs instead of heavy entities
• Set hard limits

Golden rule: never load, process, or serialize everything at once. Always paginate, stream, or limit.

Most production outages I've seen had one thing in common: someone assumed the data would always be small. It never is.

#SoftwareArchitecture #Java #SpringBoot #DistributedSystems #SoftwareEngineering #DesignPatterns
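A minimal sketch of the "controlled chunks" idea in plain Java (the page size and data are arbitrary): each page is bounded, so memory use stays flat no matter how large the dataset grows.

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedProcessing {

    // Split a dataset into fixed-size pages instead of handling it all at once.
    static List<List<Integer>> paginate(List<Integer> all, int pageSize) {
        List<List<Integer>> pages = new ArrayList<>();
        for (int from = 0; from < all.size(); from += pageSize) {
            int to = Math.min(from + pageSize, all.size());
            pages.add(all.subList(from, to)); // each page is bounded in size
        }
        return pages;
    }

    public static void main(String[] args) {
        List<Integer> data = new ArrayList<>();
        for (int i = 0; i < 10; i++) data.add(i);

        List<List<Integer>> pages = paginate(data, 4);
        System.out.println(pages.size());  // 3 pages: [0..3], [4..7], [8..9]
        System.out.println(pages.get(2));  // [8, 9]
    }
}
```

In a real Spring Data repository the same idea is what Pageable / Slice give you: the caller asks for one bounded page at a time instead of findAll().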
Day 10/30: If you can't trace a failed request across services in under 2 minutes, your logging is broken.

Most teams realize this during an incident. At 2 AM. With leadership asking, "What happened?"

A user reports: "My order failed." You check:
Order Service → request looks fine
Payment Service → no record
API Gateway → thousands of requests, impossible to isolate one

45 minutes later, you're still grepping logs across 5 services. That's not a debugging problem. That's a logging architecture problem.

3 things every production log must have:

1️⃣ Structure: log JSON, not sentences
Human-readable logs don't scale. Machine-queryable logs do. Structured logs let you filter by orderId, userId, traceId, amount, or latency, instantly. When you have millions of log lines, you don't read. You query.

2️⃣ Correlation: one traceId everywhere
Without a correlation ID, the gateway logs tell one story, the order logs another, and the payment logs a third. With a single traceId, they become one timeline. One query should tell you when the request entered, which service failed, why, and at which millisecond. If you need multiple terminal windows and manual grep… you've already lost.

3️⃣ Centralization: all logs, one place
Logs on individual servers are effectively invisible. Ship everything to a central system: ELK, Datadog, Loki, CloudWatch, pick your poison.
Key rules:
✅ Log to stdout
✅ Let your platform collect and forward
❌ Don't SSH into servers to read files
If logs aren't searchable centrally, they don't exist during incidents.

What to log (and what not to):
✅ Request entry and exit (with duration)
✅ Every external call
✅ Every exception with full context
✅ Every state transition (order created → payment started → failed)
❌ Tight loops
❌ Sensitive data (passwords, cards, tokens)
❌ DEBUG by default in production
INFO + structured fields + traceId beats verbose noise every time.

The rule that covers everything: a developer who has never seen your system should be able to take a traceId from a customer complaint and reconstruct exactly what happened, across all services, without touching a single server. If that's not true today, your logging isn't done yet.

#microservices #springboot #java #backend #softwareengineering
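A stripped-down illustration of the correlation idea, using a ThreadLocal as a stand-in for SLF4J's MDC (the JSON shape and field names are my own): every log line produced on a request's thread automatically carries the same traceId.

```java
import java.util.UUID;

public class TraceContext {
    // Minimal stand-in for MDC: a per-thread correlation id.
    private static final ThreadLocal<String> TRACE_ID = new ThreadLocal<>();

    // Call once at the edge of the system (e.g., in a servlet filter).
    static void startRequest() {
        TRACE_ID.set(UUID.randomUUID().toString());
    }

    // Every line emitted on this thread carries the same traceId, as structured JSON.
    static String log(String level, String message) {
        return String.format("{\"level\":\"%s\",\"traceId\":\"%s\",\"message\":\"%s\"}",
                level, TRACE_ID.get(), message);
    }

    public static void main(String[] args) {
        startRequest();
        System.out.println(log("INFO", "order created"));
        System.out.println(log("ERROR", "payment failed"));
        // Both lines share one traceId: a single query on that id reconstructs the request.
    }
}
```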
Most transaction bugs in Spring Boot are not SQL bugs; they're transaction boundary bugs.

Today's focus is a deep dive into @Transactional: propagation, isolation, and rollback rules. If you use the default settings everywhere, you may accidentally create hidden data inconsistencies or unexpected commits.

Example:

@Service
public class PaymentService {

    @Transactional(propagation = Propagation.REQUIRED,
                   isolation = Isolation.READ_COMMITTED,
                   rollbackFor = Exception.class)
    public void processPayment(Order order) {
        paymentRepository.save(new Payment(order.getId(), order.getTotal()));
        inventoryService.reserve(order.getItems());
    }
}

Key idea: REQUIRED joins an existing transaction or starts a new one; REQUIRES_NEW suspends the current one and creates a separate transaction; isolation controls the visibility of concurrent changes. By default, rollback happens only for unchecked exceptions, so checked exceptions often need an explicit rollbackFor.

Treat @Transactional as an architectural decision, not just an annotation.

#Java #SpringBoot #BackendDevelopment