🚀 Day 2 — Debugging > Coding

Today was less about writing code and more about understanding how things actually work.

🧠 DSA:
- Started Sliding Window pattern
- Practiced reducing brute force → O(n) using window techniques

⚙️ Backend (Spring + Caching):
- Debugged a tricky issue with caching
- Learned the difference between in-memory cache vs Redis cache
- Faced a ClassCastException due to a cache schema mismatch (String vs Object)
- Understood how Spring caching works internally (proxy-based)

🛠️ Project (URL Shortener):
- Fixed the caching layer to correctly store the shortCode → Url mapping
- Ensured proper redirect behavior using cached data
- Improved overall flow and reliability

📌 Key learning: It’s easy to write code when things work. Real growth happens when things break and you debug them.

#Java #SpringBoot #BackendDevelopment #DSA #LearningInPublic
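As a companion to the sliding-window point above, here is a minimal sketch of the brute force → O(n) reduction. The task (maximum sum of a window of size k) and all names are illustrative, not taken from the post:

```java
public class SlidingWindowDemo {
    // O(n*k) brute force: recompute each window's sum from scratch
    static int maxSumBrute(int[] a, int k) {
        int best = Integer.MIN_VALUE;
        for (int i = 0; i + k <= a.length; i++) {
            int sum = 0;
            for (int j = i; j < i + k; j++) sum += a[j];
            best = Math.max(best, sum);
        }
        return best;
    }

    // O(n) sliding window: slide by adding the incoming element
    // and subtracting the one that falls out of the window
    static int maxSumWindow(int[] a, int k) {
        int sum = 0;
        for (int i = 0; i < k; i++) sum += a[i];
        int best = sum;
        for (int i = k; i < a.length; i++) {
            sum += a[i] - a[i - k];
            best = Math.max(best, sum);
        }
        return best;
    }

    public static void main(String[] args) {
        int[] a = {2, 1, 5, 1, 3, 2};
        System.out.println(maxSumBrute(a, 3));  // 9 (window 5+1+3)
        System.out.println(maxSumWindow(a, 3)); // 9
    }
}
```

Both methods agree on every input; only the work per window changes.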
Debugging and Caching with Java and Spring Boot
More Relevant Posts
🚀 Backend Learning Update – Spring REST + DSA Practice

Today, I’ve been focusing on both DSA problem solving and Spring RESTful services, strengthening both logic and backend fundamentals.

🧠 DSA Practice Highlights:
✔️ Second smallest element (one-pass optimization)
✔️ Anagram check (sorting & frequency approach)
✔️ Move zeros to end (in-place logic)
✔️ First non-repeating character (HashMap)
✔️ Duplicate detection (Set)
✔️ Missing number (mathematical approach)

These problems helped me improve my understanding of time complexity and efficient coding patterns.

💡 Spring REST Learnings:
✔️ Advantages of REST over traditional Spring MVC
✔️ JSON ↔ Java object conversion using Jackson
✔️ Hands-on manual conversion to understand internals
✔️ Introduction to HATEOAS with a demo application
✔️ Usage of ResponseEntity for better API responses & status control

🧪 Testing & Best Practices:
✔️ Started unit testing using JUnit 5 & Mockito
✔️ Learned the importance of proper status codes in APIs

✨ This phase really helped me connect problem-solving with real-world backend practices. Step by step, building towards writing scalable and production-ready applications 🚀

#Java #SpringBoot #RESTAPI #DSA #BackendDevelopment #JUnit #Mockito #LearningInPublic #SoftwareEngineering
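One of the DSA problems listed above — first non-repeating character with a HashMap — can be sketched in a few lines. This is one common approach (a LinkedHashMap to keep insertion order), not necessarily the exact solution from the post:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FirstUniqueChar {
    // Returns the first non-repeating character, or '\0' if none exists
    static char firstNonRepeating(String s) {
        // LinkedHashMap preserves insertion order, so the first entry
        // with count == 1 is the answer
        Map<Character, Integer> counts = new LinkedHashMap<>();
        for (char c : s.toCharArray()) counts.merge(c, 1, Integer::sum);
        for (Map.Entry<Character, Integer> e : counts.entrySet())
            if (e.getValue() == 1) return e.getKey();
        return '\0';
    }

    public static void main(String[] args) {
        System.out.println(firstNonRepeating("swiss")); // w
        System.out.println((int) firstNonRepeating("aabb")); // 0 (no unique char)
    }
}
```

Two passes over the input give O(n) time, versus the O(n²) of checking every character against every other.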
🚀 Day 1 — Focused Learning & Building

Today I worked on strengthening fundamentals across problem-solving and backend systems.

🧠 DSA:
- Arrays & hashing problems (Two Sum, Anagram, Kadane’s Algorithm, Duplicate Value, Best Time To Buy And Sell)
- Practiced optimizing from brute force → O(n) approaches

⚙️ Backend (Spring Internals):
- Explored the IoC container, bean lifecycle, and dependency injection
- Looked into how Spring manages beans and uses proxies for AOP

🛠️ Project — started building a URL shortener service:
- Base62 encoding for short codes
- Redis caching with TTL
- Expiry handling
- REST APIs for create + redirect

📌 Key learning: Consistency + depth > random learning

#Java #SpringBoot #BackendEngineering #DSA
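The Base62 encoding mentioned for the URL shortener can be sketched as below. This is a generic sketch of the technique, not the project's actual code, and the alphabet ordering is an assumption:

```java
public class Base62 {
    private static final String ALPHABET =
        "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

    // Encode a positive numeric ID (e.g. a database primary key) into a short code
    static String encode(long id) {
        if (id == 0) return "0";
        StringBuilder sb = new StringBuilder();
        while (id > 0) {
            sb.append(ALPHABET.charAt((int) (id % 62))); // least-significant digit first
            id /= 62;
        }
        return sb.reverse().toString();
    }

    // Decode a short code back to the numeric ID
    static long decode(String code) {
        long id = 0;
        for (char c : code.toCharArray())
            id = id * 62 + ALPHABET.indexOf(c);
        return id;
    }

    public static void main(String[] args) {
        String code = Base62.encode(125L);
        System.out.println(code);                 // 21
        System.out.println(Base62.decode(code));  // 125
    }
}
```

The appeal of Base62 here is that short codes stay URL-safe (no `+`, `/`, or `=` as in Base64) while still packing large IDs into few characters.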
A simple .java file triggers a full system:
👉 Compile → Bytecode (.class)
👉 Load → Classes into memory
👉 Link → Verify, prepare, resolve
👉 Initialize → Static data execution
👉 Execute → Interpreter + JIT

Behind the scenes 🧠
• Heap → Stores objects
• Stack → Handles method calls
• Method Area → Class metadata
• GC → Automatically cleans memory

⚡ The real power? The JVM decides:
• When to optimize code (JIT)
• How memory is managed
• How performance scales

That’s why Java isn’t just a language — it’s a runtime ecosystem.

💡 My takeaway: If you understand the JVM, you stop writing “just code” and start building efficient systems.

Right now, I’m focusing on: Backend + System Design + Cloud ☁️
If you’re learning the same, let’s connect 🤝

#Java #JVM #Backend #SystemDesign #Programming #LearnInPublic #DeepakKumar
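You can peek at the heap described above from inside a running program via `Runtime`. A tiny illustrative sketch (the method name and the amount allocated are arbitrary choices, and the numbers reported are approximate since the GC may run at any time):

```java
public class HeapPeek {
    // Allocates `mb` megabytes on the heap and returns roughly
    // how many bytes of used heap that added
    static long allocateAndMeasure(int mb) {
        Runtime rt = Runtime.getRuntime();
        long before = rt.totalMemory() - rt.freeMemory(); // approx. used heap

        byte[][] blocks = new byte[mb][];
        for (int i = 0; i < mb; i++) blocks[i] = new byte[1024 * 1024]; // heap allocation

        long after = rt.totalMemory() - rt.freeMemory();
        // `blocks` is still reachable here, so the GC cannot reclaim it yet;
        // once this method returns, it becomes garbage and is collected automatically
        return after - before;
    }

    public static void main(String[] args) {
        System.out.printf("Used heap grew by roughly %d MB%n",
                allocateAndMeasure(64) / (1024 * 1024));
    }
}
```

Watching the same program under `-verbose:gc` makes the "GC cleans memory" step visible as well.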
# Day 1 of My Journey: Multithreading → Reactive Programming → WebFlux → Rate Limiter

If you want to truly understand Reactive Programming, you cannot skip Multithreading. So I decided to start from the basics instead of jumping directly into frameworks.

@ Phase 1: Multithreading (Current Focus)
Today, I started learning:
- What a thread is and why it’s expensive
- How blocking operations waste threads
- Thread pools & the Executor Framework
- Why the thread-per-request model fails under high load

@ Reality Check
Reactive programming is not magic. If you don’t understand how threads work, how thread pools get exhausted, and why blocking kills performance, you will never truly understand WebFlux.

@ Why I’m doing this
Because I want to understand:
- Why applications crash under high concurrency
- How modern systems handle millions of requests
- How to design scalable backend systems

@ What’s next?
After multithreading, I’ll move to:
- Reactive Programming (non-blocking mindset)
- WebFlux & the Event Loop Model
- Reactive Redis
- Building a production-level Rate Limiter

@ End Goal
To build high-performance systems that can handle massive traffic, avoid thread bottlenecks, and implement smart control using Rate Limiting.

I’ll share this journey step by step. If you're learning backend, scalability, or system design — follow along.

#Day1 #Multithreading #ReactiveProgramming #WebFlux #RateLimiter #Java #BackendDevelopment #LearningInPublic
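The thread pool / Executor Framework idea above can be sketched in a few lines: instead of one thread per task, a small fixed pool is reused. The task (summing squares) is a placeholder for any CPU-bound work:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadPoolDemo {
    // Submits one task per number to a fixed pool and sums the results
    static int sumOfSquares(int upTo) throws Exception {
        // 4 reusable threads instead of paying thread-creation cost per task
        // (the thread-per-request problem in miniature)
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Integer>> futures = new ArrayList<>();
        for (int i = 1; i <= upTo; i++) {
            final int n = i;
            futures.add(pool.submit(() -> n * n)); // queued, then run by a pooled thread
        }
        int sum = 0;
        for (Future<Integer> f : futures) sum += f.get(); // get() blocks until ready
        pool.shutdown();
        return sum;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumOfSquares(10)); // 385
    }
}
```

Note the `f.get()` calls: they block the caller. That blocking is exactly what reactive/WebFlux-style code tries to eliminate, which is why understanding this model first matters.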
🚀 Spring Boot Learning – @RestController vs @RequestMapping While building REST APIs in Spring Boot, I explored how requests are handled behind the scenes. 🔹 @RestController Used to create REST APIs. It combines @Controller + @ResponseBody and returns data directly (JSON/String). 🔹 @RequestMapping Used to map URLs to classes or methods and can handle different HTTP methods. 💡 Modern Approach (Shortcut Annotations) Instead of using @RequestMapping for everything, Spring provides: -> @GetMapping -> @PostMapping -> @PutMapping -> @DeleteMapping These make code cleaner and more readable. 📌 Key Insight: 👉 @RestController creates APIs 👉 Mapping annotations connect URLs to methods Sharing a simple graphical representation to make this concept easier to understand. 📊 #SpringBoot #Java #BackendDevelopment #RESTAPI #DependencyInjection #JavaDeveloper #LearningInPublic
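Conceptually, what @RequestMapping and its shortcut annotations build is a table from "HTTP method + path" to a handler method. A framework-free toy sketch of that idea (this is NOT Spring code — all names here are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class ToyDispatcher {
    // Maps "METHOD path" keys to handler functions — a rough analogue of
    // the routing table Spring builds from @GetMapping/@PostMapping etc.
    private final Map<String, Function<String, String>> routes = new HashMap<>();

    void register(String method, String path, Function<String, String> handler) {
        routes.put(method + " " + path, handler);
    }

    String handle(String method, String path, String body) {
        Function<String, String> h = routes.get(method + " " + path);
        return (h != null) ? h.apply(body) : "404 Not Found";
    }

    public static void main(String[] args) {
        ToyDispatcher app = new ToyDispatcher();
        app.register("GET", "/hello", body -> "Hello, world!");     // like @GetMapping("/hello")
        app.register("POST", "/echo", body -> "You sent: " + body); // like @PostMapping("/echo")

        System.out.println(app.handle("GET", "/hello", null));   // Hello, world!
        System.out.println(app.handle("POST", "/echo", "hi"));   // You sent: hi
        System.out.println(app.handle("GET", "/missing", null)); // 404 Not Found
    }
}
```

In real Spring the table is built by scanning annotations at startup, and @RestController additionally serializes each handler's return value (e.g. to JSON via Jackson) instead of returning it raw.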
Just shipped Phase 6 of my distributed messaging project — a C++ port of the log storage engine, rebuilt at bare metal.

The Java implementation (Phase 5) used FileChannel scatter-gather writes: one syscall per append, p50 at 4,432 ns after eliminating GC pauses with off-heap MemorySegment slabs. The question was simple: what's the irreducible cost once you remove the JVM entirely?

Result: 16.1 ns per 64-byte append. 3.70 GB/s throughput. That's ~275× faster at p50.

Not because Java is slow — Phase 5 Java was already allocation-free on the hot path. The difference is the I/O model. FileChannel crosses the kernel boundary on every write. mmap doesn't. The CPU never leaves userspace.

perf stat confirmed it: 68% backend-bound. The bottleneck is store bandwidth to L1D — the irreducible cost of sequential writes. No algorithmic waste to remove. valgrind --tool=massif confirmed zero heap allocation across 1,048,576 appends. The heap is flat from startup to shutdown.

What's under the hood:
- Lock-free SPSC ring buffer with acquire/release ordering, cache-line-aligned mmap-backed log segments with madvise(MADV_HUGEPAGE) for transparent huge pages
- Directory-scanning LogManager with power-of-2 index — zero syscalls on the hot path
- Compile-time hardware contracts via C++23 concepts (FitsCacheLine, IsHugePageAligned, IsPowerOfTwo)
- Factory pattern via std::expected — no exceptions, no heap on the error path
- 18 tests passing, Google Benchmark + Valgrind massif

All design decisions documented as ADRs. Code on GitHub.
→ https://lnkd.in/gifTNMSB

#LowLatency #CPlusPlus #HFT #SystemsProgramming #DistributedSystems #SoftwareEngineering #MemoryMappedIO #PerformanceEngineering
AI IDEs are quickly becoming the norm. Cursor, Windsurf, AWS Kiro, and Antigravity are the ones I am hearing about the most from my client calls. Four completely different ways of writing Java.

If you care about shipping code faster, this comparison is a great starting point 👇
https://lnkd.in/g8TW_Rxs
Building a strong backend career is like stacking the perfect burger 🍔 Choose your base language, add frameworks, databases, APIs, caching, testing, CI/CD, containerization, and architecture patterns — every layer matters. No shortcuts, just skills layered with consistency. Keep learning. Keep building. Keep scaling. 🚀 #BackendDevelopment #SoftwareEngineering #TechSkills #CareerGrowth #Programming #WebDevelopment #LinkedInLearning
🚀 Caching Strategies Made Simple!

Understanding caching is a game-changer for building fast and scalable applications. From Cache-Aside to LRU & LFU, each strategy plays a crucial role in optimizing performance and reducing load on databases.

I recently explored different caching techniques and summarized them into a clean visual for quick understanding. Whether you're building APIs, handling large-scale systems, or optimizing performance — these strategies are essential tools in your toolkit.

💡 Key takeaway: Choosing the right caching strategy depends on your read/write patterns, consistency needs, and system scale.

What’s your go-to caching strategy in real-world projects? 🤔

#WebDevelopment #FullStackDeveloper #Python #JavaScript #Java #SystemDesign #BackendDevelopment #SoftwareEngineering #Coding #TechLearning
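Of the strategies named above, LRU has a particularly compact sketch in Java: `LinkedHashMap` in access-order mode plus an eviction hook. This is one well-known idiom, shown here as an illustration:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: LinkedHashMap in access-order mode moves each
// accessed entry to the tail, so the head is always least recently used
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // true = order by access, not insertion
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after every put(); returning true evicts the LRU entry
        return size() > capacity;
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // touch "a", so "b" becomes least recently used
        cache.put("c", "3"); // capacity exceeded → evicts "b"
        System.out.println(cache.keySet()); // [a, c]
    }
}
```

LFU needs more bookkeeping (per-entry frequency counts), which is why LRU is usually the default choice when recency is a good proxy for future access.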
Day 15/60 🚀 Multithreading Models Explained (Simple & Clear)

This diagram shows how user threads (created by applications) are mapped to kernel threads (managed by the operating system). The way they are mapped defines the performance and behavior of a system.

💡 1. Many-to-One Model
👉 Multiple user threads → single kernel thread
✔ Fast and lightweight (managed in user space)
❌ If one thread blocks → entire process blocks
❌ No true parallelism (only one thread executes at a time)
➡️ Suitable for simple environments, but limited in performance

💡 2. One-to-One Model
👉 Each user thread → one kernel thread
✔ True parallelism (multiple threads run on multiple cores)
✔ Better responsiveness
❌ Higher overhead (more kernel resources required)
➡️ Used in most modern systems (like the Java threading model)

💡 3. Many-to-Many Model
👉 Multiple user threads ↔ multiple kernel threads
✔ Combines benefits of both models
✔ Efficient resource utilization
✔ Allows concurrency + scalability
❌ More complex to implement
➡️ Used in advanced systems for high performance

🔥 Key Insight
- User threads → managed by the application
- Kernel threads → managed by the OS
- Performance depends on how efficiently they are mapped

⚡ Simple Summary
- Many-to-One → Lightweight but limited
- One-to-One → Powerful but resource-heavy
- Many-to-Many → Balanced and scalable

📌 Why this matters
Understanding these models helps in:
✔ Designing scalable systems
✔ Writing efficient concurrent programs
✔ Optimizing performance in backend applications

#Java #Multithreading #Concurrency #OperatingSystems #Threading #BackendDevelopment #SoftwareEngineering #CoreJava #DistributedSystems #SystemDesign #Programming #TechConcepts #CodingJourney #DeveloperLife #LearnJava #InterviewPreparation #100DaysOfCode #CareerGrowth #WomenInTech #LinkedInLearning #CodeNewbie
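The one-to-one model is easy to see from Java: each platform `Thread` maps onto its own kernel thread, so two of them can run on separate cores. A small sketch splitting a sum across two threads (the workload is illustrative):

```java
public class OneToOneDemo {
    // Sums 1..n by splitting the range across two platform threads
    static long parallelSum(int n) throws InterruptedException {
        long[] partial = new long[2];

        // Each platform thread maps 1:1 onto a kernel thread,
        // so the two halves can execute truly in parallel
        Thread t1 = new Thread(() -> {
            for (int i = 1; i <= n / 2; i++) partial[0] += i;
        });
        Thread t2 = new Thread(() -> {
            for (int i = n / 2 + 1; i <= n; i++) partial[1] += i;
        });

        t1.start(); t2.start();
        t1.join(); t2.join(); // join() gives the happens-before edge we need
                              // to read partial[] safely from this thread
        return partial[0] + partial[1];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(parallelSum(1_000_000)); // 500000500000
    }
}
```

The "higher overhead" drawback from the post is also visible here: every `new Thread` costs a kernel thread and its stack, which is precisely what many-to-many schedulers (and Java's newer virtual threads) try to avoid.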