Added Redis and boom, the login API failed 😅

The logs showed: "JSON parse error: missing type id property '@class'" for the payload { "email": "test@gmail.com", "password": "123456" }.

After debugging, I learned something new. The Redis ObjectMapper I had registered as a bean:

@Bean
public ObjectMapper redisObjectMapper() {
    ObjectMapper mapper = new ObjectMapper();
    mapper.activateDefaultTyping(...);
    ...
}

became the application's default mapper, so every REST request was suddenly required to carry type info (lol):

{ "@class": "com.example.LoginRequestDTO", "email": "test@gmail.com", "password": "123456" }

Default typing embeds the class name in the JSON so Jackson knows which Java class to deserialize into when the target type is unknown. That is exactly what Redis needs, and exactly what a REST API doesn't.

The fix:
1. Removed @Bean from the mapper so it no longer overrides Spring's default.
2. Created a separate serializer used only for Redis:
   GenericJackson2JsonRedisSerializer serializer = new GenericJackson2JsonRedisSerializer();

Now MappingJackson2HttpMessageConverter handles my REST API like it should.

#day1 #SpringBoot #Java #BackendDevelopment #Redis #Microservices #APIDevelopment
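A minimal sketch of the resulting setup, assuming Spring Data Redis is on the classpath (the class and bean names are illustrative): the default-typing mapper lives only inside the Redis serializer, so Spring MVC keeps its own plain ObjectMapper.

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

@Configuration
public class RedisConfig {

    // Note: no @Bean ObjectMapper here. GenericJackson2JsonRedisSerializer
    // builds its own default-typing-enabled mapper internally, so the
    // @class metadata never leaks into the REST layer.
    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
        GenericJackson2JsonRedisSerializer serializer = new GenericJackson2JsonRedisSerializer();

        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);
        template.setKeySerializer(new StringRedisSerializer());
        template.setValueSerializer(serializer);
        template.setHashKeySerializer(new StringRedisSerializer());
        template.setHashValueSerializer(serializer);
        return template;
    }
}
```

This keeps the two serialization concerns isolated: Redis values carry @class so they can be deserialized back into the right type, while REST payloads stay plain JSON.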
Fixed Redis Login API Error with Spring Boot
More Relevant Posts
🚀 Just leveled up my Spring Boot skills with Redis caching! After diving deep into Redis integration with Spring Boot, I've successfully implemented a robust caching layer for my job portal application. Here's what I learned:

🔧 Tech Stack:
- Spring Boot 3.x
- Redis (with Docker)
- Lettuce connection pool
- Jackson for JSON serialization

💡 Key Implementations:

1️⃣ Smart Caching Strategy
@Cacheable(value = "companies", key = "'all_companies_admin'")
@CacheEvict(value = "companies", allEntries = true)
- Reduced database load by 70%+ for read-heavy operations
- Automatic cache invalidation on data mutations

2️⃣ Dockerized Redis Setup:
redis:
  image: redis:latest
  command: ["redis-server", "--appendonly", "yes"]
- Persistent storage with AOF (Append Only File)
- Connection pooling for optimal performance

3️⃣ Custom Serialization
- GenericJackson2JsonRedisSerializer with JavaTimeModule
- Polymorphic type handling for complex DTOs
- TTL configurations: 30 min default, 1 hour for critical data

📊 Results:
✅ Faster API responses (5-10x improvement)
✅ Reduced database queries
✅ Scalable architecture ready for production

🐳 Pro tip: Always configure connection pooling! The difference in performance is noticeable under load.

🔗 https://lnkd.in/dXCg553r

Next up: Implementing Redis Pub/Sub for real-time notifications! 🔄

#SpringBoot #Redis #Java #Caching #Microservices #BackendDevelopment #SoftwareEngineering
In my previous Redis eviction post, I explained how Redis can use LRU when memory becomes full. But one deeper question came up: what data structure is actually behind LRU?

In a normal Java cache, LRU is usually implemented using a HashMap + doubly linked list, or simply a LinkedHashMap.

But Redis works differently. Redis does not use a Java LinkedHashMap. Redis uses its own internal dictionary/hash table, usage metadata, and approximate LRU sampling to decide what to evict.

That difference helped me understand Redis caching much more clearly. Java LRU is usually exact. Redis LRU is approximate and optimized for performance.

#Redis #LRU #Caching #Java #SpringBoot #SystemDesign #BackendDevelopment #Microservices #DistributedSystems
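The exact Java LRU mentioned above can be sketched with LinkedHashMap's access-order mode (a simplified illustration of the Java side, not of how Redis works internally):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Exact LRU: in access-order mode, LinkedHashMap moves every accessed entry
// to the tail of its internal doubly linked list; removeEldestEntry then
// evicts the head (the least recently used entry) once capacity is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true enables LRU ordering
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```

With capacity 2: put "a", put "b", get "a", put "c" evicts "b", because the get moved "a" to the most-recently-used position. Redis skips this exact bookkeeping and instead samples a handful of keys, evicting the one with the oldest access time, which is why its LRU is approximate.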
Keeping cache consistent with the database is one of the most practical challenges when building scalable systems with Java and Spring Boot. When designing high-performance applications using Spring Boot (with tools like Spring Cache, Redis, or Caffeine), choosing the right caching strategy directly impacts data consistency, latency, and reliability. Here are the most common approaches:

1) Cache Aside (Lazy Loading)
The application first checks the cache. If data is missing, it fetches from the database and updates the cache. On updates, the cache is invalidated.
➡️ In Spring Boot: commonly implemented using @Cacheable and @CacheEvict
➡️ Why it works: simple, flexible, and widely adopted in real-world systems

2) Write Through
Data is written to both the cache and database at the same time.
➡️ Ensures strong consistency between cache and DB
➡️ Trade-off: increased write latency due to dual writes

3) Write Behind (Write Back)
Data is written to the cache first and persisted to the database asynchronously.
➡️ Great for high-throughput systems
➡️ Risk: potential data loss if cache crashes before DB sync

4) TTL (Time-To-Live)
Each cache entry expires automatically after a defined duration.
➡️ Easy to implement using Redis TTL configuration
➡️ Trade-off: stale data may be served before expiration

Key takeaway: There is no one-size-fits-all strategy. In Spring Boot systems, the choice depends on your consistency requirements, traffic patterns, and failure tolerance. Often, a hybrid approach (Cache Aside + TTL) provides a good balance between performance and data freshness.

#SystemDesign #Java #SpringBoot #Caching #Redis #BackendDevelopment #Scalability #SoftwareEngineering #Microservices #PerformanceOptimization
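The cache-aside flow in (1) can be sketched without Spring annotations; here an in-memory map stands in for Redis and another for the database (class and method names are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache-aside: reads check the cache first and load from the "database" on a
// miss; writes go to the database and then evict the cached entry, so the
// next read repopulates it (the roles of @Cacheable and @CacheEvict).
public class CacheAside<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Map<K, V> database = new ConcurrentHashMap<>();

    public void write(K key, V value) {
        database.put(key, value); // persist first...
        cache.remove(key);        // ...then invalidate, like @CacheEvict
    }

    public V read(K key) {
        // Like @Cacheable: a hit returns immediately; a miss loads from the
        // database and stores the result in the cache.
        return cache.computeIfAbsent(key, database::get);
    }

    public boolean isCached(K key) {
        return cache.containsKey(key);
    }
}
```

The invalidate-on-write step is what keeps the cache consistent: updating the cached value in place instead would risk racing with a concurrent read that repopulates stale data.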
🚨 I thought Redis = just caching… I was wrong.

While building my Spring Boot project, I used Redis assuming:
👉 "It's just a second-level cache to avoid hitting the DB"

But then I implemented a virality scoring system… and everything changed.

❌ My initial thinking:
- Redis = store DB data temporarily
- Use it to reduce queries to PostgreSQL

⚡ What I actually built: a system where:
• Bot reply → +1 score
• Human like → +20 score
• Human comment → +50 score
👉 These values update in real-time using Redis.

🤯 Realization: this data is NOT coming from the DB. It is created, updated, and managed entirely inside Redis.

✅ What Redis actually became in my project:
• Real-time counter system (increment)
• Cooldown manager (TTL expiry)
• Fast in-memory engine for dynamic scoring

💡 Key Insight: Redis is not just caching. 👉 It's a real-time data engine for:
- counters
- rate limiting
- ranking systems
- temporary logic

🛠️ Want to see the actual implementation? GitHub Repo: https://lnkd.in/gWDsRXqD

🧠 Lesson: If you only use Redis as a cache, you're using maybe 30% of its power.

Next: Upgrading this using Sorted Sets (ZSET) to build a real "trending posts" system 🚀

#Java #SpringBoot #Redis #BackendDevelopment #SystemDesign
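The scoring rules above map naturally onto Redis INCRBY. A plain-Java sketch of the same logic, with a ConcurrentHashMap standing in for Redis (the class name is illustrative; the weights are the ones from the post):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Real-time virality scores: each event atomically increments a per-post
// counter, which is exactly what Redis INCRBY does on a "score:{postId}" key.
public class ViralityScorer {
    private final Map<String, Long> scores = new ConcurrentHashMap<>();

    public long botReply(String postId)     { return add(postId, 1);  }
    public long humanLike(String postId)    { return add(postId, 20); }
    public long humanComment(String postId) { return add(postId, 50); }

    private long add(String postId, long delta) {
        // In Redis this would be a single atomic INCRBY; merge() gives the
        // same atomic read-modify-write semantics on the in-memory map.
        return scores.merge(postId, delta, Long::sum);
    }

    public long score(String postId) {
        return scores.getOrDefault(postId, 0L);
    }
}
```

The Redis version adds what this sketch cannot: the counters survive across service instances, and EXPIRE on the same keys implements the cooldown behaviour mentioned above.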
Spent 2 days debugging slow API response times.

Turned out we were hitting the database for the same data on every single request. User profile. Permissions. Config settings. All fetched fresh every time.

The fix was embarrassingly simple: Redis cache with a 5 minute TTL.

Before: 850ms average response time
After: 180ms average response time

78% faster. No code refactor. No architecture change. Just stopped asking the database questions it already answered.

Sometimes the bottleneck is not your code. It is how many times you ask the same question.

What is the simplest fix that gave you the biggest performance win?

#Java #Redis #Performance #Backend #SpringBoot
Built a Distributed Rate Limiter as a Service over the weekend. Not because it was assigned. Because I wanted to actually understand the tools I've been reading about (Redis, Kafka, distributed systems patterns), not just know their names.

Here's what it does:
→ Exposes a single endpoint any upstream service can call before processing a request
→ Supports 3 rate limiting algorithms: Fixed Window, Sliding Window, and Token Bucket
→ Redis handles every allow/deny decision on the hot path (sub-millisecond)
→ Kafka streams every request event asynchronously to PostgreSQL for analytics
→ Fully containerised with Docker Compose: one command to run everything

The engineering decisions I'm most proud of:

Token bucket via Lua script: the check-refill-decrement sequence needs to be atomic. Two concurrent requests could both read tokens=1, both pass, and both decrement, resulting in -1 tokens. Redis executes Lua scripts atomically (single-threaded), so no locks, no race conditions.

Kafka decoupling: analytics events are published to Kafka and consumed asynchronously. The HTTP response never waits for a DB write. If Postgres is slow or temporarily down, rate limiting keeps working perfectly.

Strategy pattern: each algorithm implements one interface. Adding a fourth algorithm means one new class and one enum value. Nothing else changes.

GitHub: https://lnkd.in/gdWrtQ5w

#java #kafka #redis #backend #springboot #microservice
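The check-refill-decrement sequence described above, as a single-process Java sketch (the class name and refill policy are illustrative; Redis achieves the same atomicity by running all three steps inside one Lua script):

```java
// Token bucket: refill tokens based on elapsed time, then try to take one.
// The synchronized block plays the role of Redis's atomic Lua execution;
// without it, two threads could both observe tokens == 1 and both pass.
public class TokenBucket {
    private final long capacity;
    private final double refillPerNano;
    private double tokens;
    private long lastRefill;

    public TokenBucket(long capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerNano = refillPerSecond / 1_000_000_000.0;
        this.tokens = capacity;          // start full
        this.lastRefill = System.nanoTime();
    }

    public synchronized boolean tryAcquire() {
        long now = System.nanoTime();
        // 1. refill proportionally to elapsed time, capped at capacity
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
        // 2. check and 3. decrement, atomically with the refill
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }
}
```

In the distributed version the state (token count, last refill timestamp) lives in Redis keys rather than fields, and the Lua script performs the same three steps server-side in one round trip.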
I've just spent the holiday building 4 backend systems, each targeting a different domain:

1. ecom-poc (Go): Kafka outbox, Redis, gRPC, Elasticsearch. https://lnkd.in/gJUPY2hG
2. uber-poc (Java): reactive WebFlux, Redis GEO, real-time location matching. https://lnkd.in/gVGxztEe
3. chat-poc (Node.js): SocketIO, Redis Pub/Sub, presence, typing indicator. https://lnkd.in/g95GrXqW
4. dropbox-poc (Go): file chunking, SHA-256 dedup, delta sync, WebSocket conflict detection. https://lnkd.in/gq32ZH-G

#backend #golang #java #typescript

P.S. If you find it useful, you can star the repos.
Spent the last few weeks going really deep into PostgreSQL internals. Learned how MVCC actually handles transactions under the hood and how WAL ensures data never gets lost even when things crash. The query planner and vacuum process completely changed how I think about writing queries.

Then went through Node.js internals: how libuv actually drives the event loop and what really happens when you write async code. The difference between microtask and macrotask queues finally clicked for me at a deeper level.

Now starting Redis internals. Excited to understand how it handles memory encoding and why the persistence mechanisms are designed the way they are.

Honestly, going deep into how these tools actually work has made me a better engineer than any tutorial ever could.

If you have good resources on Redis internals, drop them below 👇

#PostgreSQL #NodeJS #Redis #BackendDevelopment #LearningInPublic
🚀 Getting Started with Redis – Fast, Simple, Powerful!

Redis is an open-source, in-memory data store used as a database, cache, and message broker. It's widely used in modern applications for its lightning-fast performance ⚡

🔹 Why Redis?
- In-memory storage → super fast data access
- Supports multiple data structures (Strings, Lists, Sets, Hashes)
- Ideal for caching, session management, and real-time analytics

🔹 Common Use Cases:
✔️ Caching frequently accessed data
✔️ Storing user sessions
✔️ Real-time leaderboards & analytics
✔️ Message queues & pub/sub systems

🔹 Basic Redis Commands:
SET key value → Store data
GET key → Retrieve data
DEL key → Delete data

💡 If you're working with Java & Spring Boot, Redis integrates easily using Spring Data Redis for caching and performance optimization.

📈 Learning Redis is a great step toward building scalable and high-performance backend systems!

#Redis #BackendDevelopment #Java #SpringBoot #Caching #SoftwareDevelopment #LearningJourney