⚡ Caching Service Registry 📈 Smart Routing Concept (Node.js + React Example ⏩)

🧠 Calling the service registry on every request comes with a set of problems:
1. Slower APIs
2. Registry overload
3. Unnecessary network hops

The solution:
1. Cache service instances locally in your API Gateway (Node.js)
2. React just calls the gateway (no direct registry calls)

Flow: refer to the image for an example.

🔥 Advanced Patterns

📡 1. Watch-Based Updates
Consul supports real-time updates, so no polling is needed.

⚖️ 2. Smart Load Balancing
Use:
- Round robin
- Weighted routing
- Health-aware routing

💥 3. Fail-Safe Mode
JavaScript code:

try {
  return await cache.getService("product-service");
} catch (err) {
  console.log("Registry down, using stale cache...");
  return cache.cache["product-service"] || [];
}

Takeaway: modern systems (like a service mesh with Envoy Proxy) implement:
✔ Cached service discovery
✔ Continuous syncing
✔ Health checks combined with caching

The registry should be queried occasionally, not on every request.

#Node #React #JavaScript #Microservices #Software #Caching #Speed #ScalableSystems #Engineering #Learning #Technical #Careers
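The fail-safe snippet in the post can be folded into a small gateway-side cache. A minimal sketch, assuming a hypothetical ServiceCache class where fetchFromRegistry stands in for a real Consul/Eureka client call:

```javascript
// Sketch of a local service-registry cache for an API gateway.
// Assumption: fetchFromRegistry is a stand-in for a real registry client call.
class ServiceCache {
  constructor(fetchFromRegistry, ttlMs = 30_000) {
    this.fetchFromRegistry = fetchFromRegistry;
    this.ttlMs = ttlMs;
    this.cache = {}; // serviceName -> { instances, fetchedAt }
  }

  async getService(name) {
    const entry = this.cache[name];
    const fresh = entry && Date.now() - entry.fetchedAt < this.ttlMs;
    if (fresh) return entry.instances; // no network hop while the TTL holds

    try {
      const instances = await this.fetchFromRegistry(name);
      this.cache[name] = { instances, fetchedAt: Date.now() };
      return instances;
    } catch (err) {
      // Fail-safe mode: registry is down, serve stale data if we have any
      console.log("Registry down, using stale cache...");
      return entry ? entry.instances : [];
    }
  }
}
```

With a TTL of 30 seconds, the registry sees at most a couple of requests per minute per service, regardless of API traffic.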
Caching Service Registry for Faster Microservices with Node.js and React
Everyone explains what happens when you type a URL. DNS. TCP handshake. HTTP request. We've all seen that.

Nobody talks about what happens AFTER it hits your server.

Here's the full internal journey of a single register request inside a production Node.js backend 👆 7 layers. Each with exactly one job.

The part most tutorials skip:
- Your controller should never touch the database.
- Your service should never know HTTP exists.
- Your repository should be the ONLY file that knows which ORM or database you're using.

Break any of these rules and you'll feel it the moment you try to write a unit test or swap a dependency.

This pattern is called Layered Architecture.

#NodeJS #ExpressJS #BackendDevelopment #TypeScript #SystemDesign #JavaScript #WebDevelopment #SoftwareEngineering #LearningInPublic #CleanCode
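The three rules above can be sketched in a few lines. An illustration only: the names (userRepository, userService, registerController) are hypothetical, and an in-memory Map stands in for the real database/ORM:

```javascript
// Layered Architecture sketch: controller -> service -> repository.
// Assumption: a Map stands in for the real database behind an ORM.
const db = new Map();

// Repository: the ONLY layer that knows how data is stored
const userRepository = {
  async findByEmail(email) { return db.get(email) ?? null; },
  async save(user) { db.set(user.email, user); return user; },
};

// Service: pure business rules — no HTTP, no SQL
const userService = {
  async register({ email, password }) {
    if (await userRepository.findByEmail(email)) {
      throw new Error("EMAIL_TAKEN");
    }
    return userRepository.save({ email, password }); // hash the password in real code
  },
};

// Controller: translates HTTP <-> service calls, never touches storage
async function registerController(req, res) {
  try {
    const user = await userService.register(req.body);
    res.status(201).json({ email: user.email });
  } catch (err) {
    res.status(err.message === "EMAIL_TAKEN" ? 409 : 500).json({ error: err.message });
  }
}
```

Because the service never sees `req`/`res` and the repository hides the Map, swapping Express for Fastify or the Map for Postgres each touches exactly one layer.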
I removed Express from my Node.js project. Then removed the http module too. Built everything from raw TCP and finally understood what was actually happening.

Three things that clicked:

→ request.body is a stream, not a property
Node reads your request in chunks. That's why even very large uploads don't crash your server.

→ GET, POST, and PUT are not interchangeable
Send the same POST twice, two records get created. Send PUT twice, nothing changes. That difference has a name: idempotency.

→ Postman is just a GUI
Every button maps to three things: method, headers, and body.

Wrote a 4-part breakdown.

#NodeJS #JavaScript #BackendDevelopment #medium

Read the full article here: https://lnkd.in/dWQRp7Ta
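The first point — the body arriving in chunks — can be sketched as a tiny parser over raw TCP chunks. An illustration only: parseHttpRequest is a hypothetical helper, not a production parser (no Content-Length handling, no validation):

```javascript
// What a framework does under the hood: the request arrives as raw byte
// chunks, and "the body" only exists once you've concatenated them.
// Assumption: the full request fits in the chunks passed in.
function parseHttpRequest(chunks) {
  const raw = Buffer.concat(chunks).toString("utf8");
  const headerEnd = raw.indexOf("\r\n\r\n"); // blank line separates headers from body
  const [requestLine, ...headerLines] = raw.slice(0, headerEnd).split("\r\n");
  const [method, path] = requestLine.split(" ");
  const headers = Object.fromEntries(
    headerLines.map((line) => {
      const i = line.indexOf(":");
      return [line.slice(0, i).trim().toLowerCase(), line.slice(i + 1).trim()];
    })
  );
  return { method, path, headers, body: raw.slice(headerEnd + 4) };
}
```

Note how the JSON body can be split mid-token across chunks; nothing is parseable until every chunk has arrived, which is exactly why `request.body` is a stream.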
🚀 Next.js (Advanced): What Actually Matters in Production

Most developers use Next.js for routing. The real value comes from understanding its architecture.

⚡ Advanced Concepts You Should Know:
- Server Components → move logic to the server, reduce the client bundle
- Caching Model → fetch caching, revalidation, request deduping
- Server Actions → eliminate the API layer for mutations
- Streaming UI → send partial HTML using Suspense
- Edge Runtime → ultra-fast middleware & personalization
- Rendering Strategy → SSR vs SSG vs ISR based on data patterns

🧠 Engineering Insight: bad performance in Next.js is usually caused by:
- Overusing Client Components
- Wrong caching strategy
- Unnecessary API layers

🔥 Production Mindset:
- Push maximum logic to the server
- Keep client JS minimal
- Design data flow, not just UI
- Think in terms of latency & caching

💡 If you understand this, you're not "using Next.js"; you're engineering with it.

#NextJS #SoftwareEngineering #WebPerformance #FullStack #JavaScript
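One item from the caching model, request deduping, is easy to illustrate outside the framework. A hand-rolled sketch of the idea (this is not Next.js internals; `dedupe` is a hypothetical helper):

```javascript
// Request deduping: concurrent requests for the same key share one
// in-flight promise instead of each hitting the network.
function dedupe(fetcher) {
  const inFlight = new Map(); // key -> pending Promise
  return (key) => {
    if (!inFlight.has(key)) {
      const p = Promise.resolve(fetcher(key))
        .finally(() => inFlight.delete(key)); // allow a fresh fetch later
      inFlight.set(key, p);
    }
    return inFlight.get(key);
  };
}
```

This is why several Server Components can each call `fetch` for the same URL during one render without multiplying backend load.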
👀 TanStack just shipped React Server Components. And it works completely differently from what you're used to 👇

In most RSC frameworks, the server owns your component tree. You write components, they run on the server by default, and you opt into interactivity with 'use client'. The server decides the final shape of everything.

TanStack Start flips that. RSCs are just streams of data that you fetch, cache, and render on your own terms from the client side.

🤔 Why does that matter?

In the current model, every time you need new UI in response to user actions, you're going back to the server to rebuild and reconcile. Your app's lifecycle is constrained by the framework's conventions.

With TanStack Start, you create an RSC with renderToReadableStream on the server, call it from a server function, and decode it on the client with createFromReadableStream. Three primitives, that's the whole API surface.

Because they're just streams, you can cache them anywhere: in TanStack Query with explicit cache keys and staleTime, in the Router cache through loaders, behind a CDN, in memory, wherever your architecture already handles bytes.

📊 They tested it on the TanStack website. Blog pages dropped about 153 KB gzipped from client JS. Docs pages saw similar reductions. Total Blocking Time on one blog page went from 1,200 ms to 260 ms. But some landing pages were basically flat, and a few got slightly worse. Pages dominated by interactive UI don't magically get faster just because you thread a server component into the tree.

🧩 The really interesting part: Composite Components

Most RSC systems let the server decide where client components render. Composite Components do the opposite: the server leaves "slots" (children, render props) where client UI can go, without needing to know what fills them. The server renders the static parts and says, "something interactive goes here." The client fills those slots with regular components.

⚠️ It's still experimental, and they intentionally don't support 'use server' actions because of recent security CVEs in other RSC stacks. All client-server communication goes through explicit createServerFn RPCs.

I think treating RSCs as a cacheable data primitive instead of a framework paradigm is the right direction.

Link to the announcement in the comments 👇

#react #tanstack #rsc #webdev #javascript
If your applications run on Express.js (and millions do), this blog is worth your time.

This breakdown covers:
- Supported versions and timelines
- What EOL actually means for your risk posture
- Migration paths and blockers
- Options for long-term support

https://lnkd.in/dd2hJ8NE

#Nodejs #Express #JavaScript #EOL
The Queue That Saved Our PDF Pipeline

We had a feature that generated detailed reports on demand. A React button, a NestJS endpoint, Puppeteer spinning up a headless browser. Simple enough. Until it was not.

At some point, a user triggered 40 reports at once. The server ran out of memory. The request timed out. The user got nothing. The logs were a disaster.

The fix was not more RAM. It was BullMQ.

The principle is straightforward: do not do expensive work inside a request-response cycle. Accept the request, enqueue the job, return a job ID immediately. The client polls or listens for status. The worker processes jobs one at a time, or with controlled concurrency.

Here is what that shift looked like in practice for the PDF pipeline:
- The NestJS controller goes from calling a service directly to calling queue.add() with a payload.
- The response changes from a file stream to a job ID and a status URL.
- A separate worker class, decorated with @Processor, handles the actual Puppeteer work.
- BullMQ manages retries automatically when Puppeteer crashes.
- A Bull Board dashboard gives full visibility into pending, active, and failed jobs.

The result was not just stability. It was observability. Suddenly we could see exactly which reports were stuck, retry them individually, and set priority on urgent jobs without touching code.

If your application does anything slow (a third-party call, file generation, email sending, data processing), that work belongs in a queue, not in a controller.

The request-response cycle is for acknowledgment. The queue is for work.

#NestJS #NodeJS #BullMQ #SoftwareArchitecture #BackendDevelopment #WebDevelopment #QueueProcessing #Puppeteer #OpenSource
🚀 Switched to React Query for API handling

Earlier, I was managing API calls with useEffect, handling loading, errors, and refetching manually. Started using TanStack Query (React Query) and it simplified everything.

Key learnings:
• Built-in caching and automatic refetching
• Cleaner handling of loading & error states
• Managing server state instead of manual fetching

Small change, but it improved both code quality and performance significantly.

#React #TanStackQuery #Frontend #JavaScript #LearningInPublic
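The caching-and-refetching behavior described above can be sketched without React. A hypothetical createQueryClient, loosely modeled on TanStack Query's queryKey/queryFn/staleTime options (heavily simplified, not the real API):

```javascript
// Server-state cache sketch: fresh data is served from cache, stale or
// missing data triggers a refetch. Assumption: a toy model of what
// TanStack Query manages for you (minus retries, invalidation, etc.).
function createQueryClient() {
  const cache = new Map(); // serialized queryKey -> { data, updatedAt }
  return {
    async fetchQuery({ queryKey, queryFn, staleTime = 0 }) {
      const key = JSON.stringify(queryKey);
      const hit = cache.get(key);
      if (hit && Date.now() - hit.updatedAt < staleTime) {
        return hit.data; // fresh: no network call, no loading state
      }
      const data = await queryFn(); // stale or missing: refetch
      cache.set(key, { data, updatedAt: Date.now() });
      return data;
    },
  };
}
```

The point of the library is that this bookkeeping (plus dedupe, retries, and background refetching) no longer lives in your useEffect hooks.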
🚨 "async/await" makes asynchronous code look simple… But it's also one of the easiest ways to introduce subtle bugs. Over the years, I've seen (and made) these mistakes more times than I'd like to admit. Here are some of the most common "async/await" mistakes that can cause real production issues 👇

💡 1. Forgetting to use "await"

const data = fetch('/api/users'); // Promise, not actual data
console.log(data);

✅ Correct:

const data = await fetch('/api/users');

💡 2. Using "await" inside loops unnecessarily

for (const id of ids) {
  const user = await fetchUser(id);
}

This runs sequentially and can be painfully slow.

✅ Better:

const users = await Promise.all(ids.map(fetchUser));

💡 3. Missing error handling

const data = await fetchData();

If the request fails, your app may crash.

✅ Always wrap critical async calls:

try {
  const data = await fetchData();
} catch (error) {
  console.error(error);
}

💡 4. Mixing ".then()" with "await"

const data = await fetch(url).then(res => res.json());

It works, but it's inconsistent and harder to read.

✅ Prefer one style:

const res = await fetch(url);
const data = await res.json();

💡 5. Awaiting independent tasks one by one

const user = await fetchUser();
const posts = await fetchPosts();

These can run in parallel.

✅ Better:

const [user, posts] = await Promise.all([fetchUser(), fetchPosts()]);

💡 6. Not handling rejected promises in "Promise.all()"

If one promise fails, the entire batch fails. 👉 Use "Promise.allSettled()" when partial success is acceptable.

🔥 "async/await" improves readability, but understanding how it behaves is what makes your code truly reliable.

#JavaScript #JS #es6 #react #reactjs #AsyncAwait #WebDevelopment #Frontend #Programming #SoftwareEngineering
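Point 6 is the only one without a snippet; it can be sketched like this (fetchAllUsers and its return shape are hypothetical):

```javascript
// Promise.allSettled: one rejected fetch no longer sinks the whole batch.
// Each result is { status: "fulfilled", value } or { status: "rejected", reason }.
async function fetchAllUsers(ids, fetchUser) {
  const results = await Promise.allSettled(ids.map(fetchUser));
  return {
    users: results.filter((r) => r.status === "fulfilled").map((r) => r.value),
    errors: results.filter((r) => r.status === "rejected").map((r) => r.reason),
  };
}
```

Unlike Promise.all, allSettled never rejects; you decide afterwards whether partial success is acceptable.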
Last weekend, I built something I'm really excited about: Nextpressjs

A zero-dependency Node.js HTTP framework built from scratch with a focus on performance, scalable architecture, and a clean, minimal design.

But I didn't stop at just building it. I benchmarked it against popular frameworks.

Benchmark Results (Autocannon, 100 connections, pipelining 10):
- Nextpress → 121,843 req/s
- Raw Node HTTP → 134,406 req/s
- Hono → 100,077 req/s
- Koa → 83,731 req/s
- Fastify → 81,491 req/s
- Express 5 → 69,843 req/s

That's ~75% faster than Express, and ~90% of raw Node.js performance.

Average latency: ⏱️ 7.7 ms (Nextpress) vs 13.8 ms (Express)

Open-sourced for developers who care about performance and clean architecture.

Nextpress Official: https://lnkd.in/gzVAwy49
GitHub Repo: https://lnkd.in/gkPwfRTt
npm: https://lnkd.in/gc5iyq7y

#opensource #npm #nodejs