I rebuilt CareerQ from scratch. Not a refactor. A complete restart.

The original was a tightly coupled Next.js full-stack app built while I was still learning. It worked, but the backend had no identity of its own. Everything was tangled with the frontend. So I separated them. CareerQ v2 is a backend-only Node.js service. No framework opinions. No frontend pulling the architecture in two directions.

Here is what the new architecture looks like and why I made each decision:

- Express.js + TypeScript strict mode: no any types, no shortcuts. Every input is validated at the boundary before it reaches the service layer.
- MongoDB for LLM response storage: LLM outputs are unpredictable JSON. A flexible document store handles this without fighting schema migrations every time the AI response shape changes.
- Zod for runtime validation: TypeScript catches compile-time errors. Zod catches what TypeScript cannot: malformed requests at runtime. Both layers are active on every endpoint.
- Layered architecture (Routes -> Controllers -> Services -> Models): each layer has one job. Controllers do not touch the database. Services do not know about HTTP. Clean separation keeps the codebase testable and replaceable.
- JWT auth with bcrypt password hashing: stateless authentication, expiring tokens, and passwords never stored in plain text.
- Centralized error handling: one error middleware handles everything, including Zod failures, JWT errors, DB errors, and LLM parsing failures. No scattered try-catch blocks returning inconsistent response shapes.
- Paginated interview history: sortable, with configurable page size, returning total count and page metadata. Clients never have to guess what is on the next page.

The attached image is the actual API response: real AI-generated interview questions coming back from the live service.

What is coming next: SSE streaming so LLM tokens arrive in real time instead of a single blocking response, and Redis-backed rate limiting per user on LLM endpoints.
Live API: https://lnkd.in/gNcx32et
GitHub: https://lnkd.in/gStKFJ3C

Building in public. Backend only from here.

#NodeJS #TypeScript #BackendDevelopment #LLM #SystemDesign #AI #GenerativeAI #Learning #Growth
CareerQ v2: Node.js Backend Service with TypeScript and MongoDB
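As a hedged illustration of the paginated-history contract described in the post, the page-metadata logic might look like the sketch below. The field names (data, meta, hasNext) are illustrative assumptions, not CareerQ's actual response shape.

```javascript
// Sketch of a pagination helper that returns total count and page
// metadata, so clients never guess what the next page holds.
// All names here are illustrative, not taken from the real service.
function paginate(items, { page = 1, pageSize = 10 } = {}) {
  const total = items.length;
  const totalPages = Math.max(1, Math.ceil(total / pageSize));
  const current = Math.min(Math.max(1, page), totalPages); // clamp bad input
  const start = (current - 1) * pageSize;
  return {
    data: items.slice(start, start + pageSize),
    meta: { total, page: current, pageSize, totalPages, hasNext: current < totalPages },
  };
}
```

A real service would push the skip/limit into the MongoDB query rather than slicing in memory; the shape of the response is the point here.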
🚀 I just built an AI-powered Document Processing System

Over the last few days, I worked on a project that combines backend architecture, async processing, and AI integration into a single system.

👉 Live demo: https://lnkd.in/dKzZGt3a
👉 Repo: https://lnkd.in/dbf9XpRd

What it does:
- Upload documents (PDF, DOCX, TXT, images, etc.)
- Process them asynchronously using queues
- Extract text and compute statistics
- Generate AI summaries per document
- Track progress in real time via WebSockets
- Control execution (start / pause / resume / stop)

Tech stack: NestJS, PostgreSQL, Redis + BullMQ, React, Cloudflare R2, Socket.io, Docker, and Groq API for AI.

Some interesting takeaways:
• AI provider abstraction really matters. I used a Strategy pattern for the AI layer, so I could switch from a local model (Ollama) to Groq instantly: no code changes, just an env variable.
• Async systems get complex fast. Handling pause/resume in batch processing is not trivial. I chose consistency over immediacy: state changes apply between batches, not mid-execution.
• Rate limits are part of the system, not an edge case. Retries with backoff and dynamic wait times were key to keeping the pipeline stable.
• Separation of concerns pays off. Each module has a clear responsibility (process, document, analysis, queue, AI, realtime), which keeps the system maintainable and scalable.

This project was a great exercise in building production-style backend systems, not just APIs: thinking about scalability, resilience, and real-world constraints. If you're working on similar problems or exploring AI + backend architectures, I would love to connect 🤝

#Backend #NodeJS #NestJS #AI #SoftwareEngineering #Docker #PostgreSQL #Redis #SystemDesign
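The provider-abstraction takeaway above can be sketched in a few lines. The class names and the AI_PROVIDER variable are illustrative assumptions, not the project's actual code:

```javascript
// Strategy pattern for the AI layer: callers depend on one interface,
// and the concrete provider is chosen from an env variable, so
// swapping Ollama for Groq needs no code changes.
// Provider classes and method bodies are illustrative stand-ins.
class OllamaProvider { summarize(text) { return `[ollama] ${text.slice(0, 40)}`; } }
class GroqProvider   { summarize(text) { return `[groq] ${text.slice(0, 40)}`; } }

function makeProvider(name = process.env.AI_PROVIDER || 'ollama') {
  const registry = { ollama: OllamaProvider, groq: GroqProvider };
  const Provider = registry[name];
  if (!Provider) throw new Error(`unknown AI provider: ${name}`);
  return new Provider();
}
```

The registry is the key move: adding a third provider is one line, and nothing downstream changes.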
Just shipped a Node.js AI backend from scratch!

Built a production-style LLM server with:
- Custom system prompt + personality design
- Free LLM API integration (no keys hardcoded, .env based)
- Conversation memory (context-aware replies, long-term-ready)
- Clean REST endpoints, tested via Postman

This project forced me to think like a backend engineer and a prompt engineer at the same time: not just "call the model", but design how it thinks, remembers, and responds.

The repo is live on GitHub. Open to feedback, suggestions, and collaboration on smarter AI agents 🤝

#NodeJS #BackendDevelopment #LLM #GenerativeAI #APIDevelopment #JavaScript #OpenSource #StudentDeveloper #AIProjects
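A minimal sketch of the conversation-memory idea mentioned above, assuming a rolling window of turns prepended to each request (the class and field names are illustrative, not the repo's actual code):

```javascript
// Rolling conversation memory: keep the last N user/assistant turns
// and prepend the system prompt, so each LLM request carries context.
// Names and structure are illustrative assumptions.
class ConversationMemory {
  constructor(maxTurns = 10) {
    this.maxTurns = maxTurns;
    this.turns = [];
  }
  add(role, content) {
    this.turns.push({ role, content });
    // One "turn" is a user message plus a reply, so cap at maxTurns * 2.
    if (this.turns.length > this.maxTurns * 2) this.turns.shift();
  }
  context(systemPrompt) {
    return [{ role: 'system', content: systemPrompt }, ...this.turns];
  }
}
```

For "long-term-ready" memory, the same interface could be backed by a database instead of the in-process array.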
🚀 Just built a "Dual-Engine" AI application using Spring Boot & React!

Recently, I've been diving deep into the Spring AI framework. I wanted to build an architecture that could handle the best of both worlds: Cloud AI for heavy lifting and Local AI for offline/private tasks. Here is what I put together:

1️⃣ The Backend (Java/Spring Boot): Integrated both Google Gemini 2.5 Flash and a local Ollama model (Gemma 2) running side by side in the same application.
2️⃣ The Frontend (React): Built an interactive dashboard to send a single prompt to multiple LLMs simultaneously and "race" their responses in real time.

💡 My biggest technical takeaway: solving the "Two Brains" problem. When you import multiple AI starters into a Spring Boot pom.xml, Spring's AutoConfiguration can get confused about which ChatModel bean to inject into your controllers. The solution? Spring's @Qualifier annotation. By explicitly naming the beans (@Qualifier("ollamaChatModel") vs @Qualifier("googleGenAiChatModel")), I was able to safely route requests to completely different AI ecosystems from within the same API.

It was a great exercise in managing Maven dependencies (and fighting the occasional Maven cache bug 😅) while building a truly flexible Generative AI wrapper.

What is your preferred local LLM to run right now? Let me know below! 👇

#SpringBoot #Java #ReactJS #GenerativeAI #Ollama #GoogleGemini #SoftwareEngineering #WebDevelopment
Most developers still debug like it's 2015. Manually. Line by line. Hoping nothing slips through. Meanwhile, production finds what you missed. That's broken.

I've been building ClarityCode: an AI system that scans your GitHub repo and tells you what actually matters before it ships. Not just linting. Not just surface-level checks.

→ Bugs
→ Security risks
→ Code smells
→ Architecture gaps
→ Dependency issues

All analyzed in minutes using multiple AI models, not just one.

The shift is simple:
From: "I think this code is fine"
To: "I have data to prove it's production-ready"

Built with: Next.js 14 • React • Tailwind • Supabase • multi-model AI (Groq, OpenAI, Gemini fallback)

Still early. Still improving scalability + background scans. But it's already helping:
- developers review faster
- teams ship cleaner code
- recruiters evaluate repos with real signals

If you work with code, I want brutal feedback 👇
Try it: https://lnkd.in/dYdWhKmS
GitHub: https://lnkd.in/dByrzEJj
Demo: https://lnkd.in/dwwJUrMP

Thanks for the tools: GitHub, Vercel, Supabase, PayPal

#buildinpublic #opensource #ai #github #codequality #developers #saas #Innovation #Technology #JavaScript #React #NodeJS #NextJS #Supabase #TailwindCSS #MachineLearning #ArtificialIntelligence #SoftwareEngineering #CodingJourney #100DaysOfCode #TechCommunity #IndieHackers
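The multi-model fallback mentioned above (Groq, then OpenAI, then Gemini) could be sketched like this; the provider objects and the analyze signature are assumptions for illustration, not ClarityCode's actual interface:

```javascript
// Fallback chain over multiple AI providers: try each analyzer in
// order and return the first success, collecting errors as we go.
// Provider shape ({ name, analyze }) is an illustrative assumption.
async function analyzeWithFallback(code, providers) {
  const errors = [];
  for (const provider of providers) {
    try {
      return await provider.analyze(code);
    } catch (err) {
      errors.push(`${provider.name}: ${err.message}`);
    }
  }
  throw new Error(`all providers failed: ${errors.join('; ')}`);
}
```

In practice each provider call would also carry a timeout, so one slow model cannot stall the whole scan.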
Syntax is a commodity; architecture is the differentiator. In 2026, the most successful developers aren't just writing lines of code, they are orchestrating entire digital ecosystems where intelligence meets scalability.

To build truly future-proof applications, I focus on the intersection of four critical pillars: crafting high-performance interfaces with React.js, embedding AI & Python for predictive logic, securing the "plumbing" via cloud & network architecture, and ensuring long-term maintainability through advanced software logic. This "Developer's Blueprint" ensures that every feature shipped isn't just functional but carries real-world impact. The goal is no longer just to make it work, but to make it scale without limits.

I'm curious to hear from my network: when you start a new build, do you prioritize the user experience (front-end) or system integrity (back-end/architecture) first? Let's discuss here.

#FullStackDeveloper #AIArchitecture #CloudComputing #SoftwareEngineering #ReactJS #Python #TechInnovation #FutureOfTech #LinkedInGrowth
JavaScript is single-threaded. But your async code doesn't run in the order you think. Most developers get this wrong, including seniors.

What does this print?

console.log('1')
setTimeout(() => console.log('2'), 0)
Promise.resolve().then(() => console.log('3'))
console.log('4')

Take 10 seconds. Write your answer. Then keep reading.

━━━━━━━━━━━━━━━━━━━━━━━

The answer: 1 → 4 → 3 → 2

Most people predict 1 → 4 → 2 → 3. They're wrong. Here's exactly why.

━━━━━━━━━━━━━━━━━━━━━━━

The event loop has two queues, not one.

Microtask queue → Promises, queueMicrotask(), MutationObserver
Macrotask queue → setTimeout, setInterval, I/O, UI events

The rule nobody tells you: after every task, the engine drains the ENTIRE microtask queue before picking the next macrotask. Not one microtask. ALL of them.

━━━━━━━━━━━━━━━━━━━━━━━

The exact execution order:

Step 1: Run the call stack (synchronous code first) → prints '1', queues the setTimeout, queues the Promise, prints '4'
Step 2: The call stack is empty. Check the microtask queue. → Promise.then is there → prints '3' → microtask queue now empty
Step 3: Now pick the next macrotask. → setTimeout callback → prints '2'

setTimeout(fn, 0) does NOT mean "run immediately." It means "run after all microtasks are done."

━━━━━━━━━━━━━━━━━━━━━━━

The practical impact:

This is why React state updates inside Promises resolve before a setTimeout that was queued at the same time. This is why async/await in Node.js doesn't block I/O: I/O callbacks are macrotasks, but .then() chains are microtasks that run between them.

And this is the trap: if you keep creating microtasks inside microtasks, you can starve the macrotask queue permanently. setTimeout never fires. The UI never updates.

━━━━━━━━━━━━━━━━━━━━━━━

Now the harder one:

console.log('start')
setTimeout(() => console.log('timeout'), 0)
Promise.resolve()
  .then(() => {
    console.log('promise 1')
    return Promise.resolve()
  })
  .then(() => console.log('promise 2'))
console.log('end')

What's the output?
Drop your answer below; I'll reply with the explanation.

#JavaScript #FrontendDevelopment #ReactJS #NodeJS #SoftwareEngineering #ImmediateJoiner #OpenToWork #FrontendDeveloper #React #ReactDeveloper
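The starvation trap described in the post above can be demonstrated concretely. This sketch bounds the chain at 100,000 microtasks so it terminates; with an unbounded chain the setTimeout callback would never run:

```javascript
// Microtask starvation demo: each resolved promise queues another
// microtask, and the engine drains the entire microtask queue before
// any macrotask, so the 0ms timer waits behind all 100,000 of them.
let spins = 0;
function spin() {
  if (++spins < 1e5) Promise.resolve().then(spin);
}
setTimeout(() => console.log(`timeout fired after ${spins} microtasks`), 0);
Promise.resolve().then(spin);
// → timeout fired after 100000 microtasks
```

Remove the bound and the timer is starved forever, which is exactly why long recursive promise chains should yield via setTimeout or setImmediate.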
Just built a full-stack AI-powered developer assistant 🚀

What started as "learning Spring AI" turned into something way more real 👇

⚡ Built with:
- Spring Boot + Spring AI
- Ollama (local + cloud models)
- PostgreSQL + pgvector (RAG system)
- React frontend

💡 What it can do:
✔ Explain complex code in seconds
✔ Debug errors like a senior dev
✔ Analyze logs intelligently
✔ Store knowledge using vector embeddings
✔ Retrieve context-aware answers (RAG)

And the best part? It actually gives clean, structured, production-level output, not just generic AI responses.

This project helped me understand:
👉 How real AI systems are built (not just APIs)
👉 RAG architecture (embeddings + retrieval + LLM)
👉 Scaling AI into real applications

This is not just a chatbot. It's a developer productivity engine. Would love your feedback 🙌

#SpringBoot #SpringAI #ArtificialIntelligence #Java #ReactJS #MachineLearning #GenerativeAI #Ollama #OpenSource #SoftwareDevelopment #FullStackDeveloper #BackendDeveloper #RAG #PostgreSQL #pgvector #TechProjects #Developers #Coding #AIProjects
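The retrieval step of RAG mentioned above can be sketched in plain JavaScript for the idea (pgvector does this ranking inside SQL in the real stack; the names here are illustrative):

```javascript
// RAG retrieval sketch: rank stored chunks by cosine similarity to the
// query embedding and pass the top hits to the LLM as context.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] ** 2;
    nb += b[i] ** 2;
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topK(queryVec, store, k = 3) {
  return store
    .map(({ text, vec }) => ({ text, score: cosine(queryVec, vec) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

With pgvector, the same ranking is a single ORDER BY on the vector distance operator, which scales far beyond an in-memory array.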
Why "Good Enough" Elasticsearch Logic Isn't Cutting It Anymore.

The gap between a basic search implementation and a production-grade search engine in Node.js is widening. In a market focused on efficiency, simply hitting an endpoint and getting a 200 OK isn't the win. The real challenge, and where the value lies, is in how we handle data at scale.

If you are working with Node.js and Elasticsearch, here are three things that should be on your radar right now:

1. Efficient hydration: stop pulling entire documents if you only need two fields. Using _source filtering isn't just a "nice to have"; it's a requirement for reducing network overhead and memory bloat in your Node.js microservices.
2. Bulk is king: if you're indexing documents one by one, you're killing your throughput. Mastering the helpers.bulk API in the official client is the difference between a sluggish sync and a high-performance pipeline.
3. Circuit breaking: Node is single-threaded. One heavy, unoptimized aggs query shouldn't hang your event loop. Implementing proper timeouts and error handling on the client side is vital for resilience.

The current market isn't looking for developers who can just "connect" tools. It's looking for architects who understand the cost of a query and the weight of a cluster.

What's one optimization that changed the game for your search performance recently?

#NodeJS #Elasticsearch #BackendEngineering #WebScalability #SoftwareArchitecture #Javascript #Angular
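Point 3 can be sketched generically. This wrapper is an assumption for illustration, not the Elasticsearch client's own API (the official client has its own request-timeout settings), but the pattern applies to any promise-returning query:

```javascript
// Client-side timeout guard: race the query against a timer so one
// heavy aggregation cannot hang the service indefinitely.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(
      () => reject(new Error(`query timed out after ${ms}ms`)),
      ms
    );
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

Callers then handle the timeout error like any other failure: return a degraded response, trip a circuit breaker, or retry with backoff.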
Software engineers who've worked with LangChain and LangGraph, what would you do differently? 👇

In 2025 I completed Codecademy's 'Generative AI and Agents for Developers' live classroom course, and I'm now building what I learned directly into Schedule my chair.

The feature: a barber calls in sick, the manager clicks one button, and every affected customer gets a personalised reschedule suggestion automatically. 👨✂️✨

Here's the architectural thinking:

How do I suggest the right slot? 📅 A Saturday customer isn't free on a Monday afternoon. I classify every appointment as a bank holiday, weekend, or weekday, managed entirely within the SaaS. First-timers get a like-for-like suggestion. Returning customers get pattern analysis.

Does every customer need an LLM? 🤔 No, and that's the key decision. If a returning customer's booking history is clear (e.g. consistently Saturday afternoons), rule-based logic handles it instantly. No LLM needed. Only ambiguous histories, where the pattern is genuinely mixed, get passed to the LLM to reason over.

What does the LLM actually receive? 🤖 Only anonymised booking timestamps: no names, no contact details. It reasons over the pattern and returns a structured slot recommendation. The email is a simple template. Clean, cheap, and GDPR-conscious.

How does the agent orchestrate all of this? 🔗 LangGraph drives each customer's reschedule step by step: pending → slot found → email sent → confirmed / declined. LangChain tools do the work underneath: booking fetcher, pattern analyser, slot finder.

How does the customer confirm? ✅ One click. A unique link in the email confirms the new booking. No login required.

🛠️ Tech Stack: React | TypeScript | Next.js | PostgreSQL | Prisma ORM | LangChain | LangGraph | Upstash Redis | Resend | Jest

Open to new roles. If you're hiring, let's talk. 🤝
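The rule-based-versus-LLM routing decision described above could be sketched like this. All names are illustrative assumptions, and bank-holiday detection is omitted for brevity:

```javascript
// Route each reschedule: clear booking patterns get cheap rule-based
// handling, only genuinely mixed histories escalate to the LLM.
// (Bank-holiday classification from the post is omitted here.)
function classifySlot(date) {
  const day = date.getDay(); // 0 = Sunday, 6 = Saturday
  return day === 0 || day === 6 ? 'weekend' : 'weekday';
}

function suggestStrategy(history) {
  if (history.length === 0) return 'like-for-like'; // first-timer
  const kinds = new Set(history.map(classifySlot));
  // One consistent pattern: rule-based suggestion, no LLM call.
  // Mixed patterns: pass anonymised timestamps to the LLM.
  return kinds.size === 1 ? 'rule-based' : 'llm';
}
```

The cost win is that the LLM path only fires for the ambiguous minority of customers, which also keeps what the model ever sees down to bare timestamps.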
LangChain + Node.js | Exploring AI in Backend

Today I worked on integrating LangChain with Node.js and explored how modern AI-powered backends are actually built.

What I focused on:
- Building LLM pipelines using LangChain
- Writing and optimizing prompt templates
- Dynamically editing prompts for better responses
- Structuring clean backend logic for AI integration

Key insight: good AI output depends on how well you design prompts and workflows, not just API calls. This shift in thinking is what makes AI apps truly powerful.

GitHub repository: https://lnkd.in/d7iGn893

Excited to go deeper into AI + backend development 🚀

#LangChain #NodeJS #PromptEngineering #AIBackend #LLM #OpenAI #JavaScript #BackendDevelopment #MERNStack #AIProjects #GitHubProjects #BuildInPublic #DeveloperJourney #TechGrowth #HiringDevelopers
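The prompt-template idea above can be shown with a minimal dependency-free stand-in (LangChain's own PromptTemplate class provides the real thing, with input validation and composition on top):

```javascript
// Minimal prompt template: substitute {variables} into a template
// string and fail loudly on missing inputs. Illustrative stand-in
// for LangChain's PromptTemplate, not its actual implementation.
function promptTemplate(template) {
  return (vars) =>
    template.replace(/\{(\w+)\}/g, (_, key) => {
      if (!(key in vars)) throw new Error(`missing variable: ${key}`);
      return vars[key];
    });
}

const explain = promptTemplate(
  'You are a {role}. Explain {topic} to a {audience} in under {limit} words.'
);
console.log(explain({ role: 'backend mentor', topic: 'LLM pipelines', audience: 'junior dev', limit: '100' }));
// → You are a backend mentor. Explain LLM pipelines to a junior dev in under 100 words.
```

Treating the prompt as a parameterised template rather than a hardcoded string is what makes "dynamically editing prompts for better responses" a one-line change instead of a refactor.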