My client's exact words: "Arsh why is it taking so long to load users?"

Me internally: It's fine, the server is just... thinking.

It was not fine. The server was not just thinking.

I was a junior dev. Fresh out of college. Simple requirement — "Show all users on the dashboard."

I thought — easy. Let me just fetch the users.

    @app.get("/users")
    async def get_users(db: AsyncSession = Depends(get_db)):
        result = await db.execute(select(User))
        users = result.scalars().all()
        return users

Clean. Simple. Confident. Deployed it. Sent the link to the client.

Client opens the dashboard. Loading... Loading... Loading...

"Why so long?"

What I didn't know: the database had 84,000 users. My API was fetching all 84,000 rows. Every single time. On every single page load. Sending it all to the frontend, which was then trying to render 84,000 table rows in the browser.

The browser didn't crash. I wish it had. At least that would have been obvious. Instead it just loaded. Slowly. Painfully. Forever.

Then a senior dev looked at my code. He didn't say anything for a few seconds. Then — "Where is your pagination?"

Pagination. Don't fetch everything. Fetch only what the user actually sees.

    @app.get("/users")
    async def get_users(
        page: int = 1,
        page_size: int = 20,
        db: AsyncSession = Depends(get_db),
    ):
        offset = (page - 1) * page_size
        result = await db.execute(
            select(User).offset(offset).limit(page_size)
        )
        return result.scalars().all()

Instead of 84,000 rows — fetch 20.

Response time: 14 seconds → 180ms.

The client called me the next day. "Arsh, it's so fast now!"

Same server. Same database. Same code, basically. I just stopped asking for everything at once.

#FastAPI #PostgreSQL #Python #BackendDevelopment #SoftwareEngineering #WebDevelopment #Database #LessonsLearned
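The arithmetic behind the fix is worth seeing in isolation — a tiny sketch of the page-to-offset translation (the helper name is mine, not from the post):

```python
def page_to_offset(page: int, page_size: int = 20) -> tuple[int, int]:
    """Translate a 1-indexed page number into a SQL OFFSET/LIMIT pair."""
    if page < 1:
        raise ValueError("pages are 1-indexed")
    return (page - 1) * page_size, page_size

# Page 1 starts at row 0, page 2 at row 20, and so on.
offset, limit = page_to_offset(3)   # rows 40..59
total_pages = -(-84_000 // 20)      # ceiling division: 4200 pages of 20
```

Whatever ORM or driver sits underneath, the idea is the same: the client asks for a window, and the database only ever materializes that window.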
Arsh Singhal’s Post
Production Bugs Will Humble You Every Time

It sometimes feels like there is an unseen hand making sure bugs never stop showing up, especially in production. You spend hours building and testing. Everything works fine locally. Then you deploy, and suddenly new issues appear.

Earlier this year, I decided to upgrade my portfolio website from a static site to a dynamic one, especially for the blog section. I wanted to be able to publish articles, allow users to sign up, and leave comments. So I rebuilt the site using Next.js, Prisma, and PostgreSQL.

Everything worked perfectly on my local machine. But after deploying on Vercel, I kept running into database connection timeouts. To be honest, I avoided debugging it for a while. Today, I finally decided to face it, and I fixed it. It feels really good.

Now I can publish blog posts, and users can sign up and interact with my content here: https://lnkd.in/eRdz_8pb

This post is not really about the blog itself. It's about the reality that bugs will always come, especially in production. That's part of the journey. Keep going.

https://lnkd.in/erZQR5uU

#SoftwareEngineering #WebDevelopment #NextJS #Prisma #PostgreSQL #Vercel #Debugging #ProductionBugs #BuildInPublic #DevLife #BackendDevelopment #FullStackDeveloper #Programming #TechJourney
Hot take: I've seen engineers spend 2 hours configuring a Docker Compose file just to test a 10-line function.

You don't need a full Postgres instance to test a user service. You don't need a Redis container to test a cache layer. You need fast, isolated, reliable tests.

That's why I open-sourced @backend-master/test-utils.

Package: https://lnkd.in/guWmyCRe
GitHub: https://lnkd.in/gtx9GYPR

Here's what it solves:

❌ Before: Spinning up databases just to test a findOne query
✅ After: MockDatabase with full CRUD in 3 lines of code

❌ Before: Manually building mock HTTP response objects
✅ After: HttpTestBuilder.ok({ userId: 1 }) — done

❌ Before: Hardcoded test data that breaks randomly
✅ After: Fixtures.user(), Fixtures.products(10) — realistic every time

❌ Before: let wasCalled = false; someService.fn = () => { wasCalled = true; }
✅ After: Spy.create() — call count, args, errors, all built-in

Built with TypeScript. Zero runtime dependencies. MIT licensed.

I'd love feedback from backend engineers — what should I build next? Drop a comment 👇

→ PostgreSQL adapter
→ GraphQL test utilities
→ Stream mocking
→ Event emitter mocks

#TypeScript #BackendEngineering #SoftwareTesting #NodeJS #OpenSource #NPM #JavaScript
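I can't speak for the package's internals, but the idea behind a test spy — recording calls so assertions replace ad-hoc wasCalled flags — fits in a few lines. A language-neutral sketch in Python (this is my illustration, not the library's actual API):

```python
class Spy:
    """Minimal test spy: records every call so tests can assert on usage."""

    def __init__(self, return_value=None):
        self.calls = []  # one (args, kwargs) tuple per invocation
        self.return_value = return_value

    def __call__(self, *args, **kwargs):
        self.calls.append((args, kwargs))
        return self.return_value

    @property
    def call_count(self):
        return len(self.calls)

# Swap the spy in for a real dependency, exercise the code, then assert.
send_email = Spy(return_value=True)
send_email("alice@example.com", subject="hi")
assert send_email.call_count == 1
assert send_email.calls[0][0] == ("alice@example.com",)
```

The point of a dedicated utility is that call count, arguments, and error simulation come for free instead of being re-invented per test.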
Day 2 - Yesterday Spring Boot. Today Node.js. Same API, different language — spot the patterns.

🚀 TechFromZero Series - NodejsFromZero

This isn't a Hello World. It's a real layered architecture:

📐 Request → Route → Controller → Service → Model → MySQL

🔗 The full code (with step-by-step commits you can follow): https://lnkd.in/dBXFMDT2

If anyone has an idea, improvement, or recommendation, please fork the repo and submit a pull request. Everyone is welcome to do so.

🧱 What I built (step by step):
1️⃣ Express server with health check
2️⃣ MySQL connection pool with auto-init
3️⃣ Product model with raw SQL queries
4️⃣ DTO with toDto/toEntity mapping
5️⃣ Service layer with business logic
6️⃣ Controller with HTTP request handling
7️⃣ Express Router wiring endpoints
8️⃣ Error handling + seed data

💡 Every file has detailed comments explaining WHY, not just what. Written for any beginner who wants to learn Node.js + Express by reading real code — with full clarity on each step.

👉 If you're a beginner learning Node.js, clone it and read the commits one by one. Each commit = one concept. Each file = one lesson. Built from scratch, so nothing is hidden.

🔥 This is Day 2 of a 50-day series. A new technology every day. Follow along!
🌐 See all days: https://lnkd.in/dhDN6Z3F

#TechFromZero #Day2 #NodeJS #Express #JavaScript #REST #API #LearnByDoing #OpenSource #BeginnerGuide #100DaysOfCode #CodingFromScratch
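The repo in the post is Node/Express, but the layering itself is language-agnostic. A compressed sketch of the same Controller → Service → Model flow, written here in Python with an in-memory dict standing in for MySQL (all names and data are illustrative, not from the repo):

```python
# Model layer: owns data access (a dict stands in for the MySQL pool).
_PRODUCTS = {1: {"id": 1, "name": "Keyboard", "price_cents": 4999}}

def model_find_product(product_id: int):
    return _PRODUCTS.get(product_id)

# Service layer: business logic only, no HTTP concerns.
def service_get_product(product_id: int):
    row = model_find_product(product_id)
    if row is None:
        raise LookupError(f"product {product_id} not found")
    return row

# Controller layer: translates service results into HTTP-shaped responses.
def controller_get_product(product_id: int):
    try:
        return 200, service_get_product(product_id)
    except LookupError as exc:
        return 404, {"error": str(exc)}

status, body = controller_get_product(1)   # status 200, body is the product row
```

The value of the split is testability: the service can be tested without HTTP, and the model can be swapped (mock, SQLite, MySQL) without touching business logic.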
Codebase three of three. The most complex one.

My assignment was simple — investigate why the homepage was loading slowly.

The first API call the frontend made on load was a GraphQL query for notifications. Next.js was waiting for that response before rendering anything. So I followed it.

The code suggested the cache might be oversized. I disabled it to see what was underneath. What I found was a notifications table with 100 million timestamped rows being scanned in full to find notifications from the last 10 minutes. Every single time the homepage loaded.

I enabled the cache again. The cache was broken. Each notification was stored as a separate Redis key, with a second key tracking all the other keys. To read anything, you iterated through the index, then fetched each item individually. No batching. No sensible expiry. Just iteration, at scale, on every homepage load.

Git blame told me this had been written by someone with five years on the same codebase. No seed data for local testing. No documentation. You had to invent scenarios by hand just to reproduce what production was doing every second.

Ten nested resolvers, some calling parent or grandparent resolvers for no reason. Unnecessary list comprehensions adding queries across every query and mutation in the application.

47,000 queries → 32.

I never found out whose decision it was to serve the same data simultaneously as GraphQL, REST, and XML to the same frontend. I was let go before I could ask enough questions. Apparently fixing things nobody asked you to fix is a problem in some organizations.

Three codebases. Three different stacks. One pattern. Code that nobody owns drifts — until someone like me walks in.

#backendengineering #graphql #django #python #softwaredevelopment #redis #caching #systemdesign #softwarearchitecture #engineeringculture
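The broken cache pattern described above — iterate an index key, then fetch each item one by one — versus batching can be sketched with a stand-in store that counts round trips. FakeRedis below is my illustration, not the codebase's client; real Redis exposes MGET for exactly this batched case:

```python
class FakeRedis:
    """Stand-in for a Redis client; counts round trips to expose the N+1 effect."""

    def __init__(self, data):
        self.data = data
        self.round_trips = 0

    def get(self, key):
        self.round_trips += 1
        return self.data.get(key)

    def mget(self, keys):
        # Mirrors Redis MGET: one round trip fetches many keys.
        self.round_trips += 1
        return [self.data.get(k) for k in keys]

store = FakeRedis({f"notif:{i}": f"payload-{i}" for i in range(1000)})
keys = [f"notif:{i}" for i in range(1000)]

# Broken pattern: iterate the index, fetch each item individually.
items = [store.get(k) for k in keys]    # 1000 round trips
# Batched: a single MGET returns the same data.
items_batched = store.mget(keys)        # 1 round trip
assert items == items_batched
```

Same payload either way; the difference is 1000 network round trips versus one, multiplied by every homepage load.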
Tuesday Build Log: When "It Works" Isn't the Same as "It's Correct"

Yesterday, I tried to share a huge milestone in my 20-week backend mastery journey, but the algorithm had other plans 😅. Let's try this again!

Over the weekend, I pushed my backend skills a bit further. I built a REST API that:
→ Takes a name input
→ Calls multiple external APIs (age, gender, nationality prediction)
→ Processes and classifies the data
→ Stores everything in PostgreSQL with proper structure
→ Exposes clean CRUD endpoints

Stack: Node.js + TypeScript + Express + PostgreSQL

It worked. I got the expected result. But the real story wasn't the success; it was everything around it. Here's what stood out:

🔹 Sometimes the bug isn't your code
I spent hours debugging what looked like a backend issue. It turned out to be poor internet conditions slowing everything down.

🔹 "Fixing it" vs "fixing it properly"
I ran into database connection issues during deployment. I found a workaround that worked instantly… but I know it's not the final solution.

🔹 Backend development is more than writing logic
You're dealing with infrastructure, environments, networking, and third-party systems all at once. And that's where things get interesting.

Right now, I'm learning that:
→ A working system doesn't always mean a correct system
→ Debugging is as much about elimination as it is about knowledge
→ Real growth happens when things don't behave the way you expect

Still early in the journey, but things are starting to connect.

If you've worked with backend systems in production: what's one issue that looked like a code bug… but wasn't?

#BackendDevelopment #NodeJS #PostgreSQL #BuildInPublic #DevJourney

PS: I hope this doesn't get shadowbanned or something 😅
🚨 I thought this was a simple array problem… until binary search showed up.

Day 28 of my Backend Developer Journey — and today was about 👉 combining logic + optimization

🧠 LeetCode Breakthrough
Solved a problem using Binary Search + Reverse Thinking 💡

What clicked:
→ Reverse one array to simplify comparison
→ Apply upper_bound (binary search)
→ Maximize distance efficiently

⚡ The real trick:
👉 Don't solve the problem as it is…
👉 Transform it into something easier

🔍 Key Insight
Instead of brute force:
👉 Preprocess the data (reverse the array)
👉 Use binary search to reduce complexity
⚡ From O(n²) → O(n log n)

🔗 My Submission: https://lnkd.in/gF3_5BrW

☕ Spring Boot Learning
🐘 PostgreSQL + DBeaver Setup

Today I stepped into real backend setup 👇
👉 Installed PostgreSQL locally
👉 Connected to the database using DBeaver
👉 Explored tables, queries, and DB structure

⚡ Why this matters
💡 Backend isn't complete without DB understanding
👉 Writing code is one part
👉 Managing real data is the real game

🔥 Big Win Today
✅ Successfully connected Spring Boot to PostgreSQL
✅ Understood how applications talk to databases

🧠 The Shift
👉 Optimization comes from thinking differently
👉 Tools like DB clients make dev life easier
👉 Backend = Code + Database + Efficiency

📈 Day 28 Progress:
✅ Learned binary search application deeply
✅ Set up a real database environment
✅ Took one step closer to production-level backend

💬 When did you first realize backend is more than just writing APIs? 👇

#100DaysOfCode #BackendDevelopment #SpringBoot #Java #PostgreSQL #LeetCode #CodingJourney
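The post doesn't name the problem, but the reverse-then-binary-search trick it describes matches the classic "maximum distance between a pair of values" pattern: given two non-increasing arrays, find the largest j - i with i ≤ j and nums1[i] ≤ nums2[j]. A Python sketch under that assumption, with bisect playing the role of C++'s bound functions:

```python
import bisect

def max_distance(nums1, nums2):
    """Largest j - i with i <= j and nums1[i] <= nums2[j]; both arrays non-increasing.

    Reversing nums2 yields a non-decreasing copy, so a binary search can find
    the rightmost valid j in O(log n) instead of scanning: O(n^2) -> O(n log n).
    """
    rev = nums2[::-1]                   # non-decreasing copy of nums2
    n, best = len(nums2), 0
    for i, v in enumerate(nums1):
        k = bisect.bisect_left(rev, v)  # first index in rev with rev[k] >= v
        if k < n:
            j = n - 1 - k               # map back into nums2's original indexing
            best = max(best, j - i)     # negative differences are ignored by max
    return best

# One valid pair here is i=2 (value 5) with j=4 (value 5), distance 2.
print(max_distance([55, 30, 5, 4, 2], [100, 20, 10, 10, 5]))
```

Whatever the exact problem in the submission link was, the shape is the same: preprocess so one side is sorted, then replace the inner linear scan with a binary search.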
🚀 Day 2 learning NestJS — and things are clicking fast.

Today was all about the data layer: TypeORM entities, ConfigModule, and wiring up PostgreSQL. Here's what I built and what I learned:

🗂️ Entities & Relationships
Modeled three tables — Users, Products, Reviews — using TypeORM decorators. The relationship structure looks like this:
→ User has many Products and many Reviews
→ Product belongs to a User, has many Reviews
→ Review belongs to both a User and a Product

Defining this in pure TypeScript with @OneToMany and @ManyToOne feels far more maintainable than managing foreign keys manually.

⚙️ ConfigModule
Used @nestjs/config to load environment-specific .env files (.env.development, .env.production) and inject values through ConfigService. Clean separation of config from code — exactly how it should be in production systems.

🔌 TypeOrmModule.forRootAsync()
This is where it all came together. Injecting ConfigService into the TypeORM factory function means the database connection is fully dynamic and environment-aware.

One detail worth highlighting: the synchronize flag auto-migrates your schema in dev — incredibly convenient, but something you explicitly turn off in production.

#NestJS #TypeORM #PostgreSQL #TypeScript #BackendDevelopment #NodeJS #LearningInPublic #SoftwareEngineering #WebDevelopment #Programming
The Data Flow Cycle Every Junior Dev Needs to Know

As a software engineer with just 2 years of experience, I know how confusing the full backend data flow can feel — especially when AI tools hide the fundamentals.

Here's a clear visual of the complete cycle:

Client → Express → Redis → Parallel Fetch → Smart Merge → Final Response

In the AI era, many juniors generate code without understanding JSON.stringify(), middleware, Promise.all(), and shallow merges. This breakdown helped me a lot. Sharing it for fellow junior developers.

Save it. Master the fundamentals first.

#WebDevelopment #Backend #JavaScript #Redis #JuniorDevelopers
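The post's building blocks are JavaScript; as a rough analogue, here is the same parallel-fetch → shallow-merge → serialize cycle sketched in Python, with asyncio.gather standing in for Promise.all() and dict unpacking for a shallow merge (the handlers and field names are made up for illustration):

```python
import asyncio
import json

async def fetch_profile(user_id):
    # Stand-in for a real service or database call.
    return {"id": user_id, "name": "Ada"}

async def fetch_settings(user_id):
    return {"id": user_id, "theme": "dark"}

async def handler(user_id):
    # Parallel fetch: both coroutines run concurrently, like Promise.all().
    profile, settings = await asyncio.gather(
        fetch_profile(user_id), fetch_settings(user_id)
    )
    # Shallow merge: later keys win; nested objects are NOT deep-merged.
    merged = {**profile, **settings}
    # Serialization step, analogous to JSON.stringify().
    return json.dumps(merged)

print(asyncio.run(handler(7)))
```

Knowing that the merge is shallow matters: if both sources return a nested object under the same key, one silently overwrites the other, which is exactly the class of bug generated code hides.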
Leveling up the backend journey!

After mastering core REST APIs and demystifying Spring Security with JWTs in my previous projects, it was time to push the boundaries with complex data relationships and a brand-new database.

What happened:
I recently designed and built "Quill" ~ a fully fledged blogging and article publishing platform. While I had a blast designing the frontend to bring the platform to life, my true mission was under the hood: architecting a scalable backend capable of handling posts, user interactions, and media.

What I learned:
This project was a massive step up in complexity. I engineered the backend with Java and Spring Boot, but this time I leveled up my tech stack:

🔹 PostgreSQL Debut: This was my first time using Postgres, and honestly? It was incredibly fun! Transitioning to it gave me a fresh perspective on database management, and I really enjoyed leveraging its robustness for this project.

🔹 Complex Data Relationships: I went deep into Spring Data JPA, mapping out complex One-to-Many and Many-to-Many relationships across Users, Posts, Comments, and Tags without compromising query performance.

🔹 Multipart File Handling: I stepped out of pure text/JSON data and implemented a custom Image Controller to securely handle, store, and serve multipart file uploads for article cover images.

🔹 Security at Scale: I successfully carried over the custom JWT authentication architecture from my previous "LetsGo" project, applying it to a much larger surface area to protect diverse endpoints, user roles, and content ownership.

Key takeaway:
Building "Quill" taught me that a well-structured database schema is the heartbeat of any good application. Moving to Postgres and handling complex table relations proved that when your backend architecture is solid, scaling the rest of the application feels incredibly natural.

GitHub Link - https://lnkd.in/gxz9a7eY

What was your experience like when you first switched databases, or when you first tried PostgreSQL? Let me know in the comments! 👇

#Java #SpringBoot #PostgreSQL #BackendDevelopment #DatabaseDesign #RESTAPI #ProjectBasedLearning #SoftwareEngineering #LearningInPublic #DeveloperJourney