Your ORM is lying to you about performance.

Every abstraction layer adds cost. Sequelize and TypeORM generate queries you never wrote and often never inspected. When your API slows down, the ORM is usually the first suspect, but most developers never look past it.

Switch to pg and run raw SQL. Then use EXPLAIN ANALYZE directly from Node.js to see exactly what Postgres is doing. Here is a quick example:

```javascript
const { rows } = await pool.query(`
  EXPLAIN ANALYZE
  SELECT u.id, u.name, COUNT(o.id) AS order_count
  FROM users u
  JOIN orders o ON o.user_id = u.id
  GROUP BY u.id
`);
rows.forEach(row => console.log(row['QUERY PLAN']));
```

This gives you real execution time, seq scans, index hits: everything your ORM hides from you.

Practical takeaway: run EXPLAIN ANALYZE on your five most-called endpoints this week. You will likely find at least one full table scan that a single index can eliminate.

Have you ever caught a serious performance issue that your ORM was silently causing?

#NodeJS #PostgreSQL #WebDevelopment #BackendEngineering #DatabasePerformance #SQLOptimization
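When piping EXPLAIN ANALYZE output back into Node like this, a small helper can flag sequential scans automatically. A minimal sketch, assuming the default text plan format (one 'QUERY PLAN' string per row); `findSeqScans` is an illustrative name, demonstrated on a hand-written plan excerpt rather than live output:

```javascript
// Scan EXPLAIN ANALYZE text output for sequential scans.
// planLines: array of strings, one per 'QUERY PLAN' row.
function findSeqScans(planLines) {
  return planLines
    .filter(line => line.includes('Seq Scan'))
    .map(line => line.trim());
}

// Demo with a plan excerpt (shape based on typical Postgres text plans):
const plan = [
  'HashAggregate  (cost=1..2 rows=100) (actual time=2.1..2.3 rows=100 loops=1)',
  '  ->  Seq Scan on orders o  (actual time=0.01..1.2 rows=5000 loops=1)',
];
console.log(findSeqScans(plan));
```

For machine parsing, `EXPLAIN (ANALYZE, FORMAT JSON)` returns a structured plan you can inspect without string matching.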
Ditch Your ORM for Raw SQL and Boost Performance
Your ORM is lying to you about performance, and your users are paying the price.

In high-traffic Node.js services, ORMs add real overhead: query building, object hydration, middleware hooks. For most routes, that's fine. But for hot paths handling thousands of requests per second, it quietly kills throughput.

Before assuming your bottleneck is infrastructure, measure raw SQL performance first. Here's a quick benchmark pattern using pg directly:

```javascript
const { rows } = await pool.query(
  'SELECT id, email FROM users WHERE tenant_id = $1 LIMIT 100',
  [tenantId]
);
```

No model instantiation. No eager loading you didn't ask for. Just data.

Run this against your ORM equivalent under load with autocannon or k6. The throughput delta will often surprise you, sometimes 2x to 3x on complex queries with nested relations.

Practical takeaway: Profile your top five highest-traffic endpoints. If the ORM layer accounts for more than 15% of response time, consider dropping to raw SQL for those specific paths only.

Have you ever benchmarked your ORM versus raw queries in production, and what did you find?

#nodejs #postgresql #backendperformance #webdevelopment #softwaredevelopment #nodejsdeveloper
Your ORM is lying to you about performance.

Every time Sequelize or TypeORM "helps" you query a database, it's generating bloated SQL, running unnecessary JOINs, and adding overhead you never asked for. Handwriting SQL in Node.js isn't nostalgia; it's a measurable latency win.

Here's a real example. Instead of this:

```javascript
const users = await User.findAll({
  where: { active: true },
  include: [Profile]
});
```

Write this:

```javascript
const { rows } = await pool.query(
  'SELECT u.id, u.name, p.bio FROM users u JOIN profiles p ON p.user_id = u.id WHERE u.active = true'
);
```

The second version runs one predictable query. No magic. No hidden N+1 SELECT traps. No model hydration cost.

At scale, these differences compound. Teams switching from ORM-heavy codebases to raw pg or better-sqlite3 queries regularly report 20-40% reductions in average query response time.

Practical takeaway: profile your slowest endpoints first, extract the ORM query, rewrite it in raw SQL, and benchmark before and after. The data will make the decision for you.

Have you benchmarked ORM-generated queries against handwritten SQL in your current project?

#Nodejs #WebDevelopment #BackendDevelopment #SQL #PerformanceOptimization #SoftwareEngineering
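One way to catch hidden N+1 traps like the ones described above is to count round-trips per request. A hedged sketch: `withQueryCounter` is an illustrative name, demonstrated against a stub standing in for `pool.query` rather than a live connection:

```javascript
// Wrap a query function so every call is counted.
// In a test, assert an endpoint issues exactly the number of queries you expect.
function withQueryCounter(queryFn) {
  let count = 0;
  const wrapped = (...args) => {
    count += 1;
    return queryFn(...args);
  };
  wrapped.getCount = () => count;
  return wrapped;
}

// Demo with a stub in place of pool.query:
const stub = async (sql) => ({ rows: [], sql });
const query = withQueryCounter(stub);
query('SELECT 1');
query('SELECT 2');
console.log(query.getCount()); // → 2
```

If that count scales with the number of rows fetched, you have an N+1 problem regardless of which layer generated the SQL.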
Adding an index and hoping for the best isn't database optimization. It's guessing.

I've seen engineers spend an afternoon sprinkling indexes on a slow table, watch query time drop from 4 seconds to 3.8 seconds, and call it done. The real problem was a correlated subquery executing 100 times per request, and no index in the world fixes that.

Here's the workflow that actually works:

Step 1: Run EXPLAIN ANALYZE before touching anything.
Don't optimize what you haven't measured. EXPLAIN ANALYZE shows you the query execution plan: which scan type Postgres chose, how many rows it discarded, and exactly where the milliseconds went. A "Seq Scan" on a 3 million row table means Postgres is reading every single row to find the 312 you asked for. A partial index on (status) WHERE status = 'pending' turned that 2340ms sequential crawl into a 4ms index scan. Same query, same data, 585× faster.

Step 2: Hunt the N+1 problem in your ORM.
This is the silent killer of backend performance. You fetch 100 users. Your ORM fires 1 query for users, then 1 query per user to fetch their orders. That's 101 database round-trips for what should be a single JOIN. Every major ORM has eager loading. Use it: User.findAll({ include: Order }) in Sequelize, select_related() in Django, with() in Laravel. One query. Done.

Step 3: Restructure before you re-index.
The most impactful optimization I've seen in production wasn't an index; it was rewriting a correlated subquery into an INNER JOIN. The subquery was executing once per row in the outer query. The JOIN executes once, period. Execution time went from 3200ms to 6ms. Nothing else changed.

The mental model: indexes help the database find rows faster. Query structure determines how many times it has to look.

Profiling isn't optional at scale. Every engineer who touches a database query should know how to read a query plan. Save this. Profile before you optimize.

♻️ Repost if your team is still guessing instead of measuring.
#DatabaseOptimization #PostgreSQL #BackendDevelopment #SQLPerformance #SoftwareEngineering #SystemDesign #QueryOptimization #N1Problem #ORM #TechLeadership #WebPerformance #ScalableArchitecture #DevTips #Programming #DataEngineering
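The correlated-subquery rewrite in the post's Step 3 can be sketched in SQL. The table and column names below are hypothetical (the post doesn't show its schema), but the shape of the transformation is the point:

```sql
-- Before: the inner SELECT runs once per outer row.
SELECT u.id,
       (SELECT MAX(o.created_at)
        FROM orders o
        WHERE o.user_id = u.id) AS last_order
FROM users u;

-- After: one join, one aggregation pass over the data.
SELECT u.id, MAX(o.created_at) AS last_order
FROM users u
LEFT JOIN orders o ON o.user_id = u.id
GROUP BY u.id;
```

Run EXPLAIN ANALYZE on both forms; the planner can sometimes de-correlate the first version itself, so measure before assuming the rewrite is needed.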
Our API was slow, so we almost scaled everything.

We had a simple GET API in a Spring Boot service handling just ~50 requests per second: fetch orders for a certain user.
- Order → name
- OrderDetails → price, offers, discount

Only ~100K records in the Postgres DB, and still the API took seconds. Grafana showed high latency. We were scratching our heads 😕

Everything seemed right:
- One-to-many relationships
- Clean design
- Code review approved

Production: nice try 😏

So we did what engineers do:
- Thought about scaling
- Considered adding a cache
- Increased DB connections and prayed 😌

But then we decided to dig deeper. And we found where the problem really lay:

👉 N+1 queries
- 1 query → fetch orders
- N queries → fetch order details

Our API kept querying the database like it had some kind of vendetta against it.

And here's how we solved it:

👉 @EntityGraph
We told JPA upfront: fetch both Order and OrderDetails in one join query.

The results:
❌ Hundreds of queries
✅ 1 clean JOIN query

Now our API is fast. Grafana is not mad anymore. The database stopped hating us.

Lesson: We didn't have a scaling problem. We had a query problem.

🔑 Takeaways:
👉 Use @EntityGraph / @NamedEntityGraph
👉 Use JOIN FETCH when needed
👉 Don't trust default ORM behavior

💬 Do you have stories of chasing a scaling issue, only to find out it was a query problem instead?

#java #springboot #hibernate #backendengineering #systemdesign #performanceoptimization #database #devlife #techhumor #engineeringhumor
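The fix described above collapses N+1 round-trips into a single join. Roughly the query shape @EntityGraph produces in a case like this, with assumed table and column names (the post doesn't show its actual schema):

```sql
-- One round-trip instead of 1 + N.
SELECT o.id, o.name, d.id, d.price, d.offers, d.discount
FROM orders o
LEFT JOIN order_details d ON d.order_id = o.id
WHERE o.user_id = ?;
```

Hibernate typically uses a LEFT OUTER JOIN for graph-fetched associations so that orders without details are still returned.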
Day 2/60: Production Infrastructure That Actually Scales

What Most Developers Do:
Start with SQLite. Hardcode credentials. Skip migrations. Write blocking database calls. Wonder why it breaks at 10K users.

What I Built Today:
✅ Async SQLAlchemy 2.0 with connection pooling
✅ Docker Compose (PostgreSQL + Redis + backend)
✅ Alembic migration system with rollback
✅ Database health checks and monitoring
✅ Multi-stage Docker builds (40% smaller images)
✅ Development scripts (init, validate, wait-for-db)
✅ 31 tests, 100% coverage on the database layer

Technical Decisions:
- Async everything: non-blocking I/O handles 100 concurrent users on a single thread
- Connection pooling: QueuePool (5+10) for PostgreSQL, NullPool for SQLite
- Health checks: pg_isready with retry logic; services wait for their dependencies
- Type safety: mypy --strict passes; Mapped[T] catches bugs at type-check time

Architecture Highlight:
A DatabaseManager singleton manages the lifecycle. Session context managers handle transactions. Automatic rollback on errors. Zero connection leaks.

Why It Matters:
Technical debt is a choice. Building for 10K users from day one means adding workers when growth comes, not rewriting the database layer.

What's Working:
```
docker-compose up -d → All services healthy
pytest → 31/31 tests passing
Database connection → ✅ Validated
```

Metrics:
- 11 new files
- 1,800 lines of production code
- 600 lines of documentation (DATABASE.md)
- 100% test coverage on new code
- 0 linting errors

Day 3 Tomorrow: Database models (User, Organization, Channel, Post). First Alembic migration. Schema design for ML features.

Buffer - Building a solid foundation for your API ecosystem. Would love to connect.

Repository: https://lnkd.in/g8pdgJvM
Medium Blog: https://lnkd.in/gRrs6WaR

#BufferIQ #BuildingInPublic #DatabaseEngineering #Docker #Python #PostgreSQL #SQLAlchemy #SoftwareArchitecture #Buffer
156 SQL migrations and no backend server.

That's what the mydba.dev architecture looks like after a year of development. Zero backend application code. Every API endpoint is a PostgreSQL function.

The stack is almost absurdly simple:
• React + TypeScript frontend (Vercel)
• PostgREST auto-generates REST endpoints from the database schema
• Clerk JWTs validated via JWKS
• Row-level security handles authorization
• A Go collector writes metrics directly to PostgreSQL

Adding a new API endpoint means writing `CREATE FUNCTION` in a SQL migration file. Not a route handler. Not a controller class. Just SQL.

𝗪𝗵𝗮𝘁 𝘄𝗼𝗿𝗸𝘀 𝗿𝗲𝗮𝗹𝗹𝘆 𝘄𝗲𝗹𝗹:

Deployment simplicity. There's no backend to deploy, scale, or monitor. The frontend ships via `git push`. Database changes ship via migration files. That's the entire deployment process.

Performance is excellent. PostgREST is fast, and PostgreSQL functions with proper indexes are fast. No ORM overhead, no serialization layers, no N+1 query problems. The database IS the truth.

𝗪𝗵𝗮𝘁'𝘀 𝗴𝗲𝗻𝘂𝗶𝗻𝗲𝗹𝘆 𝗵𝗮𝗿𝗱:

Debugging SQL functions is painful compared to stepping through Python or Go. Stack traces are cryptic. Testing is awkward: you're essentially writing integration tests against a real database.

Schema migrations on compressed TimescaleDB hypertables are a special kind of adventure. You can't just ALTER TABLE casually when you have columnar compression enabled. I've built patterns around it (renaming tables, security-barrier views, careful migration ordering), but it's complexity that a normal backend wouldn't have.

There's no middleware layer. Cross-cutting concerns like request logging, rate limiting, and input validation all need creative solutions. Some of those solutions are elegant. Some are ugly. All of them live in SQL.

Would I do it again? Absolutely. But I'd invest in better migration tooling earlier. And I'd accept from day one that some things are just harder in SQL, and that's a worthwhile tradeoff for the simplicity you get everywhere else.

Anyone else running a PostgREST-only architecture in production? I'd love to compare notes.

#PostgreSQL #PostgREST #BuildingInPublic #Architecture #SoftwareEngineering
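In this architecture, adding an endpoint is one migration file. A hedged sketch of such a function (the name and schema are assumptions for illustration, not taken from mydba.dev); PostgREST exposes a function like this at POST /rpc/active_user_count:

```sql
CREATE FUNCTION active_user_count(p_tenant_id bigint)
RETURNS bigint
LANGUAGE sql STABLE
AS $$
  SELECT count(*)
  FROM users
  WHERE tenant_id = p_tenant_id AND active;
$$;
```

Marking the function STABLE lets the planner cache its result within a statement; row-level security on the underlying tables still applies to the caller's role.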
Class 35: #chaicode web dev cohort

Lots of things learned in the SQL class. Here are my learnings:
- SQL vs NoSQL databases
- What is an ORM?
- Creating tables
- Datatypes
- Constraints
- Aggregate functions
- GROUP BY clause
- Pattern matching
- Logical operators
- Filtering rows
- Sorting
- Pagination using LIMIT and OFFSET
- Execution order of a query

Hitesh Choudhary, Piyush Garg, Akash Kadlag, Jay Kadlag, Suraj Kumar Jha, Chai Code
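One of the topics listed, pagination with LIMIT and OFFSET, looks like this in Postgres; the table and columns are made up for illustration:

```sql
-- Page 3 at 20 rows per page: skip the first 40 rows.
SELECT id, name
FROM products
ORDER BY id
LIMIT 20 OFFSET 40;
```

An ORDER BY is essential here: without it, the database makes no guarantee about which rows land on which page.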
PostgREST might be the most underrated backend tool right now. Turn your Postgres DB into a production-ready REST API. No controllers, no ORM, no backend code. Just SQL + RLS.
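A minimal sketch of the "SQL + RLS" combination, assuming a hypothetical todos table whose owner_id column stores the JWT sub claim as text (PostgREST publishes verified claims under the request.jwt.claims setting):

```sql
ALTER TABLE todos ENABLE ROW LEVEL SECURITY;

-- Authenticated users see only their own rows.
CREATE POLICY todos_owner ON todos
  USING (owner_id = current_setting('request.jwt.claims', true)::json ->> 'sub');
```

With a policy like this in place, every auto-generated endpoint for the table is scoped to the caller without any application code.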
I reduced an API response from ~3.8s to ~40ms without changing application code. The fix? One composite index.

Here's the indexing strategy most backend developers skip:

𝗪𝗵𝗮𝘁 𝗮𝗰𝘁𝘂𝗮𝗹𝗹𝘆 𝘀𝗹𝗼𝘄𝘀 𝘆𝗼𝘂𝗿 𝗗𝗕:
→ Full table scans on WHERE clauses
→ Missing indexes on JOIN columns
→ SELECT * instead of specific columns
→ N+1 queries masquerading as "features"

𝗧𝗵𝗲 𝗰𝗼𝗺𝗽𝗼𝘀𝗶𝘁𝗲 𝗶𝗻𝗱𝗲𝘅 𝗿𝘂𝗹𝗲 𝗻𝗼𝗯𝗼𝗱𝘆 𝘁𝗲𝗮𝗰𝗵𝗲𝘀:
Index column order matters. Put the most selective column first (highest cardinality) AND match the index order with your WHERE clause.

Example: WHERE user_id = ? AND status = ?
→ index on (user_id, status), NOT (status, user_id)

Bonus: if your query has ORDER BY, include that column in the index to avoid an extra sort step.

𝗪𝗵𝗲𝗻 𝗡𝗢𝗧 𝘁𝗼 𝗶𝗻𝗱𝗲𝘅:
→ Small tables (often <10k rows)
→ Write-heavy tables
→ Low-cardinality columns (booleans, a status with few values)

𝗛𝗲𝗿𝗲'𝘀 𝗲𝘅𝗮𝗰𝘁𝗹𝘆 𝗵𝗼𝘄 𝗜 𝗱𝗲𝗯𝘂𝗴𝗴𝗲𝗱 𝗶𝘁:
1. Run EXPLAIN ANALYZE on your slowest queries
2. Look for "Seq Scan" or high "Rows Removed by Filter"
3. Add indexes strategically, not blindly

Performance is free. You just have to know where to look.

#Backend #Database #PostgreSQL #Performance #SQL #SoftwareEngineering
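The ordering rule above, sketched as DDL with hypothetical table and column names; the DESC in the index only pays off if the query sorts newest-first:

```sql
-- Serves: WHERE user_id = ? AND status = ? ORDER BY created_at DESC
CREATE INDEX idx_orders_user_status_created
  ON orders (user_id, status, created_at DESC);
```

Because both equality columns lead the index and the sort column follows, Postgres can satisfy the filter and the ORDER BY with a single index scan and no separate sort step.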
ORMs are not free. Every abstraction layer costs you query performance, and most Node.js developers never bother to measure it.

Prepared statements in raw SQL give you predictable execution plans, reduced parsing overhead, and direct control over what hits your database. Here is a real example using the pg library:

```javascript
const result = await client.query(
  'SELECT id, name FROM users WHERE status = $1 AND created_at > $2',
  ['active', cutoffDate]
);
```

That single change, switching from an ORM-generated query to a hand-written parameterized query, can reduce query latency by 30-60% under load, depending on your schema complexity.

The ORM is not the enemy. Blindly trusting it without profiling is.

Practical takeaway: Before your next performance review, run EXPLAIN ANALYZE on your five most frequent ORM-generated queries. You will likely find at least one full table scan hiding in plain sight.

Are you currently profiling your database queries in production, or are you still flying blind?

#Nodejs #PostgreSQL #BackendDevelopment #WebPerformance #DatabaseOptimization #SoftwareEngineering
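One detail worth knowing: node-postgres only prepares a statement and reuses its plan per connection when the query config carries a name; bare $1 placeholders are sent as one-shot parameterized queries. A sketch, where `activeUsersSince` is an illustrative helper, not a pg API:

```javascript
// Build a named prepared-statement config for node-postgres.
// Reusing the same name lets pg prepare once per connection and
// only send new parameter values on subsequent calls.
function activeUsersSince(cutoffDate) {
  return {
    name: 'active-users-since',
    text: 'SELECT id, name FROM users WHERE status = $1 AND created_at > $2',
    values: ['active', cutoffDate],
  };
}

// Usage sketch:
//   const result = await client.query(activeUsersSince(cutoffDate));
```

This matters most for hot-path queries executed thousands of times per second, where repeated parse/plan work becomes measurable.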