Your ORM LIES and your database DIES… Prisma. Sequelize. TypeORM. Great DX. Terrible SQL. Here's why 👇

You write this:
await db.user.findMany({ include: { posts: true } })

Clean, right? Under the hood, your ORM can fire:
→ 1 query to fetch all users
→ 1 query per user to fetch their posts

50 users = 51 queries. 500 users = 501 queries.

This is the N+1 problem, and ORMs generate it silently, constantly.

More ORM traps that wreck performance:
→ Lazy loading pulling entire relations you never use
→ No query batching by default
→ Generated SQL with unnecessary subqueries and redundant joins
→ Zero awareness of your index structure

The scary part? Your ORM abstracts the SQL away, so you never see the damage. Most devs only find this during a production incident. By then, the query has run millions of times.

Raw SQL isn't always the answer. But understanding what your ORM actually generates is non-negotiable.

Dharmops shows you the real query behind your code and tells you exactly what's wrong. No guessing. No log diving at midnight.

→ Diagnose your queries free: https://lnkd.in/dYGfeSmt

Are you using an ORM in prod? Which one? 👇

#ORM #Prisma #DatabaseOptimization #BackendDevelopment #NodeJS #QueryPerformance #Dharmops #SoftwareEngineering #DevTools #TechFounders
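To make the 51-vs-2 query math concrete, here is a small, ORM-agnostic Python simulation (not Prisma's actual internals) that counts database round trips under lazy per-user loading versus a single batched fetch:

```python
# Minimal simulation of the N+1 pattern: each database "round trip"
# increments a counter so the cost difference is visible.
users = [{"id": i} for i in range(1, 51)]                # 50 users
posts = {u["id"]: [f"post-{u['id']}"] for u in users}    # their posts

def run_query(counter, result):
    counter["queries"] += 1                              # one round trip per call
    return result

# Lazy loading: 1 query for users + 1 query per user for posts.
lazy = {"queries": 0}
fetched = run_query(lazy, users)
for u in fetched:
    u["posts"] = run_query(lazy, posts[u["id"]])
print(lazy["queries"])                                   # 51 round trips

# Eager/batched loading: 1 query for users + 1 IN (...) query for all posts.
eager = {"queries": 0}
fetched = run_query(eager, users)
ids = [u["id"] for u in fetched]
all_posts = run_query(eager, {i: posts[i] for i in ids})
for u in fetched:
    u["posts"] = all_posts[u["id"]]
print(eager["queries"])                                  # 2 round trips
```

Eager loading (include in Prisma and Sequelize, the relations option in TypeORM) is what collapses the per-user loop into the batched second query.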
ORMs silently generate N+1 queries, hurting performance
Most data analysts on my team spent more time writing SQL than actually analysing data. So I built a fix — without touching our existing Superset setup.

It's called a Text-to-SQL Sidecar: a standalone FastAPI microservice that sits alongside Apache Superset and turns plain English into validated, safe SQL. You ask: "which products had the highest return rate last quarter?" It generates, validates, and executes the SQL — then hands the results back.

A few things I was deliberate about:
→ AST-level SQL validation (not string matching — trivially bypassable)
→ Per-database table allowlists so the LLM can only touch what it's supposed to
→ Schema caching so we're not hammering the DB on every request
→ LLM-agnostic design — swap the endpoint URL, change the model
→ Reasoning traces returned alongside SQL so analysts can actually trust the output

Superset never needs to know it exists. It just receives SQL.

I wrote up the full implementation — architecture, code walkthrough, and the design decisions that make it production-ready. Link in the comments 👇

#DataEngineering #AI #SQL #FastAPI #ApacheSuperset #LLM #Python
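The schema-caching bullet is easy to sketch. Assuming a hypothetical loader function that introspects the database, a small TTL cache keeps per-request traffic off the DB (illustrative only, not the post's actual code):

```python
import time

# Hypothetical TTL cache for database schema metadata, so each
# text-to-SQL request doesn't re-introspect the database.
class SchemaCache:
    def __init__(self, loader, ttl_seconds=300):
        self._loader = loader           # function that actually queries the DB
        self._ttl = ttl_seconds
        self._cache = {}                # db_name -> (fetched_at, schema)

    def get(self, db_name):
        entry = self._cache.get(db_name)
        if entry and time.monotonic() - entry[0] < self._ttl:
            return entry[1]             # fresh enough: no DB round trip
        schema = self._loader(db_name)  # expensive introspection query
        self._cache[db_name] = (time.monotonic(), schema)
        return schema

# Usage: the loader runs once per TTL window, not once per request.
calls = {"n": 0}
def fake_loader(db_name):
    calls["n"] += 1
    return {"tables": ["products", "returns"]}

cache = SchemaCache(fake_loader, ttl_seconds=60)
cache.get("analytics")
cache.get("analytics")
print(calls["n"])   # 1: the second call is served from cache
```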
TerSQL v0.0.2 (beta) is live — and this is where things start getting serious. 🚀

What began as a better MySQL terminal is now evolving into something bigger:
👉 A SQL interface built for *humans*, not just developers.

🧠 The problem hasn’t changed: Databases are powerful — but interacting with them is still painful.
• Beginners struggle with syntax
• Developers waste time debugging queries
• One mistake can still break things

⚡ What’s new in v0.0.2 (beta)
This update focuses on **making databases more intuitive, not just more powerful**:

✨ Natural-language style queries
→ Type: *“show top 5 users”*
→ TerSQL auto-corrects to real SQL

🧩 Modular architecture
→ Clean pipeline: NLP → Core → Plugin Router → DB
→ Designed for extensibility across multiple databases

🌐 Multi-database support
→ MySQL · PostgreSQL · MongoDB

🛡️ Improved safety layer
→ Query validation + guardrails before execution

🎯 Interactive demo + full landing page
→ Visualise how queries transform and execute

🧠 What makes TerSQL different?
This is NOT:
❌ Another database
❌ Another GUI client
It’s an **interaction layer** on top of your existing database. No migration. No complexity. Just a better way to work with data.

🔮 Where this is going
TerSQL is moving toward:
→ AI-assisted query generation
→ Query explanation (human-readable)
→ Smarter error correction
→ Developer + beginner unified experience

💡 Why I’m building this
I don’t think databases should feel intimidating. If you can *think it*, you should be able to *query it*.

🌐 Try it out
Live: https://lnkd.in/gxbpNz5j
GitHub: https://lnkd.in/g2x5sSTp
If you find it interesting, a ⭐ would mean a lot.

💬 I’d love your thoughts: Would you actually use natural language for querying databases? Or do you still prefer raw SQL?

#opensource #ai #sql #python #developerexperience #devtools #databases #buildinpublic #systemdesign #machinelearning #backend #programming #techinnovation
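As a toy illustration of the NLP → Core → validation idea, here is a hypothetical pattern-based translator with a guardrail before execution (TerSQL's real pipeline is certainly more sophisticated; every name below is invented):

```python
import re

# Toy "natural language -> SQL" translation plus a guardrail,
# mirroring the NLP -> Core -> validation -> DB pipeline shape.
PATTERNS = [
    # "show top 5 users" -> SELECT * FROM users LIMIT 5
    (re.compile(r"show top (\d+) (\w+)"), r"SELECT * FROM \2 LIMIT \1"),
]

ALLOWED_TABLES = {"users", "orders"}

def to_sql(text):
    for pattern, template in PATTERNS:
        m = pattern.fullmatch(text.strip().lower())
        if m:
            return m.expand(template)
    raise ValueError(f"no translation for: {text!r}")

def validate(sql):
    # Guardrail: read-only queries against allowlisted tables only.
    if not sql.upper().startswith("SELECT"):
        raise PermissionError("only SELECT is allowed")
    table = sql.split(" FROM ")[1].split()[0].lower()
    if table not in ALLOWED_TABLES:
        raise PermissionError(f"table {table!r} not allowlisted")
    return sql

sql = validate(to_sql("show top 5 users"))
print(sql)   # SELECT * FROM users LIMIT 5
```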
Do you know the difference between a static default and a dynamic callable in your ORM? It’s a small distinction in code that makes a massive difference in your database. 🚀

📍 Static Defaults
These are defined once when the model is initialized. Every new record gets the exact same value.
Use case: Setting a starting status (e.g., status='draft') or a counter starting at 0.

📍 Dynamic Defaults (Callables)
These are calculated at the moment the record is created. By passing a function (like a lambda or a method), the ORM executes that logic for every single insert.
Use case: Timestamps (datetime.now), UUIDs, or record-specific tokens.

⚠️ The Common Trap: One of the most frequent bugs is passing default=datetime.now() (with parentheses) instead of default=datetime.now.
With (): the time is captured once, when the model module is imported (typically at server start). Every record will have the same timestamp until you restart the service!
Without (): the ORM calls the function fresh for every new entry.

Check out the infographic below for a side-by-side comparison using SQLAlchemy examples!

#Python #ORM #SQLAlchemy #Odoo #OdooDevelopment #BackendDevelopment #CleanCode #SoftwareEngineering #DatabaseDesign #ProgrammingTips #WebDevelopment #BackendEngineering #PythonDev #CodingBestPractices #ERP #FullStackDeveloper
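The trap is plain Python semantics, not SQLAlchemy magic. Here is a minimal sketch of how an ORM-style default resolver treats a callable versus an already-called value, using a counter instead of datetime.now so the output is deterministic:

```python
import itertools

# Minimal sketch of ORM default resolution: a callable default is
# invoked per insert; a plain value was computed once and is reused.
def create_record(default):
    return default() if callable(default) else default

counter = itertools.count(1)

# Like default=datetime.now() -- evaluated ONCE, at definition time.
static_default = next(counter)
a = create_record(static_default)
b = create_record(static_default)
print(a, b)   # 1 1  -- every record gets the same value

# Like default=datetime.now -- the resolver calls it fresh per insert.
dynamic_default = lambda: next(counter)
c = create_record(dynamic_default)
d = create_record(dynamic_default)
print(c, d)   # 2 3  -- each record gets a new value
```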
⚡Hey .NET Friends – PostgreSQL Just Joined the FunkyORM Party...

After a few decades of wrestling .NET data-access layers (and watching frameworks come and go), we at Funcular Labs still believe the best tool is the one you barely notice... the one that just works. That's why we're genuinely excited to announce full PostgreSQL support in Funcular.Data.Orm—better known around here as FunkyORM—now live in version 3.1.0.

You already know the drill if you've used our ORM with SQL Server: zero-configuration POCO mapping, lambda-powered LINQ queries that generate clean parameterized SQL, remote properties that flatten your object graphs without the N+1 tax, and performance that often leaves heavier ORMs in the dust. No DbContext boilerplate. No XML mapping files. Just your plain classes and the queries you’d write anyway.

Now the same delightful API works seamlessly with Postgres. Our new PostgreSqlOrmDataProvider (built on Npgsql) handles all the dialect differences transparently—double-quoted identifiers for reserved words, LIMIT/OFFSET paging, RETURNING clauses on inserts, native BOOLEAN columns... everything you’d expect. You can literally swap providers and keep your entity classes untouched (yes, even the ones with [RemoteProperty] and [RemoteKey] magic).

Whether you’re starting fresh or migrating an existing FunkyORM project, the experience is identical. And because we’re still the same lightweight micro-ORM at heart, you get the speed you love without giving up type safety or developer happiness.

Want to give it a spin? Head over to the NuGet package and pull in the latest bits: https://lnkd.in/gMfmJWhc

We’d love for you to try it out on your next (or current) Postgres project. Drop the provider in, point it at your database, run a few queries... then swing by the comments and tell us how it feels. Did it save you time? Make your code cleaner? Surprise you in a good way? We read every note.

Here’s to simpler, faster data access—no matter which database you call home.

#dotnet #csharp #postgresql #orm #microorm #FuncularLabs
Running OSRM in production is not just docker-compose up. Here is the full pipeline — and the two things the documentation doesn't make obvious.

The stack: The core is the official OSRM Docker image (https://lnkd.in/e2-U6Rz8) — the Project-OSRM team maintains it and recommends it for exactly this kind of deployment. On top of that, I built the pieces that make it production-ready:
→ An OpenStreetMap data pipeline using Venezuela road data from Geofabrik (~107 MB, updated daily)
→ A preprocessing step that builds the routing graph entirely offline
→ A Python HTTP client that replaced every Google Maps Distance Matrix call in my codebase

End-to-end, the pipeline looks like this:
1. Download venezuela-latest.osm.pbf from Geofabrik — free, community-maintained, updated daily
2. osrm-extract with the car.lua routing profile — converts OSM data to OSRM's internal graph format (2–5 min for Venezuela)
3. osrm-contract — builds the Contraction Hierarchy. This is the slow step: 20–60 minutes for Venezuela. But it runs completely offline. It never touches the request path.
4. osrm-routed --algorithm ch — starts the HTTP API on port 5000

Our Python client calls /table with origin/destination pairs and returns a distance + duration matrix.

Total infrastructure: a single t3.medium EC2 instance (~$33/month on-demand). That is the cost of replacing thousands of monthly distance matrix API calls.

Two things the documentation doesn't make obvious:

First — memory. Venezuela's .osm.pbf is only 107 MB, but the Contraction Hierarchy graph loaded by osrm-routed requires ~2.5–3 GB of RAM. A t3.medium (4 GB) is the minimum viable instance. Trying to run this on 2 GB will fail quietly.

Second — CH vs MLD. OSRM has two preprocessing pipelines. For distance matrix use cases, the OSRM documentation explicitly recommends Contraction Hierarchies (CH) over Multi-Level Dijkstra (MLD). CH has slower preprocessing but faster query time — the right tradeoff for high-volume distance matrix workloads.

That preprocessing step — osrm-contract — is where all the magic happens. It implements Contraction Hierarchies: a graph algorithm that makes it possible to answer routing queries on a full country road network in under 1 ms. Next week: I'll explain the graph theory behind why this works so well at scale.

Full docker-compose setup, AWS CLI deployment guide, and Python client on GitHub:
👉 https://lnkd.in/eybfWzse
Read it before you try to deploy OSRM yourself — especially the AWS walkthrough.
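A minimal version of that /table client can be sketched with the standard library (the host, port, and coordinates below are assumptions; OSRM expects longitude-first coordinate pairs):

```python
import json
import urllib.request

# Minimal client for OSRM's /table service. Coordinates are
# (longitude, latitude) pairs -- note the lon-first order OSRM expects.
def table_url(coords, host="http://localhost:5000", profile="driving"):
    pairs = ";".join(f"{lon},{lat}" for lon, lat in coords)
    return f"{host}/table/v1/{profile}/{pairs}?annotations=duration,distance"

def distance_matrix(coords, host="http://localhost:5000"):
    # Returns OSRM's duration (seconds) and distance (meters) matrices.
    with urllib.request.urlopen(table_url(coords, host)) as resp:
        body = json.load(resp)
    return body["durations"], body["distances"]

# Caracas and Valencia, roughly, as (lon, lat):
url = table_url([(-66.90, 10.49), (-68.00, 10.16)])
print(url)
```

Swapping this in for a hosted distance-matrix API is mostly a matter of matching the response shape your callers already expect.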
Your ORM is lying to you. Silently.

You write clean code. No raw SQL, no messy loops that look suspicious. You test it locally, it works perfectly. You push to production, and suddenly your endpoint is crawling.

What happened? The N+1 query problem. And your ORM helped you write it without raising a single warning.

Here is how it happens. You fetch a list of users. That is 1 query. Then somewhere in your code, for every user you fetched, you access their orders or their profile or their posts. Your ORM quietly goes back to the database for each one. 1000 users? That is 1001 database calls for what should have been a single operation.

The scary part is how innocent the code looks. In TypeORM you write user.orders inside a loop and it feels natural. In Sequelize you forget to add include to your query. In Prisma you access a relation outside of the initial include block. No errors. No warnings. Just silent, repeated database hits.

And locally? You have maybe 10 users in your database. 11 queries feels fast. Nothing seems wrong. So you ship it. Production has 10,000 users. Now you have 10,001 queries firing for a single request. Your database buckles. Users complain the app is slow. You have no idea why, because the code looks fine.

The fix is not complicated. Most ORMs solve this with eager loading: fetching related data together in one query instead of one at a time. But the more important habit is this: enable query logging in development. TypeORM has logging: true. Sequelize has a logging option you can point at your logger. Prisma has log: ['query'] in the client config. Turn it on. Watch what SQL your app is actually generating. The moment you see 1000 queries fire for one endpoint hit, you will never write that code the same way again.

Your ORM is a powerful tool. But it will not protect you from yourself. You have to know what is happening underneath.

The image below inspired my post.
Together, let's code our way to greatness. Stay Liquid 💧
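The query-logging habit can even be automated. Here is a hypothetical, framework-agnostic sketch (not a real TypeORM/Sequelize/Prisma API) of a per-request query counter that flags likely N+1 patterns:

```python
# Hypothetical per-request query monitor: route your DB layer's
# query log through it, and warn when one request fires
# suspiciously many queries.
class QueryMonitor:
    def __init__(self, threshold=20):
        self.threshold = threshold
        self.queries = []

    def record(self, sql):
        self.queries.append(sql)

    def report(self):
        n = len(self.queries)
        if n > self.threshold:
            return f"WARNING: {n} queries in one request -- likely N+1"
        return f"OK: {n} queries"

# Simulated request handler: 1 users query + 1 query per user.
monitor = QueryMonitor(threshold=20)
monitor.record("SELECT * FROM users")
for user_id in range(50):
    monitor.record(f"SELECT * FROM orders WHERE user_id = {user_id}")
print(monitor.report())   # WARNING: 51 queries in one request -- likely N+1
```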
Why ORMs Kill Performance (If You Don’t Understand Query Generation)

Most MERN devs trust ORM magic. But ORMs:
→ Generate N+1 queries silently
→ Create inefficient joins
→ Hide query plans
→ Accidentally disable index usage

Example:
User.findAll({ include: Orders })

This can produce:
→ 1 query for users
→ N queries for orders

At scale, that becomes catastrophic.

Production ORM strategies:
→ Always inspect generated SQL
→ Prefer explicit joins over lazy loading
→ Limit selected columns (attributes)
→ Use raw queries for hot paths
→ Add query-level timeouts

ORMs improve productivity — but only if you understand the SQL they generate.
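Here is the "explicit joins + limited columns" strategy in raw SQL terms, using sqlite3 so the demo is self-contained (the schema is illustrative, not Sequelize's generated SQL):

```python
import sqlite3

# One explicit JOIN selecting only the needed columns replaces the
# users-then-orders-per-user pattern with a single query.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT, bio TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'Ada', '...'), (2, 'Lin', '...');
    INSERT INTO orders VALUES (1, 1, 9.5), (2, 1, 20.0), (3, 2, 7.0);
""")

# Only the columns the endpoint needs -- no SELECT *, no lazy loads.
rows = conn.execute("""
    SELECT u.name, SUM(o.total) AS spent
    FROM users u
    JOIN orders o ON o.user_id = u.id
    GROUP BY u.id
    ORDER BY u.id
""").fetchall()
print(rows)   # [('Ada', 29.5), ('Lin', 7.0)]
```

The Sequelize equivalent of the column-limiting part is the attributes option on findAll.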
📻 𝗦𝗲𝗾𝘂𝗲𝗹𝗶𝘇𝗲 𝗧𝗿𝗮𝗻𝘀𝗮𝗰𝘁𝗶𝗼𝗻𝘀

A transaction is a group of operations that either:
✅ All succeed (commit)
❌ All fail (rollback)
No partial updates.

🎹 𝗧𝘆𝗽𝗲𝘀 𝗼𝗳 𝗧𝗿𝗮𝗻𝘀𝗮𝗰𝘁𝗶𝗼𝗻𝘀 𝗶𝗻 𝗦𝗲𝗾𝘂𝗲𝗹𝗶𝘇𝗲

1️⃣ 𝗠𝗮𝗻𝗮𝗴𝗲𝗱 (𝗥𝗲𝗰𝗼𝗺𝗺𝗲𝗻𝗱𝗲𝗱)

await sequelize.transaction(async (t) => {
  // run queries with { transaction: t };
  // auto commit on success, auto rollback on throw
});

♦️ Cleaner
♦️ Less error-prone

2️⃣ 𝗨𝗻𝗺𝗮𝗻𝗮𝗴𝗲𝗱

const t = await sequelize.transaction();
try {
  await User.create(data, { transaction: t });
  await t.commit();
} catch (err) {
  await t.rollback();
}

♦️ Use when you need fine-grained control

💡 𝗪𝗵𝗶𝗰𝗵 𝗼𝗻𝗲 𝘁𝗼 𝗰𝗵𝗼𝗼𝘀𝗲
♦️ Use Managed by default
♦️ Use Unmanaged when you really need control

👉 We’ll dive deeper into 𝗙𝗮𝘀𝘁𝗶𝗳𝘆 𝗣𝗹𝘂𝗴𝗶𝗻𝘀 in the upcoming posts. Stay tuned!! 🔔 Follow Nitin Kumar for daily valuable insights on LLD, HLD, Distributed Systems and AI. ♻️ Repost to help others in your network.

#javascript #node #sequelize #sql #mysql
I used to think ORMs were the best thing ever. You define a model, write a few lines, and boom, data flows in and out like magic. No messy SQL, no headaches. Just clean, fast development.

Early on, it felt perfect:
– Queries were simple
– Features shipped quickly
– The codebase stayed neat

Then the system grew. Suddenly, things felt… off. A page that used to load instantly started lagging. Queries looked simple in code, but under the hood? Not so simple anymore. Debugging performance felt like chasing a ghost.

That’s when it hit me. The ORM wasn’t helping anymore, it was hiding the problem. In one project, complex queries turned the abstraction into the bottleneck. We weren’t in control, the ORM was.

So we changed our approach. We didn’t throw it away. We just got smarter about it:
✔️ Use the ORM where it shines (simple operations)
✔️ Step outside it when performance actually matters
✔️ Look at real query behavior, not what the code seems to do

Because at the end of the day… ORMs are just tools, not the system itself. And the real skill? Knowing when to stop relying on the magic.

Where do you draw the line between ORM and raw SQL?

#BackendDevelopment #SoftwareArchitecture #ORM #DatabaseDesign #SQL #PerformanceOptimization #ScalableSystems #SystemDesign #CleanCode #TechDiscussion
Web scraping is straightforward—until the data stops showing up.

Lately, I’ve been heads-down on my Wyalusing Bridge Impact Study. This week provided a masterclass in why data engineering is often 20% ingestion and 80% error handling. The real world rarely fits into a perfect schema on the first try.

My local analysis has officially evolved into a Resilient Regional Corridor Study. I realized early on that looking at Wyalusing in isolation was too narrow. To truly understand the impact, I’ve expanded the pipeline to track the entire Route 6 Corridor, including Towanda and Tunkhannock. These towns are part of the same shifting economic ecosystem.

This expansion required engineering for "data deserts" where local stations weren't reporting daily prices. Instead of letting the pipeline fail, I redesigned the system into a Hybrid Ingestion Model. I’m now utilizing Playwright for browser automation to handle JavaScript-rendered content that traditional scrapers missed. This is paired with a PostgreSQL-backed audit ledger in Supabase for manual overrides. A central orchestrator merges these sources into a single, synchronized time-series log.

The methodology has also moved toward a more objective benchmark. Rather than trying to prove a price hike, I’m inspecting the "Local Premium." I calculate this by subtracting the Pennsylvania state average from our corridor’s daily prices (Local Price - State Average = Impact). This allows me to test whether logistical detours create a measurable economic delta or if the market remains stable.

The infrastructure is now stable and cloud-native. Next up is orchestrating 24/7 automation with GitHub Actions. It's been an incredible deep dive into building data pipelines that are resilient.

#DataEngineering #PostgreSQL #Playwright #Python #WGU #DataAnalytics #BridgeImpactStudy #BuildInPublic
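The "Local Premium" benchmark reduces to a per-day subtraction. A minimal sketch, with invented prices and a guard for data-desert days where no state figure exists:

```python
# Local Premium = local corridor price minus the state average,
# computed per day. Dates and prices here are invented examples.
def local_premium(local_prices, state_averages):
    return {
        day: round(local_prices[day] - state_averages[day], 3)
        for day in local_prices
        if day in state_averages   # skip "data desert" days with no state figure
    }

local = {"2025-01-01": 3.459, "2025-01-02": 3.499}
state = {"2025-01-01": 3.399, "2025-01-02": 3.401}
print(local_premium(local, state))
# {'2025-01-01': 0.06, '2025-01-02': 0.098}
```

A positive, persistent premium would suggest detours are moving local prices; a premium that hovers near zero would suggest the market is tracking the state.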