🚀 Why Scalar Functions Can Hurt Your SQL Performance!

While working on backend systems and writing SQL queries, I discovered a common mistake that can silently slow down your application… 💡 Using Scalar Functions inside queries.

At first they seem very convenient: wrapping logic into a reusable function sounds like a clean solution. But here's the problem 👇

❌ Scalar Functions are executed row by row
❌ They prevent SQL Server from optimizing the query properly
❌ They can significantly slow down performance on large datasets

📌 Example:
SELECT name, dbo.GetDiscount(price) FROM products;
This looks clean… but behind the scenes, the function is executed for EACH row 😬

✅ Better Approaches:
✔️ Use JOINs or inline logic instead
✔️ Prefer Inline Table-Valued Functions (iTVF)
✔️ Handle logic in the application layer when appropriate

🔥 Key Takeaway: "Clean code is not always fast code; always think about performance!"

Have you ever faced performance issues because of Scalar Functions? 👇

#SQLServer #Backend #Java #SpringBoot #Performance #SoftwareEngineering #Database #CleanCode
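The row-by-row cost is easy to demonstrate outside SQL Server. This is a minimal sketch in Python using SQLite as a stand-in (the post's `dbo.GetDiscount` is hypothetical, and the discount rule here is invented for illustration): a user-defined scalar function is invoked once per row, while the inline `CASE` expression is evaluated by the engine itself.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("widget", 100.0), ("gadget", 250.0)])

calls = {"n": 0}
def get_discount(price):
    # Called once per row -- exactly the per-row overhead the post describes.
    calls["n"] += 1
    return price * 0.9 if price > 200 else price

conn.create_function("get_discount", 1, get_discount)

udf_rows = conn.execute(
    "SELECT name, get_discount(price) FROM products").fetchall()

# Inline equivalent: the engine evaluates the expression itself, and a real
# optimizer can fold, reorder, or parallelize it.
inline_rows = conn.execute(
    "SELECT name, CASE WHEN price > 200 THEN price * 0.9 ELSE price END "
    "FROM products").fetchall()

print(udf_rows == inline_rows)  # same results either way
print(calls["n"])               # one function call per row
```

The results are identical, but the scalar function paid one call per row; on a million-row table that overhead dominates.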
Scalar Functions Hurt SQL Performance
I reduced an endpoint response time by 75% without changing a single line of infrastructure. I just changed how the query reached the database.

The scenario: a report with 5 JOINs, SUM and COUNT aggregations, city and date filters, running against 1 million records.

Spring Data JPQL with DTO projection: 1,240ms at p95. Starting point.
Native SQL with nativeQuery = true: 780ms. 37% faster just by writing raw SQL and removing the ORM translation overhead.
Materialized View mapped as a read-only entity: 310ms. The endpoint became a simple SELECT with filters. The database had already done the heavy lifting before the request arrived.

The lesson was straightforward: the ORM is not the villain, but it has a cost that shows up when the query gets heavy. Knowing when to move away from JPQL and when to go beyond Native SQL makes a real difference in production.

Which of these approaches are you using today for reports with many JOINs?

#Java #SpringBoot #Backend #SoftwareEngineering #DatabasePerformance
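The materialized-view idea can be sketched in a few lines. This is an illustrative Python/SQLite example (SQLite has no native materialized views, so a precomputed summary table stands in; the `orders`/`report_mv` schema is invented): the aggregation runs once ahead of time, and the "endpoint" query becomes a trivial filtered SELECT.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (city TEXT, order_date TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    ("Lisbon", "2024-01-01", 100.0),
    ("Lisbon", "2024-01-01", 50.0),
    ("Porto",  "2024-01-02", 75.0),
])

# The "materialized view": aggregation done once, before any request arrives.
conn.execute("""
    CREATE TABLE report_mv AS
    SELECT city, order_date, SUM(amount) AS total, COUNT(*) AS order_count
    FROM orders
    GROUP BY city, order_date
""")

# The endpoint now runs a simple filtered SELECT instead of the heavy
# JOIN + SUM + COUNT work on every request.
row = conn.execute(
    "SELECT total, order_count FROM report_mv WHERE city = ?",
    ("Lisbon",)).fetchone()
print(row)  # (150.0, 2)
```

In the post's Spring setup the same shape applies: the view is refreshed out-of-band, and the entity mapped over it is read-only.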
💥 Why does your SQL query work in the DB but fail in code? 😤

Ever faced this?
👉 Query runs perfectly in SSMS / pgAdmin
👉 But fails in your application ❌
I've been there… and it's frustrating 😅

🔍 The Problem: Same query… different result

✅ Common Reasons:
✔️ Parameter issue 👉 Value not passed correctly from code
✔️ Data type mismatch 👉 INT in DB but passing STRING from code
✔️ Null handling 👉 NULL in DB but not handled in code
✔️ Connection difference 👉 Different DB / schema being used
✔️ Permission issue 👉 Query works for you but not for the app user

⚡ Real Example:
Query works in DB:
SELECT * FROM users WHERE userid = 5;
But in code:
cmd.Parameters.AddWithValue("@userid", "");
👉 Result = No data ❌

⚡ Pro Tip: Always log:
👉 Final query
👉 Parameter values

💬 Have you faced this issue before? Let's discuss 👇
🔖 Save this post; it'll save your debugging time!

#sql #dotnet #developer #coding #debugging #tricks #trick
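The data-type-mismatch case is reproducible in a few lines. A minimal sketch in Python with SQLite standing in for the database (the `users` table is invented): the literal `5` in the console matches, but the same query with an empty-string parameter from "code" silently returns nothing.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (userid INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (5, 'alice')")

# Works in the DB console: the literal 5 is an integer.
direct = conn.execute(
    "SELECT name FROM users WHERE userid = 5").fetchall()

# "Fails in code": the parameter arrives as an empty string,
# so the integer column never matches -- no error, just no rows.
bad = conn.execute(
    "SELECT name FROM users WHERE userid = ?", ("",)).fetchall()

# The fix: pass the value with the type the column expects.
good = conn.execute(
    "SELECT name FROM users WHERE userid = ?", (5,)).fetchall()

print(direct, bad, good)  # [('alice',)] [] [('alice',)]
```

Logging the final query *and* the bound parameter values, as the post suggests, is what makes this class of bug visible.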
SlothDB is a super fast embedded SQL database. 😁

You point SQL at a file: Parquet, CSV, JSON, Avro, Arrow, SQLite, Excel. No server, no import step, no extension to install before you can read a Parquet file. Same embedded model as SQLite and DuckDB, different defaults.

A few things we cared about while building it:

It is one binary. Drop slothdb.exe somewhere, run it.
It runs in the browser. The WASM build is 1.3 MB and fits Workers' 1 MB script cap in the edge variant.
It is fast enough to be worth the swap for analytical work. On a 5-query warm batch over 10M rows, SlothDB finishes in 138 ms. DuckDB 1.1.5 finishes the same batch on the same hardware in 540 ms.
It is also early. v0.1.8 shipped today. The Python wheel had a packaging bug last week that I only caught because a stranger filed an issue. So if you hit a rough edge, file one. We read every one.

Try it in 10 seconds at https://slothdb.org or pip install slothdb.
Our GitHub repo: https://lnkd.in/gxCSmACA

#SQL #DataEngineering #DuckDB #OpenSource #OLAP
🤯 PreparedStatement in JDBC: why was it introduced?

We already have Statement in JDBC, so why do we need PreparedStatement? Let's understand the real reason 👇

📃 The problem with Statement:
Statement st = con.createStatement();
st.executeQuery("SELECT * FROM users WHERE id = " + id);
❌ The query string is rebuilt every time
❌ The database compiles it again and again
❌ Vulnerable to SQL Injection ⚠️

Why was PreparedStatement introduced?
➡️ To solve these problems:
✔ Improve performance
✔ Prevent SQL Injection
✔ Handle dynamic values safely

⚙️ How PreparedStatement works:
PreparedStatement ps = con.prepareStatement(
    "SELECT * FROM users WHERE id = ?"
);
ps.setInt(1, id);
ResultSet rs = ps.executeQuery();

🔍 What happens internally?
➡️ The query is precompiled once
➡️ Only the values are bound at runtime
✔ Faster execution
✔ Safer queries 🔐

🔐 SQL Injection Prevention
❌ Statement: concatenating input like "1 OR 1=1" produces
SELECT * FROM users WHERE id = 1 OR 1=1
which returns every row and can break your database.
📝 PreparedStatement: treats input as data, not SQL ➡️ prevents injection ✅

💭 One Line Summary
🖇️ PreparedStatement = Precompiled + Secure + Efficient

Have you used PreparedStatement in your projects? 🤔 Or are you still using Statement? Let's discuss 👇💬

#Java #JDBC #PreparedStatement #JavaDeveloper #BackendDevelopment #Programming #TechJourney #LearnBySharing #Database #InterviewPrep
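The injection difference can be shown concretely. A minimal sketch in Python with SQLite (standing in for JDBC; the `users` table is invented): string concatenation lets the input become SQL, while a bound placeholder, the PreparedStatement model, treats it as a plain value.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

malicious = "1 OR 1=1"

# Statement-style concatenation: the input becomes part of the SQL text,
# so "OR 1=1" makes the WHERE clause match every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE id = " + malicious).fetchall()

# PreparedStatement-style placeholder: the input is bound as a value,
# and no row has an id equal to the string "1 OR 1=1".
safe = conn.execute(
    "SELECT name FROM users WHERE id = ?", (malicious,)).fetchall()

print(len(unsafe), len(safe))  # 2 0
```

Same input, two very different outcomes: the concatenated version leaks the whole table, the parameterized version returns nothing.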
🚨 SELECT * is silently killing your SQL queries.

I've seen a 40ms query turn into a 9-second nightmare, all because of one hidden TEXT column. And the culprit? A lazy SELECT * in production.

Here's what most developers don't realize:
→ Every unused column travels across the network on every query
→ LOB columns (TEXT, BLOB) silently explode your RAM usage
→ Databases can't optimize what they don't know you need
→ Schema changes break your app, often without a single error thrown

In a benchmark of 10,000 rows with 22 columns: SELECT * consumed 6× more memory than explicit column lists.

The fix is simple. The discipline is the hard part. Name your columns. Every. Single. Time.

I put together a 7-slide breakdown covering:
✅ Why SELECT * hurts performance
✅ Real benchmark numbers
✅ The breaking changes it causes
✅ The exact fix with code examples
✅ 3 production SQL rules to live by

Swipe through the doc and save it for your next code review.

What's the worst SELECT * story from your production database? Drop it in the comments. I'd love to hear it. 👇

#SQL #DataAnalysis #QueryOptimization #Backend #DatabasePerformance #Programming #TechTips
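The hidden-TEXT-column cost is easy to reproduce. A small sketch in Python with SQLite (the `articles` table and its megabyte `body` column are invented for illustration): `SELECT *` hauls the payload on every fetch, while an explicit column list never touches it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER, title TEXT, body TEXT)")
# One row with a ~1 MB TEXT column -- the "hidden" LOB from the post.
conn.execute("INSERT INTO articles VALUES (1, 'hello', ?)",
             ("x" * 1_000_000,))

# SELECT * drags the megabyte body across on every fetch.
star = conn.execute("SELECT * FROM articles").fetchone()

# Naming columns fetches only what the caller actually needs.
slim = conn.execute("SELECT id, title FROM articles").fetchone()

print(len(star), len(slim))  # 3 columns vs 2
print(len(star[2]))          # the 1,000,000-char payload nobody asked for
```

Multiply that payload by rows fetched and concurrent requests, and the 40ms-to-9-seconds story stops looking surprising.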
🚀 5 SQL Tips That Improved My Backend Performance

As a Backend Developer working with SQL Server, I realized that writing queries is easy… but writing optimized queries is a skill.

Here are 5 SQL tips that helped me improve performance:
✔ SELECT only the columns you need (avoid SELECT *)
✔ Add proper indexes on frequently used columns
✔ Use JOINs wisely instead of nested queries
✔ Avoid unnecessary subqueries
✔ Always analyze the execution plan

Small improvements in queries can make a huge difference in application performance ⚡
Still learning and exploring better ways every day!

👉 What's your favorite SQL optimization tip?

#sqlserver #backenddeveloper #database #performance #optimization #dotnet #webdevelopment #coding #softwaredeveloper
I just published a new article in my Spring Boot learning series. One common confusion in JPA is choosing between JPQL and Native SQL, so I broke it down with real examples.

Topics covered:
- When to use JPQL / HQL
- When Native SQL is necessary
- Trade-offs between portability and performance
- Real Spring Boot examples

Read the full article here: https://lnkd.in/d6akwNwd
Your query is correct. Your logic is perfect. But it's still… slow. Why? You forgot indexes.

Let's make this real. Imagine your database table has 1 million rows. Now you run:

SELECT * FROM users WHERE email = 'test@gmail.com';

Without an index:
The database does a full table scan.
• It reads block by block from disk (I/O)
• Checks each row one by one
• Until it finds the match
This means:
📀 More disk reads
⏳ More time
🔥 More load
Basically… it scans everything.

With an index:
Now imagine there's an index on email.
• The database uses a B-Tree structure
• It jumps directly to the correct location
• Reads only a few I/O blocks, not all of them
Result:
⚡ Faster lookup
📉 Less disk usage
🚀 Better performance

Think of it like this:
Without an index = searching for a word by reading the entire book
With an index = using the index page and jumping straight to it

So which columns should you index? Not everything. Index the columns that:
• Are frequently used in WHERE conditions
• Are used in JOIN operations
• Are used in ORDER BY or GROUP BY
• Have high uniqueness (like email, user_id)

⚠️ But here's the catch: too many indexes = slower inserts & updates, because every write operation also updates the indexes.

Real insight: indexes don't make your database faster… they make your queries smarter.

Next time your query is slow, don't change the logic first… check the indexes.

#Database #SQL #PostgreSQL #RDBMS #BackendDevelopment #PerformanceOptimization #SoftwareEngineering #Developers #Programming #CoreJava #SQLQuery #SQLScripts #Framework #SpringFramework #aswintech
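You can watch the planner switch from a scan to an index lookup. A small sketch in Python with SQLite (the post's `users`/`email` example, with an invented index name), using `EXPLAIN QUERY PLAN` to show the plan before and after indexing; exact plan wording varies by SQLite version, so the comments are indicative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

# Without an index on email: the planner has to scan the whole table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM users WHERE email = 'test@gmail.com'"
).fetchone()[-1]
print(plan)  # e.g. "SCAN users"

conn.execute("CREATE INDEX idx_users_email ON users(email)")

# With the index: the B-tree lets the planner jump straight to the match.
plan_indexed = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM users WHERE email = 'test@gmail.com'"
).fetchone()[-1]
print(plan_indexed)  # e.g. "SEARCH users USING INDEX idx_users_email (email=?)"
```

The same habit transfers directly: on PostgreSQL it's `EXPLAIN (ANALYZE)`, on SQL Server the graphical execution plan; check the plan before touching the logic.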
"How do you handle slow database queries in Spring Boot?" This comes up in almost every backend interview Most developers jump straight to indexing But that is only part of the answer The real question is why is the query slow in the first place Common causes N+1 queries hitting the database repeatedly Fetching more data than needed Missing pagination on large datasets Wrong fetch type EAGER instead of LAZY Before adding indexes check these Use @Query with JOIN FETCH to avoid N+1 Select only the fields you need not the entire entity Add pagination with Pageable for large results Set fetch = FetchType LAZY and load relations only when needed Indexes help but fixing the query design helps more What database optimization has saved you the most time #Java #SpringBoot #Database #BackendDevelopment #Optimization
The 3 Normal Forms:

➤ 1NF - Atomicity
One value per cell.
❌ Tech: "Java, SQL"
✅ Split into separate rows
Rule: One value. One cell.

➤ 2NF - No Partial Dependency
Non-key columns must depend on the entire primary key.
❌ Category depends only on ProductID (not OrderID + ProductID)
✅ Move Category to the Product table
Rule: Columns depend on the full key.

➤ 3NF - No Transitive Dependency
Non-key columns cannot depend on other non-key columns.
❌ City depends on ZipCode (not StudentID)
✅ Move ZipCode data to a separate table
Rule: No indirect dependencies.

Understanding these forms explains:
⤷ Why duplicate data is dangerous
⤷ How dependencies break data integrity
⤷ When to split tables logically

Memorizing the definitions alone is not enough; work through examples until the dependencies click.
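The 1NF fix above, splitting "Java, SQL" into separate rows, can be sketched concretely. An illustrative Python/SQLite example (the `dev_bad`/`dev_tech` tables and the developer name are invented): once each cell holds one value, the data becomes queryable with plain SQL instead of string matching.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# 1NF violation: multiple values packed into one cell.
conn.execute("CREATE TABLE dev_bad (name TEXT, tech TEXT)")
conn.execute("INSERT INTO dev_bad VALUES ('Sam', 'Java, SQL')")

# 1NF fix: one value per cell, one row per (developer, technology) pair.
conn.execute("CREATE TABLE dev_tech (name TEXT, tech TEXT)")
for name, techs in conn.execute("SELECT name, tech FROM dev_bad").fetchall():
    conn.executemany("INSERT INTO dev_tech VALUES (?, ?)",
                     [(name, t.strip()) for t in techs.split(",")])

# Now "who knows SQL?" is a simple equality query, not a LIKE hack.
rows = conn.execute(
    "SELECT name FROM dev_tech WHERE tech = 'SQL'").fetchall()
print(rows)  # [('Sam',)]
```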