When a Space in a Table Name Broke Production

Recently, I encountered an interesting issue while working with dynamic SQL in a stored procedure. Everything was working fine until the job suddenly started failing with:

"Incorrect syntax near 'A'"

At first glance, the query looked perfectly valid. The issue was not in the logic; it was hidden in the data. After debugging the dynamic SQL with PRINT statements, I found the root cause: one of the table names fetched from sys.tables contained a space.

---

Real Scenario (Using sys.tables and a Cursor):

DECLARE @name SYSNAME
DECLARE @sql VARCHAR(MAX)

DECLARE table_cursor CURSOR FOR
    SELECT name FROM sys.tables WHERE name LIKE '%_calling'

OPEN table_cursor
FETCH NEXT FROM table_cursor INTO @name

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'SELECT * FROM ' + @name + ' A'
    PRINT @sql
    EXEC(@sql)
    FETCH NEXT FROM table_cursor INTO @name
END

CLOSE table_cursor
DEALLOCATE table_cursor

(Note: SYSNAME is the correct type for object names; a short VARCHAR can silently truncate them.)

If sys.tables returns: My Table

The query becomes:

SELECT * FROM My Table A

This breaks with: Incorrect syntax near 'A'

---

Correct Approach:

DECLARE @name SYSNAME
DECLARE @sql VARCHAR(MAX)

DECLARE table_cursor CURSOR FOR
    SELECT name FROM sys.tables WHERE name LIKE '%_calling'

OPEN table_cursor
FETCH NEXT FROM table_cursor INTO @name

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = 'SELECT * FROM ' + QUOTENAME(@name) + ' AS A'
    PRINT @sql
    EXEC(@sql)
    FETCH NEXT FROM table_cursor INTO @name
END

CLOSE table_cursor
DEALLOCATE table_cursor

Now My Table becomes [My Table]. The query runs correctly and is safe against spaces and special characters.

---

Key Precautions When Using Dynamic SQL:

1. Always use QUOTENAME() for object names
2. Never assume system metadata is clean
3. Always PRINT dynamic SQL before executing it
4. Handle spacing carefully in string concatenation
5. Prefer SCOPE_IDENTITY() over @@IDENTITY
6. Avoid cursors when possible and optimize for scale

---

Key Learning: Dynamic SQL is powerful, but a small issue like a space in an object name can break your entire logic. Always code defensively.

#interview #java #spring #springboot #microservices #sql #production
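The same failure mode, and the fix, can be reproduced outside SQL Server. Below is a minimal Python + SQLite sketch (table name and quoting helper are illustrative; SQLite quotes identifiers with double quotes where T-SQL's QUOTENAME uses brackets):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute('CREATE TABLE "My Table" (id INTEGER)')
con.execute('INSERT INTO "My Table" VALUES (1)')

name = "My Table"  # metadata-driven table name containing a space

# Unquoted concatenation: same failure mode as the T-SQL job above
try:
    con.execute(f"SELECT * FROM {name} A")
    unquoted_ok = True
except sqlite3.OperationalError:
    unquoted_ok = False  # syntax error, just like "Incorrect syntax near 'A'"

# Quoted identifier (SQLite's rough equivalent of QUOTENAME):
# double any embedded quote character, then wrap in double quotes
quoted = '"' + name.replace('"', '""') + '"'
rows = con.execute(f"SELECT * FROM {quoted} AS A").fetchall()
```

The unquoted query fails to parse; the quoted one returns the row. The escaping step (doubling embedded quote characters) is what makes the quoting safe rather than cosmetic.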
Did standard SQL's English-like ordering accidentally force every parser into inefficiency for 50 years? That is one of the questions AUREON SQL answers.

AUREON SQL is a next-generation dialect built around one principle: every object the parser constructs should be final. No placeholders. No throwaway intermediates. Here is what that looks like in the SELECT statement.

𝗥𝗲𝗹𝗮𝘁𝗶𝗼𝗻 𝗻𝗮𝘃𝗶𝗴𝗮𝘁𝗶𝗼𝗻 𝘄𝗶𝘁𝗵𝗼𝘂𝘁 𝗝𝗢𝗜𝗡
A colon operator replaces JOIN ... ON for every relationship type: parent-child, many-to-many, REPORT TO hierarchies, audit trail:

FROM myschema.ORDER O, O:lines L, O:customer C, E:manager MGR

𝗜𝗻𝗹𝗶𝗻𝗲 𝗖𝗗𝗖 — 𝗵𝗶𝘀𝘁𝗼𝗿𝘆 𝗮𝘀 𝗮 𝗰𝗼𝗹𝘂𝗺𝗻
Historical values directly in SELECT, with no subquery and no audit table:

T.STATUS:1    -- most recent previous value
T.STATUS:1:TS -- timestamp of that change
T.STATUS:0    -- the very first value ever

𝗔𝗹𝗶𝗮𝘀𝗲𝘀 𝘂𝘀𝗮𝗯𝗹𝗲 𝗲𝘃𝗲𝗿𝘆𝘄𝗵𝗲𝗿𝗲
Name a column once in SELECT, then use it in WHERE, GROUP BY, HAVING, ORDER BY, QUALIFY, and even in subsequent SELECT columns. No expression repetition.

𝗧𝘄𝗼 𝗪𝗛𝗘𝗥𝗘 𝘀𝘆𝗻𝘁𝗮𝘅𝗲𝘀, 𝗳𝗿𝗲𝗲𝗹𝘆 𝗺𝗶𝘅𝗲𝗱
Free-form SQL and the structured COL/VAL/COMP pattern can coexist in the same clause:

WHERE O.AMOUNT > 100 AND IF(COL=O.REGION, VAL='EMEA', COMP=EQ)

𝗧𝗵𝗲 𝗙𝗥𝗢𝗠-𝗯𝗲𝗳𝗼𝗿𝗲-𝗦𝗘𝗟𝗘𝗖𝗧 𝗮𝗹𝘁𝗲𝗿𝗻𝗮𝘁𝗶𝘃𝗲
Standard SQL reads SELECT before FROM, so the parser cannot resolve column references until the table is known. That forces two-phase resolution: allocate placeholders, then discard and replace them. For 25 column refs, that is up to 50 intermediate objects per query. AUREON SQL offers FROM first, then SELECT: the table is known before the column list is parsed, so every reference resolves directly to a final object. At 50,000 queries/sec with 15 column refs on average, that removes roughly 1.5 million intermediate allocations per second. Not reduced. Absent.

The standard syntax remains fully valid. The alternative is there when you want the parser to have the table context first.

SQL was conceived as a language business users could write themselves. AUREON SQL honours that, and asks: what would it look like if the parser's perspective also deserved to be served?

#AUREONSQL #SQL #DatabaseDesign #QueryLanguage #DataEngineering
Just read through a piece on advanced SQL techniques and realised something: most developers I work with treat SQL like a blunt instrument when it's actually a precision tool.

Window functions, CTEs, recursive queries... these aren't fancy tricks. They're the difference between a query that takes 30 seconds and one that takes 30 minutes. I've watched junior devs write absolute nightmares of nested loops in application code when a single SQL window function would've solved it in milliseconds.

The thing that gets me is how many teams treat the database like a filing cabinet. They extract everything into the application layer, mangle it with loops and conditionals, then wonder why their systems grind to a halt at scale.

I built a reporting dashboard for a client last year that was crawling. Turns out they were pulling 50,000 records into memory and filtering them with C# code. Rewrote it with CTEs and window functions. Same result, 1/100th the load on the server.

SQL is boring. That's exactly why it works. And it's exactly why so many developers skip past it looking for something sexier.

What's the worst SQL performance issue you've inherited? DM me, I collect these. https://lnkd.in/d3nd2ZU9
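The "one window function instead of an application loop" point can be illustrated with a small, self-contained sketch. The data and query are invented for illustration; it runs against Python's bundled SQLite (window functions need SQLite 3.25+):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales (rep TEXT, region TEXT, amount INTEGER);
INSERT INTO sales VALUES
  ('ana','east',500), ('bob','east',300),
  ('cyd','west',200), ('dee','west',900);
""")

# Top sale per region in one query: ROW_NUMBER() ranks rows inside each
# region partition, and the outer query keeps only rank 1 -- no pulling
# everything into memory and looping in application code.
top_per_region = con.execute("""
    SELECT rep, region, amount FROM (
        SELECT rep, region, amount,
               ROW_NUMBER() OVER (PARTITION BY region
                                  ORDER BY amount DESC) AS rn
        FROM sales
    ) WHERE rn = 1
    ORDER BY region
""").fetchall()
```

The engine does the partitioning and ranking in one pass, which is exactly the class of work that turns into nested loops when done in the application layer.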
Good morning, SQL fam! ☀️ Day 4 is complete! You're doing math in SQL now. Multiplying, dividing, calculating like a pro! You're officially halfway through Week 1. 5 days straight of showing up. That's consistency. That's growth. Keep pushing! 💪

Day 5 of 14 Days of #SQLwithFunmi 🎯
Today's focus: WORKING WITH TEXT & STRINGS 📝

🔹 Q1: Show product names in UPPERCASE and lowercase. Display the original Product_Name, UPPER(Product_Name), and LOWER(Product_Name).
✅ My Solution:

SELECT Product_Name,
       UPPER(Product_Name) AS Name_Uppercase,
       LOWER(Product_Name) AS Name_Lowercase
FROM transactions
LIMIT 10;

Approach: UPPER() converts everything to capitals. LOWER() converts to lowercase. Great for standardizing text!

🔹 Q2: Find all products that start with "Organic". Show Product_Name and Product_Category.
✅ My Solution:

SELECT Product_Name, Product_Category
FROM transactions
WHERE Product_Name LIKE 'Organic%'
LIMIT 15;

Approach: LIKE 'Organic%' finds any product name starting with "Organic". The % means anything can come after.

🔹 Q3: Find all products that end with "TV". Show Product_Name and Final_Amount.
✅ My Solution:

SELECT Product_Name, Final_Amount
FROM transactions
WHERE Product_Name LIKE '%TV'
LIMIT 10;

Approach: LIKE '%TV' finds any product name ending with "TV". The % before means anything can come before.

📘 Learnings: Today's "aha!" moment 💡: The % wildcard is like a magic wand. Put it before, after, or both to find patterns anywhere in the text!

What I learned:
UPPER() / LOWER() = change case
LIKE = pattern matching
% = matches ANY characters (zero or more)
'Organic%' = starts with Organic
'%TV' = ends with TV
'%phone%' = contains "phone" anywhere

You're mastering text search! Now you can find anything in your data. 🎯 Day 5 done ✅ 9 to go!
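For anyone following along without the challenge dataset, the same three queries can be run against a tiny invented table using Python's bundled SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE transactions (Product_Name TEXT);
INSERT INTO transactions VALUES
  ('Organic Honey'), ('Smart TV'), ('Organic Tea'), ('Headphones');
""")

# Q2: names starting with "Organic" -- % matches anything after
starts = [r[0] for r in con.execute(
    "SELECT Product_Name FROM transactions WHERE Product_Name LIKE 'Organic%'")]

# Q3: names ending with "TV" -- % matches anything before
ends = [r[0] for r in con.execute(
    "SELECT Product_Name FROM transactions WHERE Product_Name LIKE '%TV'")]

# Q1: case conversion with UPPER()/LOWER()
upper = con.execute("SELECT UPPER('Smart TV'), LOWER('Smart TV')").fetchone()
```

Swapping the sample rows for your own data is enough to experiment with all three wildcard positions.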
Innotech IT Consultancy is hosting a 14-day SQL challenge, and we will be using the innotechhub app for the challenge. 📲 Download now: App Store: https://lnkd.in/e7wSGR78 Play Store: https://lnkd.in/enD3NDsp #innotechITConsultancy #SQLwithFunmi #14DaysOfSQL #LearningInPublic #DataAnalytics
𝗘𝗙 𝗖𝗼𝗿𝗲 𝟭𝟬 - 𝗧𝗼𝗽 𝟭𝟬 𝗙𝗲𝗮𝘁𝘂𝗿𝗲𝘀

EF Core 10 is an LTS release focused on three key areas: AI-ready data access, alignment with SQL Server 2025, and improved LINQ usability. Here is a compilation of the top 10 features:

1️⃣ 𝗡𝗮𝘁𝗶𝘃𝗲 𝗩𝗲𝗰𝘁𝗼𝗿 𝗦𝗲𝗮𝗿𝗰𝗵
Store embeddings as a native SQL Server vector type and query by semantic similarity from LINQ.

context.Docs
    .OrderBy(d => EF.Functions.VectorDistance("cosine", d.Embedding, input))
    .Take(5).ToListAsync();

👉 Supports cosine, dot product, and Euclidean distance

2️⃣ 𝗡𝗮𝘁𝗶𝘃𝗲 𝗝𝗦𝗢𝗡 𝗖𝗼𝗹𝘂𝗺𝗻 𝗦𝘂𝗽𝗽𝗼𝗿𝘁
JSON is now a first-class data type in SQL Server 2025, not text stored as JSON.

modelBuilder.Entity<Blog>()
    .ComplexProperty(b => b.Details, c => c.ToJson());

👉 LINQ queries on JSON fields use native JSON indexing

3️⃣ 𝗡𝗮𝘁𝗶𝘃𝗲 𝗟𝗲𝗳𝘁𝗝𝗼𝗶𝗻 / 𝗥𝗶𝗴𝗵𝘁𝗝𝗼𝗶𝗻 𝗢𝗽𝗲𝗿𝗮𝘁𝗼𝗿𝘀
First-class LINQ support for outer joins, replacing the GroupJoin + SelectMany + DefaultIfEmpty workaround.

context.Customers.LeftJoin(context.Orders,
    c => c.Id, o => o.CustomerId,
    (c, o) => new { c.Name, OrderDate = o?.OrderDate });

👉 Cleaner code, no workarounds

4️⃣ 𝗛𝘆𝗯𝗿𝗶𝗱 𝗦𝗲𝗮𝗿𝗰𝗵 𝘃𝗶𝗮 𝗥𝗥𝗙
Combine keyword and vector search in a single query using Reciprocal Rank Fusion.

.OrderBy(x => EF.Functions.Rrf(
    EF.Functions.FullTextScore(x.Content, "query"),
    EF.Functions.VectorDistance(x.Embedding, queryVector)))

👉 One query, one round trip to the database

5️⃣ 𝗘𝘅𝗲𝗰𝘂𝘁𝗲𝗨𝗽𝗱𝗮𝘁𝗲𝗔𝘀𝘆𝗻𝗰 𝘄𝗶𝘁𝗵 𝗣𝗹𝗮𝗶𝗻 𝗟𝗮𝗺𝗯𝗱𝗮𝘀
No more expression trees for bulk updates. Write normal C# logic inside the setter.

await context.Blogs.ExecuteUpdateAsync(s =>
{
    s.SetProperty(b => b.Views, 8);
    if (nameChanged) s.SetProperty(b => b.Name, "Updated");
});

👉 Cleaner, more readable bulk update code

6️⃣ 𝗡𝗮𝗺𝗲𝗱 𝗜𝗻𝗱𝗶𝘃𝗶𝗱𝘂𝗮𝗹 𝗤𝘂𝗲𝗿𝘆 𝗙𝗶𝗹𝘁𝗲𝗿𝘀
Global query filters can now be disabled one at a time.

context.Blogs.IgnoreQueryFilters("SoftDelete").ToListAsync();

👉 Fine-grained filter control per query

7️⃣ 𝗖𝗼𝘀𝗺𝗼𝘀 𝗗𝗕 𝗩𝗲𝗰𝘁𝗼𝗿 𝗦𝗲𝗮𝗿𝗰𝗵 - 𝗡𝗼𝘄 𝗦𝘁𝗮𝗯𝗹𝗲
Vector search in Cosmos DB is now production-ready.

context.Articles
    .OrderBy(x => EF.Functions.VectorDistance(x.Vector, queryVector))
    .Take(10).ToListAsync();

8️⃣ 𝗖𝗼𝗺𝗽𝗹𝗲𝘅 𝗧𝘆𝗽𝗲 𝗘𝗻𝗵𝗮𝗻𝗰𝗲𝗺𝗲𝗻𝘁𝘀
Complex types now support structs and can be bulk-updated when mapped to JSON columns.

await context.Blogs.ExecuteUpdateAsync(
    s => s.SetProperty(b => b.Details.Score, b => b.Details.Score + 1));

👉 Bulk updates on JSON-mapped complex types without loading entities

9️⃣ 𝗖𝗼𝘀𝗺𝗼𝘀 𝗗𝗕 𝗦𝗰𝗵𝗲𝗺𝗮 𝗘𝘃𝗼𝗹𝘂𝘁𝗶𝗼𝗻
Adding required properties to Cosmos DB entities no longer throws on existing documents.

🔟 𝗤𝘂𝗲𝗿𝘆 𝗧𝗿𝗮𝗻𝘀𝗹𝗮𝘁𝗶𝗼𝗻 𝗜𝗺𝗽𝗿𝗼𝘃𝗲𝗺𝗲𝗻𝘁𝘀
Several LINQ-to-SQL translation improvements ship in EF Core 10 across aggregate functions, date types, and large IN queries.
SQL: The Complete Reference - https://lnkd.in/eaz53F7i Look for "Read and Download Links" section to download. Follow me if you like this post. #SQL #Databases #DataAnalysis #DataScience #LLMs #GenAI #GenerativeAI
Things Every Developer Should Know — SQL Execution Order.

A SQL query executes its clauses in the following order:

1) FROM / JOIN
2) WHERE
3) GROUP BY
4) HAVING
5) SELECT
6) DISTINCT
7) ORDER BY
8) LIMIT / OFFSET

𝗧𝗵𝗲 𝘁𝗲𝗰𝗵𝗻𝗶𝗾𝘂𝗲𝘀 𝘆𝗼𝘂 𝗶𝗺𝗽𝗹𝗲𝗺𝗲𝗻𝘁 𝗮𝘁 𝗲𝗮𝗰𝗵 𝘀𝘁𝗲𝗽 𝗵𝗲𝗹𝗽 𝘀𝗽𝗲𝗲𝗱 𝘂𝗽 𝘁𝗵𝗲 𝗳𝗼𝗹𝗹𝗼𝘄𝗶𝗻𝗴 𝘀𝘁𝗲𝗽𝘀. This is why it's important to know their execution order. 𝗧𝗼 𝗺𝗮𝘅𝗶𝗺𝗶𝘇𝗲 𝗲𝗳𝗳𝗶𝗰𝗶𝗲𝗻𝗰𝘆, 𝗳𝗼𝗰𝘂𝘀 𝗼𝗻 𝗼𝗽𝘁𝗶𝗺𝗶𝘇𝗶𝗻𝗴 𝘁𝗵𝗲 𝘀𝘁𝗲𝗽𝘀 𝗲𝗮𝗿𝗹𝗶𝗲𝗿 𝗶𝗻 𝘁𝗵𝗲 𝗾𝘂𝗲𝗿𝘆.

With that in mind, let's take a look at some 𝗼𝗽𝘁𝗶𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻 𝘁𝗶𝗽𝘀:

𝟭) 𝗠𝗮𝘅𝗶𝗺𝗶𝘇𝗲 𝘁𝗵𝗲 𝗪𝗛𝗘𝗥𝗘 𝗰𝗹𝗮𝘂𝘀𝗲
This clause is executed early, so it's a good opportunity to reduce the size of your data set before the rest of the query is processed.

𝟮) 𝗙𝗶𝗹𝘁𝗲𝗿 𝘆𝗼𝘂𝗿 𝗿𝗼𝘄𝘀 𝗯𝗲𝗳𝗼𝗿𝗲 𝗮 𝗝𝗢𝗜𝗡
Although the FROM/JOIN occurs first, you can still limit the rows. To limit the number of rows you are joining, use a subquery in the FROM clause instead of a table.

𝟯) 𝗨𝘀𝗲 𝗪𝗛𝗘𝗥𝗘 𝗼𝘃𝗲𝗿 𝗛𝗔𝗩𝗜𝗡𝗚
The HAVING clause is executed after WHERE and GROUP BY, so you're better off moving any appropriate conditions to the WHERE clause when you can.

𝟰) 𝗗𝗼𝗻'𝘁 𝗰𝗼𝗻𝗳𝘂𝘀𝗲 𝗟𝗜𝗠𝗜𝗧, 𝗢𝗙𝗙𝗦𝗘𝗧, 𝗮𝗻𝗱 𝗗𝗜𝗦𝗧𝗜𝗡𝗖𝗧 𝗳𝗼𝗿 𝗼𝗽𝘁𝗶𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻 𝘁𝗲𝗰𝗵𝗻𝗶𝗾𝘂𝗲𝘀
It's easy to assume that these boost performance by minimizing the data set, but this isn't the case. Because they occur at the end of the query, they make little to no impact on its performance.

If you want to create efficient queries, it's a good idea to understand how things work under the hood; otherwise your efforts may be wasted. While these tips work best in most cases, you should consider your unique use case when choosing the best course of action.

Follow Sai Reddy for more such posts! SRProSkillBridge

Reach out to me to attend mock interviews that will help you prepare and crack your next interview with confidence.
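Tips 1 and 3 are easy to see in a runnable sketch: WHERE prunes rows before grouping, while HAVING filters the groups afterwards. The data is invented; it runs against Python's bundled SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (region TEXT, amount INTEGER);
INSERT INTO orders VALUES ('east',100),('east',50),('west',10),('west',5);
""")

# WHERE runs before GROUP BY (row filter), HAVING runs after (group filter).
rows = con.execute("""
    SELECT region, SUM(amount) AS total
    FROM orders
    WHERE amount >= 10        -- drops the 5-unit row before grouping
    GROUP BY region
    HAVING SUM(amount) > 100  -- then drops whole groups under the threshold
""").fetchall()
```

Only 'east' survives: its post-WHERE sum is 150, while 'west' groups to 10 and is removed by HAVING. Any condition that can move from HAVING into WHERE shrinks the data earlier in the pipeline.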
Book 1:1- https://lnkd.in/gsrnePyD page- https://lnkd.in/gmQQGgns website- https://lnkd.in/g6afVJ_V page- https://lnkd.in/gETRGQhT gitHub- https://lnkd.in/gy6SDTDS YouTube- https://lnkd.in/gUjdagq7 Insta- https://lnkd.in/gagfpvDj WhatsApp- https://lnkd.in/gjm7naaH WhatsApp https://lnkd.in/gbR-MrVy GoogleMap- https://lnkd.in/gFetgYvg JavaScript Mastery SRProSkillBridge doc Cred: Level Up Coding
Day 09 - April 23, 2026

Today I explored Alembic and SQL joins. Points I noted:

1. Alembic is a database migration tool that lets us make incremental changes to our database and track them.

2. Running alembic revision -m "create table" creates a **********_create_table.py file inside the versions subdirectory under the 'alembic' folder. Inside this file you can add upgrade and downgrade code that specifies what to do when alembic upgrade <revision_id> or alembic downgrade <revision_id> is run from the terminal. It's not just forward changes: you can trace back to any previous state using these revision ids, or step relatively, e.g. alembic upgrade +1 (one revision up), alembic upgrade +3 (three revisions up), and so on. There are many other cool commands worth exploring.

3. If you change the database directly through pgAdmin without going through Alembic, Alembic will have no idea of the changes you made. Your migration history and actual database will now be out of sync.

4. The basic types of join are:
INNER JOIN → only matching rows from both tables
LEFT OUTER JOIN → all rows from the left + matches from the right (NULL if no match)
RIGHT OUTER JOIN → all rows from the right + matches from the left
FULL JOIN → all rows from both tables

5. Why LEFT or RIGHT JOIN matters: imagine fetching items with their counts. If you use INNER JOIN, items with zero count vanish from your results completely. LEFT JOIN keeps them and just shows 0. (The same applies in the case of RIGHT JOIN.)

6. In SQLAlchemy, join is inner by default, so if you want an outer join you need to specify it with the isouter=True parameter. This is one query that I wrote:

posts = (
    db.query(models.Post,
             func.count(models.Vote.post_id).label("votes_count"))
    .join(models.Vote, models.Vote.post_id == models.Post.id, isouter=True)
    .group_by(models.Post.id)
    .all()
)

It looks messy and complex, but fundamentally it's equivalent to:

SELECT posts.*, COUNT(votes.post_id) AS votes_count
FROM posts
LEFT JOIN votes ON posts.id = votes.post_id
GROUP BY posts.id;
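The "zero-count rows vanish under INNER JOIN" point is easy to verify with a self-contained sketch (sample posts/votes data invented; runs against Python's bundled SQLite):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE votes (post_id INTEGER);
INSERT INTO posts VALUES (1,'first'), (2,'second');
INSERT INTO votes VALUES (1), (1);   -- post 2 has no votes
""")

# INNER JOIN: post 2 disappears because it has no matching vote rows.
inner = con.execute("""
    SELECT p.id, COUNT(v.post_id) FROM posts p
    JOIN votes v ON v.post_id = p.id
    GROUP BY p.id ORDER BY p.id
""").fetchall()

# LEFT JOIN: post 2 is kept, and COUNT(v.post_id) counts its NULLs as 0.
left = con.execute("""
    SELECT p.id, COUNT(v.post_id) FROM posts p
    LEFT JOIN votes v ON v.post_id = p.id
    GROUP BY p.id ORDER BY p.id
""").fetchall()
```

Note that COUNT(v.post_id), not COUNT(*), is what yields 0 for unmatched rows: COUNT over a column skips the NULLs produced by the outer join.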
sql operators: the engine behind the `where` clause.

operators allow you to perform logic, math, and pattern matching to filter your data perfectly. here is the ultimate cheat sheet:

> 1. arithmetic operators
perform math right inside your queries. (+, -, *, /, %)
select * from employees where age + 1 = 60;

> 2. comparison operators
use these to set exact conditions. (=, <>, !=, >, <, >=, <=)
select * from employees where age > 20;

> 3. logical operators
chain multiple conditions together to get highly specific results.
`and` — both conditions must be true.
`or` — either condition can be true.
`not` — reverses the condition entirely.
select * from employees where age > 20 and department = 'it';

> 4. the `in` & `not in` operators
cleaner than writing multiple `or` conditions. matches against a list.
select * from employees where department in ('it', 'hr');

> 5. pattern matching with `like`
used with wildcards to find specific string patterns.
`%` = matches zero or more characters.
`_` = matches exactly one single character.
select * from employees where name like 'a%'; -- starts with 'a'
select * from employees where name like '_a%'; -- second letter is 'a'

> 6. ranges with `between`
grabs data within an inclusive range.
select * from employees where salary between 1200 and 1500;

> 7. null checks with `is null`
tip: never test with `= null` or `!= null`. `null` represents an unknown value, so any comparison with it yields unknown rather than true. you must use `is null` or `is not null`.
select * from employees where department is not null;

> 8. bitwise operators
operate at the binary level. `&` (bitwise and), `|` (bitwise or). rarely used in typical queries, sometimes used for flags or low-level logic.

operators are the backbone of every `where` clause you'll ever write. master these, and you master data filtering.

#DBMS #SQL #Databases
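Most of the cheat sheet can be exercised in one self-contained sketch. The employees table is invented; it runs against Python's bundled SQLite, whose LIKE wildcards and NULL semantics match the points above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE employees (name TEXT, department TEXT, salary INTEGER);
INSERT INTO employees VALUES
  ('amy','it',1300), ('carl','hr',1600), ('dana',NULL,1400);
""")

# IN: matches against a list (dana's NULL department never matches)
in_list = [r[0] for r in con.execute(
    "SELECT name FROM employees WHERE department IN ('it','hr')")]

# BETWEEN: inclusive range
in_range = [r[0] for r in con.execute(
    "SELECT name FROM employees WHERE salary BETWEEN 1200 AND 1500")]

# IS NULL: the only correct null check
no_dept = [r[0] for r in con.execute(
    "SELECT name FROM employees WHERE department IS NULL")]

# LIKE with _: exactly one character, then 'a'
second_a = [r[0] for r in con.execute(
    "SELECT name FROM employees WHERE name LIKE '_a%'")]

# = NULL compares to unknown, so it matches zero rows -- the tip in #7
eq_null = con.execute(
    "SELECT COUNT(*) FROM employees WHERE department = NULL").fetchone()[0]
```

The last query is the important one: `= NULL` is legal syntax, but the comparison evaluates to unknown for every row, so nothing is returned.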
⚙️ 𝗦𝗤𝗟 𝗦𝗲𝗿𝘃𝗲𝗿 𝗦𝗽𝗼𝗼𝗹𝗶𝗻𝗴 — 𝗵𝗲𝗹𝗽𝗳𝘂𝗹 𝗼𝗽𝘁𝗶𝗺𝗶𝘇𝗮𝘁𝗶𝗼𝗻 𝗼𝗿 𝗵𝗶𝗱𝗱𝗲𝗻 𝗯𝗼𝘁𝘁𝗹𝗲𝗻𝗲𝗰𝗸?

Spooling is when SQL Server temporarily stores intermediate results (often in TempDB) to avoid re-reading data. It can speed things up, but in some cases it signals inefficiency.

🔍 𝗪𝗵𝗲𝗻 𝘆𝗼𝘂’𝗹𝗹 𝗻𝗼𝘁𝗶𝗰𝗲 𝗶𝘁
• Complex joins or repeated scans
• Unexpected TempDB usage
• Slower queries despite indexing

⚙️ 𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗮𝗹 𝗰𝗵𝗲𝗰𝗸𝘀 (𝗦𝗤𝗟 𝗦𝗲𝗿𝘃𝗲𝗿 𝟮𝟬𝟮𝟱 𝗿𝗲𝗮𝗱𝘆):

1️⃣ 𝗙𝗶𝗻𝗱 𝗵𝗶𝗴𝗵 𝗧𝗲𝗺𝗽𝗗𝗕 𝘂𝘀𝗮𝗴𝗲 (𝗽𝗼𝘀𝘀𝗶𝗯𝗹𝗲 𝘀𝗽𝗼𝗼𝗹𝗶𝗻𝗴)

SELECT session_id,
       user_objects_alloc_page_count,
       internal_objects_alloc_page_count
FROM sys.dm_db_session_space_usage
ORDER BY internal_objects_alloc_page_count DESC;

2️⃣ 𝗜𝗱𝗲𝗻𝘁𝗶𝗳𝘆 𝗲𝘅𝗽𝗲𝗻𝘀𝗶𝘃𝗲 𝗾𝘂𝗲𝗿𝗶𝗲𝘀

SELECT TOP 10
       qs.total_logical_reads / qs.execution_count AS AvgReads,
       qs.total_elapsed_time / qs.execution_count AS AvgTime,
       qt.text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
ORDER BY AvgReads DESC;

3️⃣ 𝗜𝗻𝘀𝗽𝗲𝗰𝘁 𝗲𝘅𝗲𝗰𝘂𝘁𝗶𝗼𝗻 𝗽𝗹𝗮𝗻𝘀 (𝗹𝗼𝗼𝗸 𝗳𝗼𝗿 𝘀𝗽𝗼𝗼𝗹 𝗼𝗽𝗲𝗿𝗮𝘁𝗼𝗿𝘀)

SELECT qp.query_plan, qt.text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt;

✅ 𝗪𝗵𝘆 𝘀𝗽𝗼𝗼𝗹𝗶𝗻𝗴 𝗲𝘅𝗶𝘀𝘁𝘀:
✔ Reduces repeated scans
✔ Improves performance in some scenarios
✔ Helps the optimizer handle complex logic

⚠️ 𝗪𝗵𝗲𝗻 𝗶𝘁 𝗵𝘂𝗿𝘁𝘀:
❌ Extra TempDB I/O
❌ Higher memory usage
❌ Often indicates missing indexes or suboptimal joins

💡 𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗮𝗹 𝗶𝗻𝘀𝗶𝗴𝗵𝘁:
👉 Spooling isn’t bad, but if you see it often, it’s a signal to review indexing and query design.

𝗜𝗳 𝘁𝗵𝗶𝘀 𝗵𝗲𝗹𝗽𝗲𝗱, 𝗹𝗶𝗸𝗲, 𝘀𝗵𝗮𝗿𝗲, 𝗼𝗿 𝗿𝗲𝗽𝗼𝘀𝘁 👍
𝗡𝗲𝗲𝗱 𝗵𝗲𝗹𝗽 𝘄𝗶𝘁𝗵 𝗦𝗤𝗟 𝗦𝗲𝗿𝘃𝗲𝗿, 𝗪𝗲𝗯, 𝗼𝗿 𝗠𝗼𝗯𝗶𝗹𝗲 𝗱𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁? 𝗗𝗠 𝗺𝗲 𝗮𝗻𝘆𝘁𝗶𝗺𝗲.

#SQLServer #DatabaseOptimization #PerformanceTuning #QueryOptimization #DataEngineering #SQLPerformance #BackendDevelopment #CloudComputing #Scalability #TechTips
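SQL Server's spool operators have no direct SQLite equivalent, but the general idea, the engine materializing an intermediate structure that an index would make unnecessary, can be observed in any engine's plan output. As a rough analogy in Python + SQLite (table and index names invented): sorting on an unindexed column makes the plan report a temporary B-tree, which disappears once an index exists.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b INTEGER)")

def plan(sql):
    # EXPLAIN QUERY PLAN rows end with a human-readable detail string
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

# No index on b: SQLite must build a temp B-tree to satisfy ORDER BY,
# loosely analogous to SQL Server spooling an intermediate result.
plan_before = plan("SELECT * FROM t ORDER BY b")

con.execute("CREATE INDEX idx_b ON t(b)")

# With the index, rows come out pre-sorted and the temp structure vanishes.
plan_after = plan("SELECT * FROM t ORDER BY b")
```

The before/after plan text makes the trade-off concrete: the intermediate structure is not inherently bad, but its repeated appearance is the cue to look at indexing, which is the same reading the post gives for spool operators.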
Mastering SQL pattern matching is key for precise data filtering. The standard `LIKE` operator provides basic string matching with wildcards like `%` and `_`, though it's important to remember its case-insensitive nature in MySQL unless `BINARY` is used. For more sophisticated data interrogation, advanced regular expressions come into play via functions and operators like `REGEXP_LIKE()`, `REGEXP`, and `RLIKE`, offering granular control with special characters such as `^`, `$`, and `.`. These tools are indispensable for developers needing to extract specific data based on complex textual patterns. Explore the full spectrum of SQL pattern matching techniques: https://lnkd.in/gqga6AtF #SQL #Database #PatternMatching #DataEngineering #DeveloperTools
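As an illustration of the LIKE-vs-regex split: SQLite parses the REGEXP operator but ships no implementation for it, so the application must register one (the Python sketch below does so with the re module; MySQL's built-in REGEXP/RLIKE differ in detail). The sample data is invented. Note LIKE matching case-insensitively for ASCII while the regex stays case-sensitive, echoing the MySQL caveat above:

```python
import re
import sqlite3

con = sqlite3.connect(":memory:")
# Supply the REGEXP implementation: SQLite calls regexp(pattern, value)
# for the expression "value REGEXP pattern".
con.create_function(
    "REGEXP", 2,
    lambda pattern, value: re.search(pattern, value) is not None)

con.executescript("""
CREATE TABLE products (name TEXT);
INSERT INTO products VALUES ('Organic Tea'), ('Smart TV'), ('TeaPot');
""")

# LIKE is case-insensitive for ASCII in SQLite (as in default MySQL collations)
like_ci = [r[0] for r in con.execute(
    "SELECT name FROM products WHERE name LIKE 'tea%'")]

# The regex is case-sensitive: lowercase '^tea' matches nothing...
regexp_cs = [r[0] for r in con.execute(
    "SELECT name FROM products WHERE name REGEXP '^tea'")]

# ...while the anchored '^Tea' matches only names that start with "Tea"
regexp_anchor = [r[0] for r in con.execute(
    "SELECT name FROM products WHERE name REGEXP '^Tea'")]
```

The `^` anchor is the key difference from LIKE here: 'Organic Tea' contains "Tea" but does not start with it, so only 'TeaPot' survives the anchored pattern.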
🔍 𝗠𝗶𝘀𝘀𝗶𝗻𝗴 𝗶𝗻𝗱𝗲𝘅𝗲𝘀: 𝘁𝗵𝗲 𝘀𝗶𝗺𝗽𝗹𝗲𝘀𝘁 𝗳𝗶𝘅 𝘁𝗵𝗮𝘁 𝗼𝗳𝘁𝗲𝗻 𝗱𝗲𝗹𝗶𝘃𝗲𝗿𝘀 𝘁𝗵𝗲 𝗯𝗶𝗴𝗴𝗲𝘀𝘁 𝗴𝗮𝗶𝗻𝘀

Slow queries aren’t always about rewriting SQL. Sometimes the engine is telling you exactly what it needs: the right index.

🔧 𝗙𝗶𝗻𝗱 𝗵𝗶𝗴𝗵-𝗶𝗺𝗽𝗮𝗰𝘁 𝗺𝗶𝘀𝘀𝗶𝗻𝗴 𝗶𝗻𝗱𝗲𝘅𝗲𝘀 (𝗦𝗤𝗟 𝗦𝗲𝗿𝘃𝗲𝗿 𝟮𝟬𝟮𝟱):

1️⃣ 𝗧𝗼𝗽 𝗿𝗲𝗰𝗼𝗺𝗺𝗲𝗻𝗱𝗮𝘁𝗶𝗼𝗻𝘀 𝗯𝘆 𝗶𝗺𝗽𝗮𝗰𝘁

SELECT TOP 10
       migs.avg_total_user_cost * migs.avg_user_impact
         * (migs.user_seeks + migs.user_scans) AS Impact,
       DB_NAME(mid.database_id) AS DBName,
       mid.statement AS TableName,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns
FROM sys.dm_db_missing_index_group_stats migs
JOIN sys.dm_db_missing_index_groups mig
  ON migs.group_handle = mig.index_group_handle
JOIN sys.dm_db_missing_index_details mid
  ON mig.index_handle = mid.index_handle
ORDER BY Impact DESC;

2️⃣ 𝗦𝗲𝗲 𝗾𝘂𝗲𝗿𝗶𝗲𝘀 𝗱𝗿𝗶𝘃𝗶𝗻𝗴 𝘁𝗵𝗲 𝗻𝗲𝗲𝗱

SELECT TOP 10
       qs.execution_count,
       qs.total_logical_reads / qs.execution_count AS AvgReads,
       qt.text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
ORDER BY AvgReads DESC;

3️⃣ 𝗩𝗮𝗹𝗶𝗱𝗮𝘁𝗲 𝗰𝘂𝗿𝗿𝗲𝗻𝘁 𝗶𝗻𝗱𝗲𝘅 𝘂𝘀𝗮𝗴𝗲

SELECT OBJECT_NAME(object_id) AS TableName,
       user_seeks, user_scans, user_lookups, user_updates
FROM sys.dm_db_index_usage_stats
WHERE database_id = DB_ID()
ORDER BY user_scans DESC;

✅ 𝗪𝗵𝘆 𝗶𝘁 𝗵𝗲𝗹𝗽𝘀:
✔ Faster reads and joins
✔ Reduced CPU and I/O
✔ Better overall query plans

⚠️ 𝗧𝗿𝗮𝗱𝗲-𝗼𝗳𝗳𝘀:
❌ Extra storage and maintenance
❌ Slower writes if over-indexed
❌ DMV suggestions are a starting point, not a final design

💡 𝗣𝗿𝗮𝗰𝘁𝗶𝗰𝗮𝗹 𝘁𝗶𝗽:
👉 Combine recommendations, avoid duplicates, and add only what your workload truly needs.

𝗜𝗳 𝘁𝗵𝗶𝘀 𝗵𝗲𝗹𝗽𝗲𝗱, 𝗹𝗶𝗸𝗲, 𝘀𝗵𝗮𝗿𝗲, 𝗼𝗿 𝗿𝗲𝗽𝗼𝘀𝘁 👍
𝗡𝗲𝗲𝗱 𝗵𝗲𝗹𝗽 𝘄𝗶𝘁𝗵 𝗦𝗤𝗟 𝗦𝗲𝗿𝘃𝗲𝗿, 𝗪𝗲𝗯, 𝗼𝗿 𝗠𝗼𝗯𝗶𝗹𝗲 𝗱𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁? 𝗗𝗠 𝗺𝗲 𝗮𝗻𝘆𝘁𝗶𝗺𝗲.

#SQLServer #DatabaseOptimization #PerformanceTuning #QueryOptimization #DataEngineering #SQLPerformance #Indexing #BackendDevelopment #Scalability #TechTips
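The DMVs above are SQL Server specific, but the underlying workflow (look at the plan, add the index, confirm the plan changed) is portable. A minimal Python + SQLite sketch with invented table and index names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE users (id INTEGER, city TEXT);
INSERT INTO users VALUES (1,'Pune'), (2,'Oslo'), (3,'Pune');
""")

def plan(sql):
    # Collapse EXPLAIN QUERY PLAN detail strings into one line for inspection
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT id FROM users WHERE city = 'Pune'"

before = plan(query)   # full table scan: no index covers the predicate

con.execute("CREATE INDEX idx_city ON users(city)")

after = plan(query)    # now a search using idx_city
```

Same discipline as with the DMV output: treat the "before" plan as the recommendation, add one targeted index, and verify the "after" plan actually uses it before moving on.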