SQL Optimization for MRRS Project

SQL Optimization from "Theoretical" to "Performant" | Tuning the SQL Table for our application (MRRS)

We've all heard that indices are the answer to slow SQL performance. But as anyone working on enterprise-scale data knows, it's never that simple. In my recent work on the MRRS project for Network Rail, I ran into a classic bottleneck where standard indexing wasn't just failing to help; it was actually adding overhead to our ETL cycles. When you're dealing with the scale and precision required for rail infrastructure data, "good enough" isn't an option. I took a deep dive into the execution plans to move beyond the basics. Here's what worked for us.

The Practical Approach:

1. SARGability over everything: We found several legacy filters wrapping columns in functions (like YEAR() or TRIM()), which completely blinded our indices. Refactoring these to be SARG-compliant dropped our execution time significantly. (A sketch of this rewrite follows at the end of the post.)

2. Index pruning: We realized we had "index bloat": too many overlapping indices that were slowing down our INSERT and UPDATE operations during the daily data refresh. By consolidating into a few high-impact composite indices, we improved write throughput. (A consolidation sketch follows below.)

3. Covering indices: Instead of just indexing the WHERE clause, we included the SELECT columns in the index itself. This allowed the engine to fulfill the query entirely from the index without ever touching the heavy base tables. (A covering-index sketch follows below.)

The Result? By moving away from "blanket indexing" and focusing on precision tuning, we achieved a much leaner, more resilient data pipeline that can handle the complexities of the MRRS project without breaking a sweat.

The Lesson: Don't just add an index because it's there. Read the execution plan, understand the write penalty, and optimize for the specific data shape you're working with.

#DataEngineering #SQLOptimization #AzureDataFactory #MRRS #DatabaseTuning #SeniorDataEngineer
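For anyone curious what the SARGable refactor from point 1 looks like in practice, here is a minimal sketch, assuming a SQL Server / Azure SQL dialect. The table and column names (dbo.AssetInspections, inspection_date) are illustrative placeholders, not the actual MRRS schema.

-- Non-SARGable: wrapping the indexed column in YEAR() stops the optimizer
-- from seeking on an index over inspection_date, so it scans instead.
SELECT asset_id, inspection_date
FROM dbo.AssetInspections
WHERE YEAR(inspection_date) = 2023;

-- SARGable rewrite: compare the bare column against a half-open date range,
-- which lets the engine seek directly into the index.
SELECT asset_id, inspection_date
FROM dbo.AssetInspections
WHERE inspection_date >= '2023-01-01'
  AND inspection_date <  '2024-01-01';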
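The index consolidation from point 2, again as a hedged sketch: the index and column names are invented for illustration, and the right composite key order depends entirely on the dominant query patterns in your own workload.

-- Before: three overlapping single-column indices, each of which must be
-- maintained on every INSERT/UPDATE during the daily refresh.
DROP INDEX IX_Inspections_Route  ON dbo.AssetInspections;
DROP INDEX IX_Inspections_Status ON dbo.AssetInspections;
DROP INDEX IX_Inspections_Date   ON dbo.AssetInspections;

-- After: one composite index matching the dominant filter pattern
-- (equality columns first, the range column last).
CREATE INDEX IX_Inspections_Route_Status_Date
    ON dbo.AssetInspections (route_id, status, inspection_date);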
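And the covering-index idea from point 3, sketched with SQL Server's INCLUDE clause; the included column list is hypothetical and would be driven by the SELECT list of the hot query you are tuning.

-- Key columns cover the WHERE clause; INCLUDE carries the SELECT columns
-- at the leaf level so the base table is never touched.
CREATE INDEX IX_Inspections_Route_Date_Covering
    ON dbo.AssetInspections (route_id, inspection_date)
    INCLUDE (asset_id, defect_code, severity);

-- This query can now be answered entirely from the index.
SELECT asset_id, defect_code, severity
FROM dbo.AssetInspections
WHERE route_id = 42
  AND inspection_date >= '2024-01-01';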
