Stop Struggling with Dates in SQL! ⏳

Handling dates is one of the most common tasks for a Data Engineer, but the syntax changes depending on which tool you use. The logic is the same; only the "dialect" is different. Here is how to master the 3 most important date operations across different databases.

1. Extracting a Part (Year, Month, Day)
Use this when you want a specific number (like the month) out of a date.
Postgres/Snowflake: EXTRACT(MONTH FROM date) or DATE_PART('month', date)
MySQL: MONTH(date)
SQL Server: DATEPART(month, date)

2. Truncating (Rounding to the 1st of the Month)
Use this for trend analysis and grouping by month.
Postgres/Snowflake: DATE_TRUNC('month', date)
SQL Server: DATETRUNC(month, date) — available from SQL Server 2022
MySQL: no built-in DATE_TRUNC; use formatting, e.g. DATE_FORMAT(date, '%Y-%m-01')

3. Date Arithmetic (Adding/Subtracting Time)
Use this to find expiry dates or "7 days ago".
Postgres/Snowflake: date + INTERVAL '7 days'
MySQL: DATE_ADD(date, INTERVAL 7 DAY)
SQL Server: DATEADD(day, 7, date)

Pro-Tip for Interviews 💡
Don't worry about memorizing every single dialect's syntax. If you are in an interview, focus on the logic. Simply tell the interviewer: "I know I need to extract the month here; the specific function name might vary by tool, but the logic is to pull the month part."

Which SQL dialect do you use most at work? Let's compare notes in the comments! 👇

#SQL #DataEngineering #PostgreSQL #MySQL #SQLServer #BigData #DataAnalytics #CodingTips
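All three operations can be tried in yet another dialect, SQLite, where extraction uses strftime, truncation uses the 'start of month' modifier, and arithmetic uses day-offset modifiers. A minimal sketch via Python's sqlite3 (the order_date column is invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
row = conn.execute(
    """
    SELECT
      CAST(strftime('%m', order_date) AS INTEGER) AS month_part,   -- 1. extract the month
      date(order_date, 'start of month')          AS month_start,  -- 2. truncate to the 1st
      date(order_date, '+7 days')                 AS week_later    -- 3. date arithmetic
    FROM (SELECT '2024-03-15' AS order_date)
    """
).fetchone()
print(row)  # (3, '2024-03-01', '2024-03-22')
```

Same three operations, fourth spelling — which is exactly the interview point above.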
Master Date Operations Across SQL Databases
More Relevant Posts
🚀 Your SQL queries are SLOW — and you might not even know why.

I've seen developers write perfect SQL logic… but still kill database performance. 💀 The problem isn't the query. It's the habits behind the query.

Here are 6 SQL Query Optimization Techniques every data professional must know (the percentages below are illustrative — real gains depend on your data and engine) 👇

⚡ Quick Summary:

1️⃣ Use Indexes Effectively → 90% Faster
No index on a WHERE column = full table scan every time. One line of index creation can change everything.

2️⃣ Avoid SELECT * → 50% Faster
You don't need all 40 columns. Ask only for what you need. Less I/O = faster results.

3️⃣ Use EXISTS instead of IN → 70% Faster
EXISTS can stop the moment it finds a match. (Many modern optimizers treat the two similarly, but EXISTS also avoids the NULL surprises of NOT IN.) Smart difference. 🧠

4️⃣ Optimize JOINs with Indexed Columns → 80% Faster
Joining on unindexed columns = disaster for large tables. Index your JOIN keys. Always.

5️⃣ Filter Early — WHERE before GROUP BY → 60% Faster
Why group 1 million rows when a WHERE clause can reduce them to 10,000 first?

6️⃣ Avoid Functions on Indexed Columns → 85% Faster
YEAR(log_date) = 2024 breaks the index. log_date >= '2024-01-01' AND log_date < '2025-01-01' uses it perfectly. ✅

💡 The Real Truth: Writing SQL that works is easy. Writing SQL that performs is a skill. In production environments with millions of rows, the difference between optimized and unoptimized SQL is the difference between 2 seconds and 2 minutes. That's the difference between a junior and a senior data professional. 🔥

🎯 Action Step for today: Open any query you wrote this week. Are you using SELECT *? Are you filtering before grouping? Fix one thing. Ship better code. 💪

📌 Save this post — you'll need it every time you write a complex query!
♻️ Repost to help your network write faster, cleaner SQL!
👇 Comment "OPTIMIZE" if you want the full SQL Performance Series!
#SQL #SQLOptimization #QueryOptimization #DataEngineering #DatabasePerformance #DataAnalytics #SQLServer #MySQL #PostgreSQL #DataScience #TechSkills #CareerGrowth #DataAnalyst #SoftwareEngineering #BackendDevelopment #LinkedInLearning #ShankarMaheshwari #SQLTips #DataCommunity #LearnSQL
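Tip 6️⃣ is easy to verify yourself. A small sketch using SQLite's EXPLAIN QUERY PLAN via Python's sqlite3 (the logs table and index are invented): wrapping the indexed column in a function forces a scan, while a bare range predicate lets the engine seek the index.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE logs (id INTEGER PRIMARY KEY, log_date TEXT, msg TEXT);
    CREATE INDEX idx_logs_date ON logs(log_date);
""")

def plan(sql):
    """Collapse EXPLAIN QUERY PLAN output into one string."""
    return " ".join(r[3] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Function on the indexed column: not sargable, so a full table scan.
p1 = plan("SELECT msg FROM logs WHERE strftime('%Y', log_date) = '2024'")
# Range predicate on the bare column: the index is used to seek.
p2 = plan("SELECT msg FROM logs WHERE log_date >= '2024-01-01' AND log_date < '2025-01-01'")

print(p1)  # a SCAN: the index cannot help
print(p2)  # a SEARCH ... USING INDEX idx_logs_date ...
```

The same experiment works in MySQL or Postgres with EXPLAIN.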
This is spot on — SQL performance is where real expertise shows. Small changes like indexing or avoiding SELECT * can make massive differences at scale. Definitely a must-know for anyone working seriously with data.
Been refactoring some messy SQL queries at work lately. Found a pattern that made everything cleaner.

The usual nested-subquery style:

SELECT department_id, AVG(salary)
FROM (
    SELECT * FROM employees
    WHERE hire_date > '2020-01-01'
) recent_hires
GROUP BY department_id
HAVING AVG(salary) > (SELECT AVG(salary) FROM employees);

It works. But reading it is hard sometimes. We can use CTEs instead:

WITH recent_hires AS (
    SELECT * FROM employees
    WHERE hire_date > '2020-01-01'
),
company_avg AS (
    SELECT AVG(salary) AS avg_salary FROM employees
)
SELECT department_id, AVG(salary)
FROM recent_hires
GROUP BY department_id
HAVING AVG(salary) > (SELECT avg_salary FROM company_avg);

Same result. But now each piece has a name. We can read top to bottom.

Why use CTEs:
· Break big queries into small named chunks
· Can reuse the same CTE multiple times
· Makes code reviews easier (people actually understand what you wrote)
· Recursive ones are great for org charts or nested categories

When to skip CTEs:
· Really simple queries (don't over-engineer)
· Huge intermediate results (a temp table performs better)
· Need indexes on the intermediate data

Bottom line: if your SQL has nested subqueries more than one level deep, try a CTE. Makes life easier.

#SQL #CTE #Database #DataEngineering #PostgreSQL #MySQL
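To see the CTE version run end to end, here is a self-contained sketch using SQLite through Python's sqlite3 (the employees data is invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (department_id INTEGER, salary REAL, hire_date TEXT);
    INSERT INTO employees VALUES
        (1, 90000, '2021-05-01'),
        (1, 40000, '2019-03-01'),
        (2, 50000, '2022-07-01'),
        (2, 45000, '2021-01-15');
""")

# Company-wide average is 56,250; among recent hires, only dept 1 beats it.
rows = conn.execute("""
    WITH recent_hires AS (
        SELECT * FROM employees WHERE hire_date > '2020-01-01'
    ),
    company_avg AS (
        SELECT AVG(salary) AS avg_salary FROM employees
    )
    SELECT department_id, AVG(salary)
    FROM recent_hires
    GROUP BY department_id
    HAVING AVG(salary) > (SELECT avg_salary FROM company_avg)
""").fetchall()
print(rows)  # [(1, 90000.0)]
```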
Day 86 – SQL JOIN (INNER, LEFT, RIGHT)

Today I learned how to combine data from multiple tables using JOIN in SQL. JOIN is one of the most powerful concepts in databases because it helps us fetch related data from different tables.

🔹 What is JOIN?
JOIN is used to combine rows from two or more tables based on a related column.

🔹 1️⃣ INNER JOIN
INNER JOIN returns only the rows that have matching values in both tables.
Example:
SELECT E10.name, E10.id, E11.age
FROM E10
INNER JOIN E11 ON E10.id = E11.id;
✔️ Returns only common matching records
✔️ Non-matching data will be ignored

🔹 2️⃣ LEFT JOIN
LEFT JOIN returns:
✔️ All records from the left table
✔️ Matching records from the right table
If no match → shows NULL
Example:
SELECT E12.name, E12.id, E13.age
FROM E12
LEFT JOIN E13 ON E12.id = E13.id
ORDER BY E12.id;
✔️ All data from the left table (E12)
✔️ Non-matching rows show NULL values

🔹 3️⃣ RIGHT JOIN
RIGHT JOIN is the opposite of LEFT JOIN.
✔️ All records from the right table
✔️ Matching records from the left table
✔️ Non-matching left values → NULL
Example:
SELECT E14.name, E14.id, E15.age
FROM E14
RIGHT JOIN E15 ON E14.id = E15.id
ORDER BY E15.id;

🔹 Quick Difference
JOIN Type   | Result
INNER JOIN  | Only matching data
LEFT JOIN   | All left + matching right
RIGHT JOIN  | All right + matching left

🎯 Key Takeaways
Today I learned:
✔️ How to combine tables using JOIN
✔️ The difference between INNER, LEFT, and RIGHT JOIN
✔️ How NULL appears when no match is found
✔️ The importance of a common column (id) in joins

These concepts are very important when working with real-world relational databases.

#SQL #MySQL #Database #BackendDevelopment #DataAnalysis #WebDevelopment
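A runnable sketch of the first two joins, using SQLite via Python's sqlite3 (the people/ages tables are invented; note that older SQLite builds lack RIGHT JOIN, which is simply a LEFT JOIN with the two tables swapped):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE people (id INTEGER, name TEXT);
    CREATE TABLE ages   (id INTEGER, age INTEGER);
    INSERT INTO people VALUES (1, 'Asha'), (2, 'Bilal'), (3, 'Chen');
    INSERT INTO ages   VALUES (1, 30), (2, 25);   -- no row for id 3
""")

# INNER JOIN: only ids present in both tables survive.
inner = conn.execute("""
    SELECT p.name, a.age FROM people p
    INNER JOIN ages a ON p.id = a.id ORDER BY p.id
""").fetchall()

# LEFT JOIN: every left row survives; missing right side becomes NULL (None).
left = conn.execute("""
    SELECT p.name, a.age FROM people p
    LEFT JOIN ages a ON p.id = a.id ORDER BY p.id
""").fetchall()

print(inner)  # [('Asha', 30), ('Bilal', 25)]
print(left)   # [('Asha', 30), ('Bilal', 25), ('Chen', None)]
```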
Mastering SQL pattern matching is key for precise data filtering. The standard `LIKE` operator provides basic string matching with wildcards like `%` and `_`, though it's important to remember its case-insensitive nature in MySQL unless `BINARY` is used. For more sophisticated data interrogation, advanced regular expressions come into play via functions and operators like `REGEXP_LIKE()`, `REGEXP`, and `RLIKE`, offering granular control with special characters such as `^`, `$`, and `.`. These tools are indispensable for developers needing to extract specific data based on complex textual patterns. Explore the full spectrum of SQL pattern matching techniques: https://lnkd.in/gqga6AtF #SQL #Database #PatternMatching #DataEngineering #DeveloperTools
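SQLite ships no REGEXP implementation by default, so this sketch registers one backed by Python's re module (the users table is invented). It exercises % and _ for LIKE, and ^, $, and . for regular expressions:

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
# `x REGEXP y` calls the user function regexp(pattern, string) in SQLite.
conn.create_function(
    "REGEXP", 2,
    lambda pat, s: s is not None and re.search(pat, s) is not None,
)

conn.executescript("""
    CREATE TABLE users (email TEXT);
    INSERT INTO users VALUES ('amy@example.com'), ('bob@test.org'), ('ann@example.com');
""")

# LIKE: % matches any run of characters, _ matches exactly one character.
like_rows = [r[0] for r in conn.execute("SELECT email FROM users WHERE email LIKE 'a__@%'")]

# REGEXP: ^ anchors the start, $ the end, . matches any single character.
re_rows = [r[0] for r in conn.execute(
    r"SELECT email FROM users WHERE email REGEXP '^a..@.*\.com$'"
)]

print(like_rows)  # ['amy@example.com', 'ann@example.com']
print(re_rows)    # ['amy@example.com', 'ann@example.com']
```

In MySQL itself you would write the second query with REGEXP/RLIKE or REGEXP_LIKE() directly, no registration needed.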
Day 9 of my SQL Journey 🚀

Today's challenge: the "Invalid Tweets" problem. For today's solution, I focused on an intuitive approach using string functions in SQL. Sometimes the most effective solutions are the simplest, relying on core built-in functions to handle data validation!

🧠 My Approach:
· Select the tweet_id column from the Tweets table.
· Use the WHERE clause to filter the dataset row by row.
· Apply a string function like LENGTH() (or CHAR_LENGTH()) to evaluate the content column.
· Keep only the rows where that calculated length is strictly greater than 15.

⚡ Key Learnings & SQL Gotchas:
· Knowing Your Dialect: string length functions vary by database! While PostgreSQL and MySQL commonly use LENGTH(), SQL Server uses LEN(). It is always good practice to double-check the documentation for the specific SQL flavor you are using.
· Characters vs. Bytes: a fantastic edge case to consider in real-world applications (especially with social media data) is the difference between byte length and character length. MySQL's LENGTH() counts bytes, meaning a single emoji might count as 3 or 4! Using a function like CHAR_LENGTH() is generally safer when you strictly care about the character count.

📌 Expected Complexity:
· Time: O(N), where N is the total number of tweets. Because we are evaluating a computed function on a column for every single row, the database engine must perform a full table scan.
· Space: O(K), where K is the number of invalid tweets that meet the > 15 criterion — the memory required for the final result set.
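A minimal sketch of the approach against SQLite via Python's sqlite3 (the sample tweets are invented; SQLite's LENGTH() counts characters for text, like CHAR_LENGTH() in MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Tweets (tweet_id INTEGER, content TEXT);
    INSERT INTO Tweets VALUES
        (1, 'Short and sweet'),                  -- exactly 15 chars: valid
        (2, 'This one rambles on far too long'); -- over 15 chars: invalid
""")

# Keep only tweets whose content is strictly longer than 15 characters.
invalid = [r[0] for r in conn.execute(
    "SELECT tweet_id FROM Tweets WHERE LENGTH(content) > 15"
)]
print(invalid)  # [2]
```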
Here are some Essential SQL Tips for Beginners 👇👇

◆ Primary Key = Unique Key + Not Null constraint
◆ To perform a case-insensitive search, use the UPPER() function, e.g. UPPER(customer_name) LIKE 'A%A'
◆ The LIKE operator is for string data types
◆ COUNT(*), COUNT(1), COUNT(0) — all return the same result
◆ Aggregate functions ignore NULL values (COUNT(*) is the exception — it counts rows, NULLs included)
◆ SUM and AVG work on numeric data types; MIN and MAX work on numeric, string & date types; STRING_AGG is for string data
◆ For row-level filtering use WHERE; for aggregate-level filtering use HAVING
◆ UNION ALL includes duplicates; UNION excludes them
◆ If no duplicates are expected, prefer UNION ALL — it's faster!
◆ Always alias a subquery when using its columns in the outer SELECT
◆ Subqueries can be used with NOT IN — but beware: NOT IN matches nothing if the subquery returns a NULL
◆ CTEs are more readable than subqueries — performance-wise both are usually similar
◆ Joining two tables where one has only one row? Use 1=1 as the condition — that's a CROSS JOIN
◆ Window functions work at ROW level
◆ RANK() skips ranks for ties; DENSE_RANK() does not
◆ EXISTS works on true/false conditions — if the subquery returns at least one row, the condition is TRUE and the matching records are returned

💾 Save this for your next SQL interview!
♻️ Repost to help others learn SQL faster!

#SQL #SQLTips #DataAnalytics #DataAnalyst #LearnSQL #SQLForBeginners #DatabaseManagement #SQLQuery #DataEngineering #Analytics #TechEducation #DataScience #SQLServer #MySQL #PostgreSQL #CareerGrowth #LinkedInLearning #DataProfessionals #TechSkills #CodingTips
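Two of the tips above — aggregates ignoring NULLs, and RANK() vs DENSE_RANK() — in one runnable SQLite sketch via Python's sqlite3 (the scores table is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE scores (player TEXT, score INTEGER);
    INSERT INTO scores VALUES ('a', 10), ('b', 10), ('c', 8), ('d', NULL);
""")

# COUNT(*) counts all rows; COUNT(score) and AVG(score) skip the NULL.
counts = conn.execute("SELECT COUNT(*), COUNT(score), AVG(score) FROM scores").fetchone()
print(counts)  # (4, 3, 9.33...) — AVG divides by 3, not 4

# RANK() leaves a gap after the tie; DENSE_RANK() does not.
ranks = conn.execute("""
    SELECT player,
           RANK()       OVER (ORDER BY score DESC) AS rnk,
           DENSE_RANK() OVER (ORDER BY score DESC) AS drnk
    FROM scores
    WHERE score IS NOT NULL
    ORDER BY score DESC, player
""").fetchall()
print(ranks)  # [('a', 1, 1), ('b', 1, 1), ('c', 3, 2)]
```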
MySQL Reality Check: "More Rows" ≠ "Bigger Table"

A beginner asked me recently: "Which table is biggest in the database?"
Most people answer: "The one with the highest row count."
But that's NOT always true!

Let's test it. First, check the tables with the highest row counts:

SELECT table_name, table_rows
FROM information_schema.tables
WHERE table_schema = 'your_database'
ORDER BY table_rows DESC
LIMIT 5;

(Note: for InnoDB, table_rows is an estimate, not an exact count.)

Now check the actual size:

SELECT table_name,
       ROUND((data_length + index_length)/1024/1024, 2) AS size_mb
FROM information_schema.tables
WHERE table_schema = 'your_database'
ORDER BY size_mb DESC
LIMIT 5;

Surprise:
1. A table with fewer rows can be BIGGER
2. A table with more rows can be SMALLER

Why?
· Large TEXT / BLOB columns
· Too many indexes
· Poor schema design

Real-world impact: if you optimize based only on row count, you may fix the wrong table, waste time, and miss the real performance issue.

Lesson: in databases, data size tells the truth, not row count.

Follow along if you're learning backend development.

#MySQL #DatabaseOptimization #SQL #BackendDevelopment #LearningInPublic #MuraliCodes
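The same lesson can be reproduced in SQLite via Python's sqlite3. information_schema is MySQL-specific, so this sketch approximates per-table payload with SUM(LENGTH(...)) — a rough proxy for on-disk size, not an exact measurement — over two invented tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE many_small (id INTEGER, flag INTEGER);
    CREATE TABLE few_big    (id INTEGER, payload TEXT);
""")
# 10,000 tiny rows vs 10 rows each carrying a 100,000-character blob.
conn.executemany("INSERT INTO many_small VALUES (?, ?)", [(i, 1) for i in range(10_000)])
conn.executemany("INSERT INTO few_big VALUES (?, ?)",
                 [(i, "x" * 100_000) for i in range(10)])

rows_small, bytes_small = conn.execute(
    "SELECT COUNT(*), SUM(LENGTH(flag)) FROM many_small").fetchone()
rows_big, bytes_big = conn.execute(
    "SELECT COUNT(*), SUM(LENGTH(payload)) FROM few_big").fetchone()

print(rows_small, bytes_small)  # 1000x the rows, tiny payload
print(rows_big, bytes_big)      # 10 rows, but 1,000,000 characters of data
```

More rows, smaller table — exactly the trap described above.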
Database Indexing – Concepts to know before optimizing.

🔵 Database Engine: The core software (like InnoDB in MySQL) that manages how data is stored, updated, and retrieved.
🔵 Primary Key: A constraint that uniquely identifies each row. It cannot be NULL and works best as an integer for fast comparisons.
🔵 Index: A data structure (in-memory + on-disk) that maps keys to data locations, speeding up searches by avoiding full table scans.
🟢 Disk / Data Block: The smallest storage unit on disk where rows are stored. Data is read/written in blocks, not individual rows.
🟢 Disk I/O: The process of reading data blocks into memory. Indexing reduces the number of disk reads, improving performance.
🟢 DML Operations & Page Splits: INSERT, UPDATE, DELETE can cause page splits, requiring indexes to update pointers, adding overhead.
🟡 B+ Tree: A tree structure used for indexes. Keys are sorted, and leaf nodes store the actual data pointers for fast traversal.
🟡 Clustered Index: Stores data rows in the order of the index key. Only one per table (usually the primary key in MySQL).
🟡 Secondary Index: A separate structure storing primary-key references used to locate the actual data rows.
⚪ Unique Index: Ensures all values are unique but allows NULLs.
⚪ Composite Index: Built on multiple columns. Works best when queried in left-to-right order.
⚪ Covering Index: Contains all the columns a query needs, avoiding extra disk reads.
🟠 Prefix Index: Indexes only part of a column (e.g., the first few characters) to save space. (In PostgreSQL, "partial index" instead means indexing only a subset of rows via a WHERE clause.)
🟠 Cardinality: The number of unique values in a column. High cardinality = better indexing performance.

#Database #DatabaseIndexing #SQL #MySQL #BackendDevelopment #SoftwareEngineering #DataStructures
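The composite-index "left-to-right" rule and the covering-index idea can both be checked with SQLite's EXPLAIN QUERY PLAN via Python's sqlite3 (the orders table and index names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id INTEGER, order_date TEXT, total REAL);
    CREATE INDEX idx_cust_date ON orders(customer_id, order_date);
""")

def plan(sql):
    """Collapse EXPLAIN QUERY PLAN output into one string."""
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Filter on the leftmost indexed column: the engine can seek the index,
# and since both selected columns live in it, the index is also covering.
p1 = plan("SELECT order_date FROM orders WHERE customer_id = 7")

# Filter only on the second column (and select one outside the index):
# no seek is possible.
p2 = plan("SELECT total FROM orders WHERE order_date = '2024-01-01'")

print(p1)  # SEARCH ... USING COVERING INDEX idx_cust_date ...
print(p2)  # SCAN ...
```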
📊 Strengthening My SQL Fundamentals – Date & Time Formatting in MySQL. Today, I explored how to work with date and time functions in MySQL, focusing on the powerful DATE_FORMAT() function to extract structured insights from datetime data. 🔍 Key Takeaways: • Extracted day, weekday, month, and year from a single datetime column • Worked with useful format specifiers like %d, %a, %m, %b, %M, %Y • Improved understanding of how formatted data enhances reporting and analysis. 💡 Why this matters: Formatting date-time data plays a crucial role in: • Building intuitive dashboards • Performing time-based analysis • Writing cleaner, more readable SQL queries Even small improvements like these contribute to writing more efficient and production-ready queries. 🚀 Consistency is key — growing one concept at a time. Baraa Khatib Salkini #SQL #MySQL #DataAnalytics #LearningInPublic #TechSkills #Database #100DaysOfCode
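MySQL's DATE_FORMAT() doesn't exist everywhere — SQLite's closest analogue is strftime(), which supports numeric specifiers like %d, %m, %Y, and %w but not MySQL's name specifiers (%a, %b, %M). A small sketch via Python's sqlite3 with an invented datetime value:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
parts = conn.execute("""
    SELECT strftime('%d', d) AS day,
           strftime('%m', d) AS month_num,
           strftime('%Y', d) AS year,
           strftime('%w', d) AS weekday_num   -- 0 = Sunday ... 6 = Saturday
    FROM (SELECT '2024-03-15 10:30:00' AS d)
""").fetchone()
print(parts)  # ('15', '03', '2024', '5') — 2024-03-15 was a Friday
```

In MySQL, the equivalent would be DATE_FORMAT(d, '%d'), '%m', '%Y', plus the name forms like '%a' and '%M' that SQLite lacks.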