Day 86 – SQL JOIN (INNER, LEFT, RIGHT)

Today I learned how to combine data from multiple tables using JOIN in SQL. JOIN is one of the most powerful concepts in databases because it lets us fetch related data from different tables.

🔹 What is JOIN?
JOIN combines rows from two or more tables based on a related column.

🔹 1️⃣ INNER JOIN
INNER JOIN returns only the rows that have matching values in both tables.

SELECT E10.name, E10.id, E11.age
FROM E10
INNER JOIN E11 ON E10.id = E11.id;

✔️ Returns only the common matching records
✔️ Non-matching rows are ignored

🔹 2️⃣ LEFT JOIN
LEFT JOIN returns all records from the left table plus the matching records from the right table. Where there is no match, the right-side columns show NULL.

SELECT E12.name, E12.id, E13.age
FROM E12
LEFT JOIN E13 ON E12.id = E13.id
ORDER BY E12.id;

✔️ All rows from the left table (E12)
✔️ Non-matching rows show NULL values

🔹 3️⃣ RIGHT JOIN
RIGHT JOIN is the opposite of LEFT JOIN: all records from the right table, matching records from the left table, and NULL where the left side has no match.

SELECT E14.name, E14.id, E15.age
FROM E14
RIGHT JOIN E15 ON E14.id = E15.id
ORDER BY E15.id;

🔹 Quick Difference
INNER JOIN → only matching rows
LEFT JOIN  → all left + matching right
RIGHT JOIN → all right + matching left

🎯 Key Takeaways
✔️ How to combine tables using JOIN
✔️ The difference between INNER, LEFT, and RIGHT JOIN
✔️ How NULL appears when no match is found
✔️ The importance of a common column (id) in joins

These concepts are essential when working with real-world relational databases.

#SQL #MySQL #Database #BackendDevelopment #DataAnalysis #WebDevelopment
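The NULL behaviour described above is easy to verify locally. Here is a minimal sketch using Python's built-in sqlite3, reusing the post's E10/E11 table names with made-up rows. RIGHT JOIN is omitted because older SQLite versions lack it, but swapping the table order in a LEFT JOIN gives the same result.

```python
import sqlite3

# In-memory demo of INNER vs LEFT JOIN; rows are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE E10 (id INTEGER, name TEXT);
    CREATE TABLE E11 (id INTEGER, age INTEGER);
    INSERT INTO E10 VALUES (1, 'Asha'), (2, 'Ravi'), (3, 'Meena');
    INSERT INTO E11 VALUES (1, 25), (2, 30);   -- no row for id 3
""")

# INNER JOIN: only ids present in both tables (1 and 2) survive.
inner = conn.execute(
    "SELECT E10.name, E10.id, E11.age "
    "FROM E10 INNER JOIN E11 ON E10.id = E11.id ORDER BY E10.id"
).fetchall()
print(inner)   # id 3 is dropped entirely

# LEFT JOIN: every row of E10; age is NULL (None in Python) with no match.
left = conn.execute(
    "SELECT E10.name, E10.id, E11.age "
    "FROM E10 LEFT JOIN E11 ON E10.id = E11.id ORDER BY E10.id"
).fetchall()
print(left)    # ('Meena', 3, None) survives with a NULL age
conn.close()
```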
Naveen T’s Post
More Relevant Posts
📊 Strengthening My SQL Fundamentals – Date & Time Formatting in MySQL

Today, I explored how to work with date and time functions in MySQL, focusing on the powerful DATE_FORMAT() function to extract structured insights from datetime data.

🔍 Key Takeaways:
• Extracted the day, weekday, month, and year from a single datetime column
• Worked with useful format specifiers like %d, %a, %m, %b, %M, %Y
• Improved my understanding of how formatted data enhances reporting and analysis

💡 Why this matters: formatting date-time data plays a crucial role in:
• Building intuitive dashboards
• Performing time-based analysis
• Writing cleaner, more readable SQL queries

Even small improvements like these contribute to writing more efficient, production-ready queries.

🚀 Consistency is key — growing one concept at a time.

Baraa Khatib Salkini #SQL #MySQL #DataAnalytics #LearningInPublic #TechSkills #Database #100DaysOfCode
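Most of MySQL's DATE_FORMAT() specifiers mirror C-style strftime, so you can prototype them locally in plain Python. One trap worth knowing: in MySQL %M is the full month name, while in strftime %M means minutes (Python uses %B for the month name). A quick sketch with a made-up timestamp:

```python
from datetime import datetime

dt = datetime(2024, 3, 15, 10, 30)   # arbitrary sample timestamp

print(dt.strftime("%d"))   # day of month            -> '15'
print(dt.strftime("%a"))   # abbreviated weekday     -> 'Fri'
print(dt.strftime("%m"))   # month number            -> '03'
print(dt.strftime("%b"))   # abbreviated month name  -> 'Mar'
print(dt.strftime("%B"))   # full month name (MySQL uses %M) -> 'March'
print(dt.strftime("%Y"))   # 4-digit year            -> '2024'
```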
Here are some Essential SQL Tips for Beginners 👇

◆ Primary Key = Unique Key + NOT NULL constraint
◆ For a case-insensitive search, use the UPPER() function, e.g. UPPER(customer_name) LIKE 'A%A'
◆ The LIKE operator is for string data types
◆ COUNT(*), COUNT(1), COUNT(0) — all return the same result
◆ All aggregate functions ignore NULL values
◆ SUM and AVG work on numeric types; MIN and MAX work on numeric, string, and date types; STRING_AGG is for strings
◆ For row-level filtering use WHERE; for aggregate-level filtering use HAVING
◆ UNION ALL includes duplicates; UNION excludes them
◆ If no duplicates are expected, prefer UNION ALL — it's faster!
◆ Always alias a subquery when using its columns in the outer SELECT
◆ Subqueries can be used with a NOT IN condition
◆ CTEs are more readable than subqueries — performance-wise both are similar
◆ Joining two tables on an always-true condition like 1 = 1 pairs every row with every row — that's a CROSS JOIN
◆ Window functions work at row level
◆ RANK() skips ranks after ties; DENSE_RANK() does not
◆ EXISTS is a true/false check — if the subquery returns at least one row, the condition is TRUE and the matching outer rows are returned

💾 Save this for your next SQL interview!
♻️ Repost to help others learn SQL faster!

#SQL #SQLTips #DataAnalytics #DataAnalyst #LearnSQL #SQLForBeginners #DatabaseManagement #SQLQuery #DataEngineering #Analytics #TechEducation #DataScience #SQLServer #MySQL #PostgreSQL #CareerGrowth #LinkedInLearning #DataProfessionals #TechSkills #CodingTips
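Two of the tips above are worth seeing with your own eyes: aggregates skipping NULLs, and UNION vs UNION ALL. A minimal sketch on SQLite via Python's sqlite3 (the behaviour here matches standard SQL; the table and values are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?)",
                 [(100,), (200,), (None,)])

# COUNT(*) counts all rows; COUNT(col) and AVG(col) skip the NULL.
star, col, avg = conn.execute(
    "SELECT COUNT(*), COUNT(amount), AVG(amount) FROM sales"
).fetchone()
print(star, col, avg)   # 3 rows total, 2 non-NULL, average of 100 and 200

# UNION removes the duplicate row; UNION ALL keeps both copies.
union = conn.execute("SELECT 1 UNION SELECT 1").fetchall()
union_all = conn.execute("SELECT 1 UNION ALL SELECT 1").fetchall()
print(len(union), len(union_all))
conn.close()
```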
Stop Struggling with Dates in SQL! ⏳

Handling dates is one of the most common tasks for a Data Engineer, but the syntax changes depending on which tool you use. The logic is the same; the "dialect" is different. Here is how to master the 3 most important date operations across different databases.

1. Extracting a part (year, month, day)
Use this when you want a specific number (like the month) out of a date.
Postgres/Snowflake: EXTRACT(MONTH FROM date) or DATE_PART('month', date)
MySQL: MONTH(date)
SQL Server: DATEPART(month, date)

2. Truncating (rounding to the 1st of the month)
Use this for trend analysis and grouping by month.
Postgres/Snowflake: DATE_TRUNC('month', date)
SQL Server: DATETRUNC(month, date)
MySQL: no DATE_TRUNC — use a formatting function instead, e.g. DATE_FORMAT(date, '%Y-%m-01')

3. Date arithmetic (adding/subtracting time)
Use this to find expiry dates or "7 days ago".
Postgres/Snowflake: date + INTERVAL '7 days'
MySQL: DATE_ADD(date, INTERVAL 7 DAY)
SQL Server: DATEADD(day, 7, date)

Pro-Tip for Interviews 💡 Don't worry about memorizing every single dialect's syntax. If you are in an interview, focus on the logic. Simply tell the interviewer: "I know I need to extract the month here; the specific function name might vary by tool, but the logic is to pull the month part."

Which SQL dialect do you use most at work? Let's compare notes in the comments! 👇

#SQL #DataEngineering #PostgreSQL #MySQL #SQLServer #BigData #DataAnalytics #CodingTips
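The three operations are dialect-free once you strip the SQL away, which makes plain Python dates a handy way to check what a query should return. A sketch with a made-up sample date:

```python
from datetime import date, timedelta

d = date(2024, 7, 19)                 # arbitrary sample date

month = d.month                       # 1. extract a part (the month)
first_of_month = d.replace(day=1)     # 2. truncate to the 1st of the month
week_later = d + timedelta(days=7)    # 3. date arithmetic: add 7 days

print(month)            # 7
print(first_of_month)   # 2024-07-01
print(week_later)       # 2024-07-26
```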
Been refactoring some messy SQL queries at work lately. Found a pattern that made everything cleaner.

The usual nested-subquery style:

-------
SELECT department_id, AVG(salary)
FROM (
    SELECT * FROM employees
    WHERE hire_date > '2020-01-01'
) recent_hires
GROUP BY department_id
HAVING AVG(salary) > (
    SELECT AVG(salary) FROM employees
);
-------

It works. But it is hard to read. We can use CTEs instead:

--------
WITH recent_hires AS (
    SELECT * FROM employees
    WHERE hire_date > '2020-01-01'
),
company_avg AS (
    SELECT AVG(salary) AS avg_salary FROM employees
)
SELECT department_id, AVG(salary)
FROM recent_hires
GROUP BY department_id
HAVING AVG(salary) > (SELECT avg_salary FROM company_avg);
--------

Same result, but now each piece has a name and you can read top to bottom.

Why use CTEs:
· Break big queries into small named chunks
· Reuse the same CTE multiple times
· Make code reviews easier (people actually understand what you wrote)
· Recursive CTEs are great for org charts or nested categories

When to skip CTEs:
· Really simple queries (don't over-engineer)
· Huge intermediate results (a temp table performs better)
· When you need indexes on the intermediate data

Bottom line: if your SQL has nested subqueries more than one level deep, try a CTE. Makes life easier.

#SQL #CTE #Database #DataEngineering #PostgreSQL #MySQL
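The CTE version above is standard SQL, so it runs unchanged on SQLite. A minimal sketch via Python's sqlite3, with an invented employees table, to confirm both the query and the result:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (department_id INTEGER, salary REAL, hire_date TEXT);
    INSERT INTO employees VALUES
        (1, 50000, '2019-05-01'),
        (1, 90000, '2021-03-01'),
        (2, 60000, '2022-07-01'),
        (2, 55000, '2018-01-01');
""")

# Company average is 63750; among post-2020 hires only dept 1 beats it.
rows = conn.execute("""
    WITH recent_hires AS (
        SELECT * FROM employees WHERE hire_date > '2020-01-01'
    ),
    company_avg AS (
        SELECT AVG(salary) AS avg_salary FROM employees
    )
    SELECT department_id, AVG(salary)
    FROM recent_hires
    GROUP BY department_id
    HAVING AVG(salary) > (SELECT avg_salary FROM company_avg)
""").fetchall()
print(rows)   # [(1, 90000.0)]
conn.close()
```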
🚀 **Understanding `RANK()` vs `DENSE_RANK()` in MySQL (Made Simple!)**

If you're working with SQL analytics, you've probably come across `RANK()` and `DENSE_RANK()`. They look similar — but behave differently when there are ties. Let's break it down 👇

🔢 **Sample Data (Scores Table)**
Name  Score
A     100
B     90
C     90
D     80

🥇 **Using `RANK()`**
SELECT Name, Score,
       RANK() OVER (ORDER BY Score DESC) AS score_rank
FROM scores;
(Note: `rank` is a reserved word in MySQL 8, so pick a different alias or quote it with backticks.)

👉 Output:
A  100  1
B  90   2
C  90   2
D  80   4
📌 Notice: after a tie, ranks are **skipped** (no rank 3).

🥈 **Using `DENSE_RANK()`**
SELECT Name, Score,
       DENSE_RANK() OVER (ORDER BY Score DESC) AS score_dense_rank
FROM scores;

👉 Output:
A  100  1
B  90   2
C  90   2
D  80   3
📌 Notice: no gaps — the ranking stays **continuous**.

⚡ **Key Difference**
* `RANK()` → skips ranks after ties
* `DENSE_RANK()` → no skipped ranks

💡 **When to Use What?**
* Use `RANK()` when position gaps matter (e.g., competition ranking 🏆)
* Use `DENSE_RANK()` when you want continuous ranking (e.g., grouping tiers 📊)

#MySQL #SQL #DataAnalytics #Database #Learning #TechTips
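SQLite (3.25+) supports the same window functions with the same tie semantics, so the post's table can be checked directly from Python. A sketch via sqlite3 computing both ranks in one query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE scores (Name TEXT, Score INTEGER);
    INSERT INTO scores VALUES ('A', 100), ('B', 90), ('C', 90), ('D', 80);
""")

rows = conn.execute("""
    SELECT Name, Score,
           RANK()       OVER (ORDER BY Score DESC) AS score_rank,
           DENSE_RANK() OVER (ORDER BY Score DESC) AS score_dense_rank
    FROM scores
""").fetchall()
for r in rows:
    print(r)   # D ends up with RANK 4 but DENSE_RANK 3
conn.close()
```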
🚀 Your SQL queries are SLOW — and you might not even know why.

I've seen developers write perfect SQL logic… but still kill database performance. 💀 The problem isn't the query. It's the habits behind the query. Here are 6 SQL Query Optimization Techniques every data professional must know 👇

⚡ Quick Summary:

1️⃣ Use Indexes Effectively → 90% Faster
No index on a WHERE column = a full table scan every time. One line of index creation can change everything.

2️⃣ Avoid SELECT * → 50% Faster
You don't need all 40 columns. Ask only for what you need. Less I/O = faster results.

3️⃣ Use EXISTS instead of IN → 70% Faster
IN evaluates every row; EXISTS stops the moment it finds a match. Smart difference. 🧠

4️⃣ Optimize JOINs with Indexed Columns → 80% Faster
Joining on unindexed columns is a disaster for large tables. Index your JOIN keys. Always.

5️⃣ Filter Early — WHERE before GROUP BY → 60% Faster
Why group 1 million rows when a WHERE clause can reduce them to 10,000 first?

6️⃣ Avoid Functions on Indexed Columns → 85% Faster
YEAR(log_date) = 2024 breaks the index. log_date >= '2024-01-01' uses it perfectly. ✅

💡 The Real Truth: writing SQL that works is easy. Writing SQL that performs is a skill. In production environments with millions of rows, the difference between optimized and unoptimized SQL is the difference between 2 seconds and 2 minutes. That's the difference between a junior and a senior data professional. 🔥

🎯 Action Step for today: open any query you wrote this week. Are you using SELECT *? Are you filtering before grouping? Fix one thing. Ship better code. 💪

📌 Save this post — you'll need it every time you write a complex query!
♻️ Repost to help your network write faster, cleaner SQL!
👇 Comment "OPTIMIZE" if you want the full SQL Performance Series!
#SQL #SQLOptimization #QueryOptimization #DataEngineering #DatabasePerformance #DataAnalytics #SQLServer #MySQL #PostgreSQL #DataScience #TechSkills #CareerGrowth #DataAnalyst #SoftwareEngineering #BackendDevelopment #LinkedInLearning #ShankarMaheshwari #SQLTips #DataCommunity #LearnSQL
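Technique 6 is easy to observe in a query planner. A minimal sketch using SQLite's EXPLAIN QUERY PLAN via Python's sqlite3: wrapping the indexed column in a function forces a scan, while a plain range predicate uses the index. (The planner's exact wording varies across SQLite versions; the table and index names are made up.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE logs (log_date TEXT, msg TEXT);
    CREATE INDEX idx_log_date ON logs(log_date);
""")

def plan(sql):
    # Join the planner's 'detail' column into one string for inspection.
    return " ".join(row[-1] for row in
                    conn.execute("EXPLAIN QUERY PLAN " + sql))

# Function on the indexed column: the index cannot be used.
bad = plan("SELECT * FROM logs WHERE substr(log_date, 1, 4) = '2024'")

# Equivalent range predicate: the index is used.
good = plan("SELECT * FROM logs WHERE log_date >= '2024-01-01' "
            "AND log_date < '2025-01-01'")

print(bad)    # a SCAN of the whole table
print(good)   # a SEARCH ... USING INDEX idx_log_date
conn.close()
```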
This is spot on — SQL performance is where real expertise shows. Small changes like indexing or avoiding SELECT * can make massive differences at scale. Definitely a must-know for anyone working seriously with data.
Most SQL developers write queries top to bottom. SQL doesn't run them that way. This one gap causes more bugs, more confusion, and more slow queries than almost anything else.

Here's the actual order SQL executes:
• FROM — load the table first
• JOIN — combine the tables
• WHERE — filter the rows
• GROUP BY — group what's left
• HAVING — filter the groups
• SELECT — NOW it picks your columns
• ORDER BY — sort the final result

SELECT runs sixth. Not first.

This is why you can't use a column alias from your SELECT in your WHERE clause — WHERE runs before SELECT has even decided what the columns are called.

This is why filtering in WHERE is faster than filtering in HAVING — WHERE cuts rows before grouping; HAVING cuts after.

This is why SELECT * on a wide table is expensive even if you only need 2 columns — every column still gets read and returned before anything can trim the result.

Three rules that will save you hours:
→ Filter as early as possible — in WHERE, not HAVING, unless the condition needs an aggregate
→ Never reference SELECT aliases in WHERE or GROUP BY
→ Subqueries in FROM run first — use them to pre-filter large tables before joining

Every SQL bug I've ever fixed started with forgetting this. Save this. Share it with every SQL writer on your team.

Did you know this already — or did this just explain a bug you've had? 👇

#SQL #DataEngineering #Azure #Databricks #DataEngineer
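The pipeline above can be watched on concrete data. A minimal sketch on SQLite via Python's sqlite3, with a made-up orders table; the comments number each clause by its position in the execution order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (region TEXT, amount INTEGER);
    INSERT INTO orders VALUES
        ('east', 10), ('east', 200), ('west', 300), ('west', 400);
""")

rows = conn.execute("""
    SELECT region, SUM(amount) AS total   -- 6. SELECT picks the columns
    FROM orders                           -- 1. FROM loads the table
    WHERE amount > 50                     -- 3. WHERE drops ('east', 10)
    GROUP BY region                       -- 4. groups the 3 surviving rows
    HAVING SUM(amount) > 250              -- 5. drops east (group total 200)
    ORDER BY total DESC                   -- 7. sorts the final result
""").fetchall()
print(rows)   # only west survives, with total 300 + 400 = 700
conn.close()
```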
🚀 Day 32/100 — SQL Subqueries: Thinking Inside Queries 🧠💻

Today I learned Subqueries, a powerful concept in SQL used to solve complex problems step by step.

📊 What is a Subquery?
👉 A query inside another query, used to break a complex problem into simpler parts.

📌 What I explored today:
🔹 Subqueries in SELECT
🔹 Subqueries in WHERE
🔹 Subqueries in FROM
🔹 Nested queries for filtering

💻 Example scenario: find customers who made orders above the average order value.

📌 Example Query:
SELECT customer_id, order_amount
FROM orders
WHERE order_amount > (
    SELECT AVG(order_amount) FROM orders
);

📊 How it works:
👉 Inner query → calculates the average
👉 Outer query → filters for higher-than-average orders

🔥 Key Learnings:
💡 Subqueries help solve complex business questions
💡 They make SQL more flexible and powerful
💡 They are commonly asked about in interviews

🚀 Real-world use cases:
✔ Filtering based on averages
✔ Comparing values within datasets
✔ Dynamic data selection

🔥 Pro Tip: use subqueries when you need step-by-step filtering, or when JOINs become complex.

📊 Tools Used: SQL | MySQL

✅ Day 32 complete.
👉 Quick question: do you prefer solving problems using JOINs or subqueries? 🤔

#Day32 #100DaysOfData #SQL #Subqueries #DataAnalytics #LearningInPublic #CareerGrowth #JobReady #InterviewPrep
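The example query is standard SQL, so it runs as-is on SQLite. A minimal sketch via Python's sqlite3 with a few made-up orders, showing the inner-then-outer evaluation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id INTEGER, order_amount REAL);
    INSERT INTO orders VALUES (1, 50), (2, 150), (3, 250), (4, 100);
""")

# Inner query: AVG(order_amount) = (50 + 150 + 250 + 100) / 4 = 137.5.
# Outer query: keeps only rows above that average.
rows = conn.execute("""
    SELECT customer_id, order_amount
    FROM orders
    WHERE order_amount > (SELECT AVG(order_amount) FROM orders)
    ORDER BY customer_id
""").fetchall()
print(rows)   # only customers 2 and 3 beat the average
conn.close()
```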
🚀 Another SQL problem solved — and this one was a great exercise in aggregation + date handling!

**Problem:** For each user, find the number of days between their first and last post in 2021 — but only include users who posted at least twice.

**My Approach:**
* Filter data for the year 2021
* Group by `user_id`
* Use `MIN()` and `MAX()` to get the first and last post dates
* Subtract the dates to get the duration
* Use `HAVING` to ensure at least 2 posts

**Final Query (PostgreSQL):**
SELECT user_id,
       MAX(post_date)::date - MIN(post_date)::date AS days_between
FROM posts
WHERE EXTRACT(YEAR FROM post_date) = 2021
GROUP BY user_id
HAVING COUNT(*) >= 2;

**Key Learnings 💡**
* `MAX(date) - MIN(date)` is a clean way to compute an activity span
* `HAVING` is essential when filtering aggregated results
* Type casting (`::date`) ensures correct subtraction behavior
* Always think: filter → group → aggregate → filter again

Shoutout to Nick Singh 📕🐒 for creating and sharing such practical SQL problems 🙌

#SQL #PostgreSQL #DataAnalytics #BackendDevelopment #DataEngineering #InterviewPrep
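The same problem can be replayed on SQLite, where date subtraction is spelled with julianday() instead of Postgres's ::date cast, and the year is extracted with strftime(). A sketch via Python's sqlite3 with made-up posts:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE posts (user_id INTEGER, post_date TEXT);
    INSERT INTO posts VALUES
        (1, '2021-01-01'), (1, '2021-01-31'),
        (2, '2021-06-01'),                     -- only one 2021 post: excluded
        (3, '2020-12-31'), (3, '2021-03-01'), (3, '2021-03-11');
""")

# filter (WHERE, year = 2021) -> group -> aggregate -> filter again (HAVING)
rows = conn.execute("""
    SELECT user_id,
           CAST(julianday(MAX(post_date)) - julianday(MIN(post_date))
                AS INTEGER) AS days_between
    FROM posts
    WHERE strftime('%Y', post_date) = '2021'
    GROUP BY user_id
    HAVING COUNT(*) >= 2
    ORDER BY user_id
""").fetchall()
print(rows)   # user 1: 30 days; user 3: 10 days (the 2020 post is filtered out)
conn.close()
```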