SQL Progress: Logic & CASE Statements!

Today I solved another Medium challenge on LeetCode. This problem was a great lesson in calculating percentages and rates directly in SQL.

What I learned today:

1. AVG with CASE WHEN: AVG(CASE WHEN condition THEN 1.0 ELSE 0.0 END) computes a rate in a single, very readable expression.
2. Handling NULLs in rates: a LEFT JOIN between the Signups and Confirmations tables ensures that users with no actions are still included, and the CASE expression maps them to 0 because they never meet the "confirmed" criterion.
3. Precision with ROUND: ROUND(..., 2) keeps the final confirmation rate clean and matches the required format (0.00).

I would love to learn from your experience: are there cleaner methods?

A little done consistently is better than a lot done in bursts.

#SQL #DataEngineering #PostgreSQL #LeetCode #100DaysOfCode #DataAnalytics #ProblemSolving
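To make the pattern concrete, here is a minimal, runnable sketch of the AVG(CASE WHEN ...) + LEFT JOIN + ROUND combination, using Python's built-in sqlite3. The table shapes follow the LeetCode "Confirmation Rate" problem, but the sample rows are invented for illustration:

```python
import sqlite3

# Tiny in-memory dataset: user 6 signed up but has no confirmation
# requests at all, so the LEFT JOIN is what keeps them in the result.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Signups (user_id INT, time_stamp TEXT);
CREATE TABLE Confirmations (user_id INT, time_stamp TEXT, action TEXT);
INSERT INTO Signups VALUES (3,'2020-03-21'), (7,'2020-01-04'),
                           (2,'2020-07-29'), (6,'2020-12-09');
INSERT INTO Confirmations VALUES
  (3,'2021-01-06','timeout'),   (3,'2021-07-14','timeout'),
  (7,'2021-06-12','confirmed'), (7,'2021-06-13','confirmed'),
  (7,'2021-06-14','confirmed'), (2,'2021-01-22','confirmed'),
  (2,'2021-02-28','timeout');
""")

# AVG over 1.0/0.0 per row is exactly the confirmation rate; rows with
# a NULL action (no confirmations) fall into the ELSE branch and count as 0.
rows = conn.execute("""
SELECT s.user_id,
       ROUND(AVG(CASE WHEN c.action = 'confirmed' THEN 1.0 ELSE 0.0 END), 2)
         AS confirmation_rate
FROM Signups s
LEFT JOIN Confirmations c ON s.user_id = c.user_id
GROUP BY s.user_id
ORDER BY s.user_id;
""").fetchall()
print(rows)
```

With this data: user 2 confirms 1 of 2 requests (0.5), user 3 none of 2 (0.0), user 6 has no requests (0.0), and user 7 confirms all 3 (1.0).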
🤯 Struggling to choose between CTEs and Subqueries in SQL?

Not every SQL problem is hard… sometimes it’s just about choosing the better way to write it. Here’s a simple breakdown 👇

🔹 CTEs (WITH clause)
• Defined before the main query
• Make complex queries more structured
• Improve readability and debugging
• Can be reused within the same query
• Support recursive logic

🔹 Subqueries
• Written inside another query
• Useful for quick filtering and conditions
• Can become harder to read when nested
• Limited reusability

📊 What this means in practice:
• Both approaches can solve the same problem
• The difference comes in clarity and maintainability
• As queries grow, structure starts to matter more

💡 Simple way to decide:
• Small & straightforward task → Subquery
• Multi-step or complex logic → CTE

💬 What do you usually prefer while writing SQL?

#SQL #DataAnalytics #DataAnalyst #PostgreSQL #LearningInPublic #SQLTips #CTE #Subquery
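To show the "same problem, two shapes" point, here is a small sketch in Python's sqlite3: a hypothetical orders table (name and columns are invented for illustration) queried once with a subquery and once with an equivalent CTE:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INT, customer TEXT, amount REAL);
INSERT INTO orders VALUES (1,'a',50), (2,'a',150), (3,'b',200), (4,'c',30);
""")

# Subquery version: the per-customer totals live inline in the FROM clause.
sub = conn.execute("""
SELECT customer FROM (
  SELECT customer, SUM(amount) AS total
  FROM orders GROUP BY customer
) t
WHERE t.total > 100
ORDER BY customer;
""").fetchall()

# CTE version: the same step is named up front, which scales better
# once there are several such steps.
cte = conn.execute("""
WITH totals AS (
  SELECT customer, SUM(amount) AS total
  FROM orders GROUP BY customer
)
SELECT customer FROM totals
WHERE total > 100
ORDER BY customer;
""").fetchall()
print(sub, cte)
```

Both return the customers whose total spend exceeds 100; only the structure differs.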
📊 Recently, I took some time to revisit core SQL concepts while practicing query-based problems on LeetCode. It was a great way to strengthen fundamentals like:
• JOINs and subqueries
• GROUP BY & aggregation
• Window functions
• Writing optimized queries for real-world scenarios

Along with problem-solving, I also revised important database concepts such as ACID properties and Normalization, which are essential for designing reliable and scalable systems.

To keep everything in one place, I’ve created a concise PDF that includes:
✔ Common SQL queries (interview-focused)
✔ A clear explanation of ACID properties
✔ Normalization concepts with examples

Sharing it here in case it helps others who are preparing for SQL interviews or brushing up on their basics.

#SQL #LeetCode #Database #BackendDevelopment #InterviewPreparation #LearningJourney
SQL Progress: Self-Joins, NUMERIC, AVG, GROUP BY!

Today I solved a very interesting challenge on LeetCode: "Average Time of Process per Machine". It was a great exercise in using joins and math functions together.

What I learned today:

1. Self-join: I learned how to join a table with itself (Activity a1, Activity a2). This is the cleanest way to compare two different rows—like a "start" and an "end" timestamp—for the same process.
2. Aggregate functions: I used AVG() to find the average time and ROUND(..., 3) to keep the results as precise as required.
3. PostgreSQL specifics: casting with ::numeric matters because PostgreSQL's two-argument ROUND() is defined for numeric, not for double precision.
4. Grouping: GROUP BY machine_id makes sure the calculation is done for each machine separately.

I’ve attached my solution below. Question for the experts: is there a way to solve this without using a join? I’d love to hear your thoughts!

A little done consistently is better than a lot done in bursts.

#SQL #DataEngineering #PostgreSQL #LeetCode #100DaysOfCode #ProblemSolving
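Here is a runnable sketch of the self-join pattern, using Python's sqlite3 with made-up sample rows shaped like the LeetCode Activity table (SQLite does not need the ::numeric cast, that part is PostgreSQL-specific):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Activity (machine_id INT, process_id INT,
                       activity_type TEXT, timestamp REAL);
INSERT INTO Activity VALUES
  (0, 0, 'start', 0.712), (0, 0, 'end', 1.520),
  (0, 1, 'start', 3.140), (0, 1, 'end', 4.120),
  (1, 0, 'start', 0.550), (1, 0, 'end', 1.550);
""")

# Self-join: pair each process's 'start' row with its matching 'end' row,
# then average the per-process durations for each machine.
rows = conn.execute("""
SELECT a1.machine_id,
       ROUND(AVG(a2.timestamp - a1.timestamp), 3) AS processing_time
FROM Activity a1
JOIN Activity a2
  ON  a1.machine_id = a2.machine_id
  AND a1.process_id = a2.process_id
  AND a1.activity_type = 'start'
  AND a2.activity_type = 'end'
GROUP BY a1.machine_id
ORDER BY a1.machine_id;
""").fetchall()
print(rows)
```

As for avoiding the join: one common alternative is conditional aggregation, summing CASE WHEN activity_type = 'end' THEN timestamp ELSE -timestamp END per machine and dividing by COUNT(DISTINCT process_id), since the signed sum equals the sum of all (end - start) durations.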
Just earned the SQL 50 badge on LeetCode 🎯

Honestly, if you’re starting with SQL, this is all you need. These 50 problems cover almost every important concept, from basics to advanced queries.

Big takeaway? 👉 Window functions are a game changer. Once you get them, a lot of “complex” problems become straightforward.

If you’re preparing for interviews or strengthening your backend/data skills, I highly recommend going through this set.

📌 Want MySQL notes covering everything from basic to advanced? Check out my repository: https://lnkd.in/g8CTyMKC

Consistency > Everything.

#SQL #LeetCode #DataStructures #CodingJourney #BackendDevelopment #Learning #Tech
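As a tiny illustration of why window functions feel like a game changer, here is a ranking query that needs no self-join at all, sketched in Python's sqlite3 with an invented Employee table (window functions need SQLite ≥ 3.25):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Employee (name TEXT, dept TEXT, salary INT);
INSERT INTO Employee VALUES
  ('a','eng',100), ('b','eng',90), ('c','eng',100), ('d','hr',80);
""")

# DENSE_RANK() ranks salaries within each department in one pass;
# ties ('a' and 'c') share rank 1, and 'b' gets rank 2, not 3.
rows = conn.execute("""
SELECT name, dept,
       DENSE_RANK() OVER (PARTITION BY dept ORDER BY salary DESC) AS rnk
FROM Employee
ORDER BY dept, rnk, name;
""").fetchall()
print(rows)
```

The pre-window-function version of this query would need a correlated subquery or a self-join counting distinct higher salaries per department.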
**Day 8 of my 30 Days SQL Series 🚀**

Today’s question was “Confirmation Rate” from LeetCode. At first, I thought it was just a simple confirmed / total calculation. But when I actually started solving it, I got stuck on the logic.

---

### 💡 How I understood the problem:

For each user:
👉 how many confirmation requests they received
👉 and out of those, how many they actually confirmed

---

### 😵‍💫 Where I got stuck:

My first instinct was to use:
`WHERE action = 'confirmed'`

But then I realized:
👉 this would remove all “timeout” rows
👉 and my total count would become wrong

That’s when I understood that **sometimes you shouldn’t filter rows, but control the calculation instead**.

---

### ⚙️ What I did next:

👉 Used **LEFT JOIN** to make sure even users with no confirmation requests are included (with 0)
👉 Used **CASE WHEN inside COUNT** to count only “confirmed” actions without removing other rows
👉 Used **COUNT(*)** to get total attempts
👉 And finally **IFNULL + ROUND** to handle null values and format the output

---

### 🧠 What I learned today:

* Don’t blindly use WHERE — think about what data you might lose
* CASE WHEN helps with **conditional counting**
* Got a clearer idea of when to use LEFT JOIN

---

### 📊 Simple example:

If a user had 2 requests:
* 1 confirmed
* 1 timeout
👉 rate = 1/2 = 0.5

If a user had no requests at all:
👉 rate = 0 (not NULL)

---

Today wasn’t about writing the query fast; it was about understanding the logic properly. Learning slowly, but it’s making more sense now 💪

#Day8 #SQL #LearningInPublic
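A runnable sketch of those steps (LEFT JOIN, CASE WHEN inside COUNT, IFNULL + ROUND), using Python's sqlite3 with invented sample rows. One deliberate tweak for illustration: counting c.user_id instead of * counts only real requests, so the no-request user produces a 0/0 division, which SQLite returns as NULL, and that is exactly where IFNULL earns its keep:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Signups (user_id INT);
CREATE TABLE Confirmations (user_id INT, action TEXT);
INSERT INTO Signups VALUES (1), (2), (3);
INSERT INTO Confirmations VALUES
  (1, 'confirmed'), (1, 'timeout'),  -- user 1: rate 1/2
  (3, 'timeout');                    -- user 3: rate 0/1; user 2 has no requests
""")

# COUNT(CASE WHEN ...) counts only 'confirmed' rows (the CASE yields NULL
# otherwise, and COUNT skips NULLs); dividing by zero yields NULL in SQLite,
# which IFNULL turns into the required 0.
rows = conn.execute("""
SELECT s.user_id,
       ROUND(IFNULL(
         COUNT(CASE WHEN c.action = 'confirmed' THEN 1 END) * 1.0
           / COUNT(c.user_id),
         0), 2) AS confirmation_rate
FROM Signups s
LEFT JOIN Confirmations c ON s.user_id = c.user_id
GROUP BY s.user_id
ORDER BY s.user_id;
""").fetchall()
print(rows)
```

This reproduces the worked example from the post: 2 requests with 1 confirmed gives 0.5, and no requests gives 0 rather than NULL.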
🚀 Day 36 of My SQL Learning Journey

Today I worked on a challenging SQL problem involving consecutive records and learned an important lesson along the way 🔥

🔹 Problem: Find records where the people count is ≥ 100 for at least 3 consecutive entries
🔗 Problem Link: https://lnkd.in/gNBER5CQ

🔹 Final Solution:

WITH temp AS (
    SELECT id, visit_date, people,
           id - ROW_NUMBER() OVER (ORDER BY id) AS grp
    FROM Stadium
    WHERE people >= 100
)
SELECT id, visit_date, people
FROM temp
WHERE grp IN (
    SELECT grp
    FROM temp
    GROUP BY grp
    HAVING COUNT(*) >= 3
);

🔹 Key Learning 💡: The approach uses ID-based grouping, because the problem depends on consecutive entries (IDs), not consecutive dates.

🔹 Concepts Used:
• ROW_NUMBER() for sequence handling
• Grouping consecutive records
• Window functions + aggregation

💡 Debugging mistakes helped me understand the problem more deeply than just solving it!

Consistency continues 🚀

#SQL #LeetCode #WindowFunctions #ProblemSolving #90DaysOfCode #CodingJourney
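The solution above runs end-to-end in Python's sqlite3 (SQLite ≥ 3.25 for ROW_NUMBER). The sample rows are modeled on the problem's example, where only the run of ids 5–8 stays ≥ 100 for three or more consecutive entries; the trick is that id minus its row number is constant within each consecutive run:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Stadium (id INT, visit_date TEXT, people INT);
INSERT INTO Stadium VALUES
  (1,'2017-01-01',10),  (2,'2017-01-02',109), (3,'2017-01-03',150),
  (4,'2017-01-04',99),  (5,'2017-01-05',145), (6,'2017-01-06',1455),
  (7,'2017-01-07',199), (8,'2017-01-09',188);
""")

# Gaps-and-islands: after filtering to people >= 100, id - ROW_NUMBER()
# is the same value for every row of a consecutive-id run, so grouping
# by it isolates the runs, and HAVING keeps runs of length >= 3.
rows = conn.execute("""
WITH temp AS (
    SELECT id, visit_date, people,
           id - ROW_NUMBER() OVER (ORDER BY id) AS grp
    FROM Stadium
    WHERE people >= 100
)
SELECT id, visit_date, people
FROM temp
WHERE grp IN (
    SELECT grp FROM temp GROUP BY grp HAVING COUNT(*) >= 3
)
ORDER BY id;
""").fetchall()
print(rows)
```

Ids 2–3 form a run of only two, so they are filtered out even though both exceed 100.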
🚀 Day 51 of My SQL Learning Journey

Today I solved a SQL problem involving joins and conditional filtering 🔥

🔹 Problem: Find employees whose bonus is less than 1000 or who didn’t receive any bonus
🔗 Problem Link: https://lnkd.in/gF6_J-Vr

🔹 Solution:

SELECT e.name, b.bonus
FROM Employee e
LEFT JOIN Bonus b ON e.empId = b.empId
WHERE b.bonus < 1000 OR b.bonus IS NULL;

🔹 Key Learning:
• Using LEFT JOIN to include unmatched records
• Handling NULL values properly
• Applying conditions with multiple filters

💡 Understanding joins is essential for solving real-world database problems!

Consistency continues 🚀

#SQL #LeetCode #90DaysOfCode #CodingJourney #ProblemSolving
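The same query runs as-is in Python's sqlite3; the sample rows below are modeled on the problem's example. The IS NULL branch is essential: without it, employees with no Bonus row vanish, because NULL < 1000 evaluates to NULL, not true:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Employee (empId INT, name TEXT, supervisor INT, salary INT);
CREATE TABLE Bonus (empId INT, bonus INT);
INSERT INTO Employee VALUES
  (3,'Brad',NULL,4000), (1,'John',3,1000), (2,'Dan',3,2000), (4,'Thomas',3,4000);
INSERT INTO Bonus VALUES (2,500), (4,2000);
""")

# LEFT JOIN keeps Brad and John (no Bonus row -> b.bonus IS NULL);
# Dan qualifies with bonus < 1000; Thomas (2000) is filtered out.
rows = conn.execute("""
SELECT e.name, b.bonus
FROM Employee e
LEFT JOIN Bonus b ON e.empId = b.empId
WHERE b.bonus < 1000 OR b.bonus IS NULL;
""").fetchall()
print(rows)
```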
🚀 Day 19/30 of my SQL Problem Solving Challenge

💻 Problem Statement: Find the total distance traveled by each user and sort users by highest distance. If distances are equal, sort by name.

🧠 Approach: Used LEFT JOIN to include all users, SUM() with GROUP BY to calculate total distance, and ORDER BY for sorting. Also handled NULL values using COALESCE.

✨ Key Learning: Break the problem into steps: join the data, group it, apply aggregation, then sort. Also learned a new function, COALESCE, which replaces NULL by returning the first non-null value in its argument list (used here to convert a NULL distance into 0).

#SQL #30DaysOfSQL #MYSQL #CodingJourney #SDE #ProblemSolving #Streak #DailyLearning
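A sketch of those steps in Python's sqlite3. The post doesn't give the schema, so the Users/Rides table and column names here are assumptions chosen for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Users (id INT, name TEXT);
CREATE TABLE Rides (id INT, user_id INT, distance INT);
INSERT INTO Users VALUES (1,'Alice'), (2,'Bob'), (3,'Cara');
INSERT INTO Rides VALUES (1,1,120), (2,1,80), (3,2,200);
""")

# LEFT JOIN keeps Cara (no rides); SUM over her NULL distances is NULL,
# so COALESCE maps it to 0; the tie between Alice and Bob (200 each)
# is broken alphabetically by the second ORDER BY key.
rows = conn.execute("""
SELECT u.name, COALESCE(SUM(r.distance), 0) AS travelled_distance
FROM Users u
LEFT JOIN Rides r ON u.id = r.user_id
GROUP BY u.id, u.name
ORDER BY travelled_distance DESC, u.name;
""").fetchall()
print(rows)
```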
✅ Solved a SQL problem on LeetCode — Day 47 of my SQL Journey 💪

Learning doesn’t always move in a straight line… sometimes it spirals 🔄

Today’s problem was about identifying students who follow a study spiral pattern — studying multiple subjects in a structured, repeating cycle.

The approach:
• Tracked session order using ROW_NUMBER()
• Measured gaps between sessions with LAG() and DATEDIFF()
• Filtered sequences with gaps longer than 2 days
• Detected repeating cycles using MOD() on row position
• Counted students with at least 3 subjects across multiple cycles

What I practised:
• Window functions for sequence tracking
• Time gap detection using date functions
• Sequential pattern recognition
• Using HAVING for conditional aggregation

What stood out: a single session tells you nothing… a sequence tells you everything. Patterns don’t announce themselves; they hide in the order of events. That’s where the real insight lies.

SQL doesn’t just query data. It helps read the story behind it.

Consistent learning, one query at a time 🚀

#SQL #LeetCode #DataAnalytics #LearningInPublic #SQLPractice
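The gap-detection step is the reusable core of that approach. Here is a minimal sketch with LAG(), using Python's sqlite3 and a hypothetical sessions table (DATEDIFF() is MySQL, so this SQLite version uses the julianday() difference instead):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE StudySessions (student TEXT, session_date TEXT);
INSERT INTO StudySessions VALUES
  ('s1','2024-01-01'), ('s1','2024-01-02'), ('s1','2024-01-06');
""")

# LAG() fetches the previous session date per student; the julianday()
# difference is the gap in days (NULL for each student's first session).
rows = conn.execute("""
SELECT student, session_date,
       CAST(julianday(session_date) - julianday(
            LAG(session_date) OVER (
              PARTITION BY student ORDER BY session_date)) AS INT) AS gap_days
FROM StudySessions
ORDER BY student, session_date;
""").fetchall()
print(rows)
```

From here, a filter like gap_days > 2 (wrapped in an outer query, since window functions can't appear in WHERE) marks where one cycle ends and the next begins.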
✅ Solved a SQL problem on LeetCode — Day 48 of my SQL Journey 💪

Not all active users are valuable… some just look busy 👀

Today’s problem was about identifying zombie sessions — users who scroll endlessly but never click, never buy, never engage. Active on paper. Dead in reality.

The approach:
• Calculated session duration using MIN() and MAX() timestamps
• Counted scrolls and clicks with conditional aggregation
• Computed the click-to-scroll ratio inside HAVING
• Filtered sessions: duration > 30 min, scrolls ≥ 5, ratio < 0.20, zero purchases

What I practised:
• Conditional aggregation using CASE WHEN
• Session-level grouping
• Ratio logic inside HAVING
• Translating behaviour into SQL filters

What stood out: scrolling is passive, clicking is intent, buying is commitment. A session full of scrolls but no clicks is just a ghost. That’s where the real insight lies.

SQL doesn’t just count events. It reads the story behind them.

Consistent learning, one query at a time 🚀

#SQL #LeetCode #DataAnalytics #LearningInPublic #SQLPractice
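A sketch of that filter stack (session-level grouping, conditional aggregation, ratio logic inside HAVING) in Python's sqlite3. The thresholds mirror the post; the events table, its columns, and the sample rows are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (session_id INT, ts TEXT, event TEXT);
INSERT INTO events VALUES
  -- Session 1: 40 minutes, 5 scrolls, 0 clicks, 0 purchases -> zombie.
  (1,'2024-01-01 10:00:00','scroll'), (1,'2024-01-01 10:10:00','scroll'),
  (1,'2024-01-01 10:20:00','scroll'), (1,'2024-01-01 10:30:00','scroll'),
  (1,'2024-01-01 10:40:00','scroll'),
  -- Session 2: similar scrolling but with clicks -> engaged, not a zombie.
  (2,'2024-01-01 09:00:00','scroll'), (2,'2024-01-01 09:10:00','scroll'),
  (2,'2024-01-01 09:20:00','scroll'), (2,'2024-01-01 09:30:00','scroll'),
  (2,'2024-01-01 09:40:00','scroll'), (2,'2024-01-01 09:45:00','click'),
  (2,'2024-01-01 09:50:00','click');
""")

# SUM(event = '...') is SQLite shorthand for SUM(CASE WHEN event = '...'
# THEN 1 ELSE 0 END); the julianday difference times 24*60 is the
# session duration in minutes.
zombies = conn.execute("""
SELECT session_id
FROM events
GROUP BY session_id
HAVING (julianday(MAX(ts)) - julianday(MIN(ts))) * 24 * 60 > 30
   AND SUM(event = 'scroll') >= 5
   AND SUM(event = 'click') * 1.0 / SUM(event = 'scroll') < 0.20
   AND SUM(event = 'purchase') = 0;
""").fetchall()
print(zombies)
```

Session 2 passes the duration and scroll thresholds but fails the ratio test (2 clicks / 5 scrolls = 0.4), so only session 1 is flagged.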