✅ Solved a SQL problem on LeetCode — Day 42 of my SQL Journey 💪

Reactions look random… but behaviour usually isn’t 🎯

Today’s problem was about identifying emotionally consistent users — people who tend to react the same way across different content, not just react more.

The approach:
• Counted total reactions per user and per reaction type
• Used a window function inside aggregation to get totals in one pass
• Calculated the ratio of the dominant reaction to total reactions
• Filtered users whose ratio crossed 60%
• Used RANK() to identify the most frequent reaction per user

What I practised:
• Using window functions inside aggregations
• Applying RANK() to capture dominant behaviour
• Ratio-based filtering instead of raw counts
• Translating consistency into a measurable SQL condition

What stood out — consistency isn’t about reacting more… it’s about reacting the same way, repeatedly. Once you frame it as a ratio problem, the pattern becomes clearly visible. That’s where the real insight lies.

Also noticed — SQL syntax isn’t universal. Things like ::numeric or QUALIFY don’t work in MySQL. Small difference, but important to keep in mind.

Consistent learning, one query at a time 🚀

#SQL #LeetCode #SQLPractice #DataAnalytics #WindowFunctions #LearningInPublic
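The steps above can be sketched roughly as follows. The schema, data, and table name are invented for illustration (the post doesn't share the actual problem tables), and SQLite stands in for the original dialect:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE reactions (user_id INTEGER, reaction TEXT);
INSERT INTO reactions VALUES
  (1,'like'),(1,'like'),(1,'like'),(1,'wow'),   -- user 1: 75% 'like'
  (2,'like'),(2,'wow'),(2,'sad'),(2,'angry');   -- user 2: 25% each
""")

rows = con.execute("""
WITH per_type AS (
    SELECT user_id,
           reaction,
           COUNT(*)                                  AS cnt,
           -- window function over an aggregate: per-user total in one pass
           SUM(COUNT(*)) OVER (PARTITION BY user_id) AS total,
           RANK() OVER (PARTITION BY user_id
                        ORDER BY COUNT(*) DESC)      AS rnk
    FROM reactions
    GROUP BY user_id, reaction
)
SELECT user_id, reaction, ROUND(1.0 * cnt / total, 2) AS ratio
FROM per_type
WHERE rnk = 1                          -- most frequent reaction per user
  AND 1.0 * cnt / total >= 0.60       -- dominant reaction covers >= 60%
ORDER BY user_id
""").fetchall()

print(rows)  # [(1, 'like', 0.75)] — only user 1 is "consistent"
```

Note the `1.0 *` before the division: without it, integer division would silently truncate the ratio to zero, which is exactly the cross-dialect pitfall the post warns about.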
✅ Solved a SQL problem on LeetCode — Day 49 of my SQL Journey 💪

Consistency isn’t always visible… but patterns reveal it over time 📈

Today’s problem was about identifying users with persistent behaviour — not just active users, but those who show the same action consistently across days.

I worked on analysing behaviour patterns to:
• Track user actions across consecutive days
• Group continuous activity using ROW_NUMBER() logic
• Identify streaks by adjusting date gaps
• Filter users with meaningful streak length (≥ 5 days)
• Select the strongest pattern per user using RANK()

What I practised:
• Window functions for streak detection
• Handling sequential data with date logic
• GROUP BY + HAVING for filtering patterns
• Translating consistency into measurable conditions

What stood out — activity can be random… consistency is never random. When actions repeat over time, they stop being noise and become behaviour. That’s where real insights start.

SQL doesn’t just analyse data. It uncovers patterns hidden in time.

Consistent learning, one query at a time 🚀

#SQL #LeetCode #SQLPractice #DataAnalytics #LearningInPublic
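The "group continuous activity using ROW_NUMBER() logic" step is the classic gaps-and-islands trick: within a consecutive run of dates, `date − row_number` is constant, so it can serve as a group key. A minimal sketch with an invented table (not the original problem's schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE activity (user_id INTEGER, action_date TEXT);
INSERT INTO activity VALUES
  (1,'2024-01-01'),(1,'2024-01-02'),(1,'2024-01-03'),
  (1,'2024-01-04'),(1,'2024-01-05'),          -- user 1: 5-day streak
  (2,'2024-01-01'),(2,'2024-01-03');          -- user 2: gap on Jan 2
""")

rows = con.execute("""
WITH numbered AS (
    SELECT user_id, action_date,
           -- date minus row number stays constant within a consecutive run
           julianday(action_date)
             - ROW_NUMBER() OVER (PARTITION BY user_id
                                  ORDER BY action_date) AS grp
    FROM activity
),
streaks AS (
    SELECT user_id, COUNT(*) AS streak_len
    FROM numbered
    GROUP BY user_id, grp          -- one group per unbroken streak
)
SELECT user_id, MAX(streak_len) AS best_streak
FROM streaks
GROUP BY user_id
HAVING MAX(streak_len) >= 5        -- keep only meaningful streaks
""").fetchall()

print(rows)  # [(1, 5)] — user 2's broken streak is filtered out
```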
✅ Solved a SQL problem on LeetCode — Day 38 of my SQL Journey 💪

Most queries give you answers… but some force you to define the question first ❓

Today’s problem was about identifying loyal customers — not just frequent buyers, but those who show consistent behaviour over time.

I worked on analysing customer patterns to:
• Ensure at least 3 meaningful purchase actions
• Check long-term engagement (30+ days of activity)
• Filter out unstable behaviour through the refund ratio
• Combine all conditions cleanly using GROUP BY + HAVING

What stood out to me — loyalty isn’t a single metric. A customer can purchase often, yet still not be reliable. Because behaviour is not about volume… it’s about consistency. That’s where the real insight lies.

Also noticed something important — boolean logic inside aggregations (like SUM(condition)) works smoothly in MySQL, but small differences like integer division can break results in other SQL environments. Tiny detail. Big impact.

SQL isn’t just about querying data. It’s about defining reality through logic.

Consistent learning, one query at a time 🚀

#SQL #LeetCode #SQLPractice #DataAnalytics #LearningInPublic
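A sketch of the SUM(condition) pattern and the integer-division pitfall the post mentions. The table, data, and thresholds are invented for illustration; SQLite is used here, which (like several engines, unlike MySQL) truncates integer division:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (customer_id INTEGER, refunded INTEGER);
INSERT INTO orders VALUES (1,0),(1,0),(1,1),(1,0),   -- 1 refund in 4 orders
                          (2,1),(2,1),(2,0);         -- 2 refunds in 3 orders
""")

# Integer division silently truncates the refund ratio to 0
int_div, safe_div = con.execute("""
SELECT SUM(refunded) / COUNT(*),
       1.0 * SUM(refunded) / COUNT(*)   -- multiply by 1.0 to force decimals
FROM orders WHERE customer_id = 1
""").fetchone()
print(int_div, safe_div)   # 0 0.25 — same expression, very different answers

# Combining conditions cleanly with GROUP BY + HAVING, as in the post
# (the 0.5 refund-ratio threshold is made up for the demo)
loyal = con.execute("""
SELECT customer_id
FROM orders
GROUP BY customer_id
HAVING COUNT(*) >= 3                          -- enough purchase actions
   AND 1.0 * SUM(refunded) / COUNT(*) < 0.5   -- stable refund behaviour
""").fetchall()
print(loyal)   # [(1,)]
```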
Hello Everyone,

At first, SQL felt simple—just SELECT, WHERE, GROUP BY… But then I hit a wall:
👉 What if the logic itself depends on another query?

That’s when I discovered advanced SQL concepts—and everything changed. In this part, I explored:
🔥 Subqueries → a query inside a query (mind = blown 🤯)
🔥 CTEs (WITH clause) → cleaner, more readable logic
🔥 Views → save and reuse complex queries like tables
🔥 Breaking complex problems into smaller, manageable steps

The biggest shift for me:
👉 Good analysts don’t write complex queries… they write clear ones.

Now SQL feels less like coding… and more like structured thinking 🧠

💬 What confused you more—subqueries or CTEs?

#PostgreSQL #SQL #DataAnalytics #DataAnalysis #BusinessIntelligence #LearningJourney #Upskilling #DataScience #CareerGrowth #TechLearning
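The three concepts can be shown side by side with the same logic written three ways. Table and column names here are invented for the demo, and SQLite stands in for PostgreSQL:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sales (region TEXT, amount INTEGER);
INSERT INTO sales VALUES ('north',100),('north',250),('south',80);
""")

# 1. Subquery: the logic is nested inside the outer query
sub = con.execute("""
SELECT region FROM
  (SELECT region, SUM(amount) AS total FROM sales GROUP BY region) t
WHERE t.total > 150
""").fetchall()

# 2. CTE: the same logic, named and moved to the top
cte = con.execute("""
WITH region_totals AS (
  SELECT region, SUM(amount) AS total FROM sales GROUP BY region
)
SELECT region FROM region_totals WHERE total > 150
""").fetchall()

# 3. View: the logic is saved once and reusable across queries
con.execute("""
CREATE VIEW region_totals_v AS
  SELECT region, SUM(amount) AS total FROM sales GROUP BY region
""")
view = con.execute(
    "SELECT region FROM region_totals_v WHERE total > 150").fetchall()

print(sub == cte == view)  # True — all three return [('north',)]
```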
SQL optimization isn't about writing less code. It's about understanding what happens AFTER you hit run.

Most engineers I know can write SQL. Very few understand what it costs. Here's everything that actually matters:

1. The query optimizer isn't magic
It builds an execution plan based on statistics. Old or missing statistics = bad plan = slow query. Update your stats. Trust the plan less.

2. SARGability is everything
SARG = Search ARGument Able. If your filter can't use an index, it scans the whole table.
This breaks SARGability:
WHERE YEAR(created_at) = 2024
This doesn't:
WHERE created_at >= '2024-01-01' AND created_at < '2025-01-01'
Same result. Completely different cost.

3. Implicit conversions are silent killers
Comparing a VARCHAR column to an integer literal? The engine quietly converts the column's value on every row before comparing. Your index? Ignored.

4. Execution plans > gut feeling
Before optimizing anything, read the plan. Look for: table scans, key lookups, sort operators. These are your cost red flags.

5. Indexes aren't free
Every index you add speeds up reads but slows down writes. Design for your actual workload.

The real lesson? Writing SQL is a skill. Understanding SQL cost is a discipline. One gets the query working. The other keeps the system alive at 3 AM.

Which of these did nobody teach you formally? 👇

Found this insightful? ♻️ Repost to your network and follow Sahil Alam for more.

#SQL #DataEngineering #Analytics #Debugging #DataQuality #Learning
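The SARGability point can be seen directly in an execution plan. A small sketch using SQLite's EXPLAIN QUERY PLAN (SQLite has no YEAR(), so strftime() plays the same index-defeating role; the table and index names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER, created_at TEXT)")
con.execute("CREATE INDEX idx_created ON events(created_at)")

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row is the plan detail
    return " ".join(r[-1] for r in con.execute("EXPLAIN QUERY PLAN " + sql))

# Non-SARGable: the function wraps the column, hiding it from the index
bad = plan("SELECT * FROM events WHERE strftime('%Y', created_at) = '2024'")

# SARGable: a plain range predicate on the indexed column
good = plan("""SELECT * FROM events
               WHERE created_at >= '2024-01-01'
                 AND created_at < '2025-01-01'""")

print("SCAN" in bad)           # True  — full table scan
print("USING INDEX" in good)   # True  — index range search
```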
You do not need to memorise every SQL function. You need to know where to find them when you need them.

📌 So I built a complete SQL reference guide: 115 functions and clauses across 10 categories, each with syntax, a plain-English explanation, and a real copy-paste example.

Here is everything inside 👇
📐 Data Retrieval: SELECT, DISTINCT, ORDER BY, LIMIT, UNION, INTERSECT
🔗 Joins: INNER, LEFT, RIGHT, FULL OUTER, CROSS, SELF JOIN
📊 Aggregation: COUNT, SUM, AVG, GROUP BY, HAVING, ROLLUP
🔀 Filtering and Logic: IN, BETWEEN, LIKE, EXISTS, CASE WHEN, NULLIF
📝 String Functions: CONCAT, TRIM, SUBSTRING, REPLACE, STRING_AGG
📅 Date and Time: YEAR, MONTH, DATE_TRUNC, DATEDIFF, EOMONTH
🔢 Math and Numeric: ROUND, FLOOR, CAST, NULLIF, GREATEST, LEAST
⚡ Window Functions: ROW_NUMBER, RANK, LAG, LEAD, PERCENT_RANK
🏗️ Table Operations: CREATE, INSERT, UPDATE, DELETE, ALTER, INDEX, VIEW
🧠 Advanced SQL: CTEs, Subqueries, PIVOT, EXPLAIN, Recursive CTEs

115 functions. 10 categories. Real examples for every single one.

🎯 Which SQL function do you use most in your daily work? 👇

#SQL #DataAnalyst #SQLReference #DataAnalytics #SQLServer #MySQL #PostgreSQL #DataScience #Analytics #FreeResource #SaveThis #BusinessIntelligence #DataDriven #CareerGrowth #Upskilling #TechSkills #DataEngineering #WindowFunctions #DataCommunity #DataVisualization
SQL is not code. It's a conversation with your data.

The moment this clicked for me — everything got easier. Instead of staring at syntax, I started writing in plain English first:

→ "Show me all sales above ₹10,000 last month" = SELECT with WHERE + date filter
→ "Which city had the highest return rate?" = GROUP BY + ORDER BY + LIMIT 1
→ "Find customers who bought more than twice" = GROUP BY + HAVING COUNT(*) > 2

The logic was already there in the question. SQL is just the translation.

If you're starting out — stop memorising syntax. Start asking your data questions in plain language. The clauses will start making sense on their own.

What was YOUR SQL lightbulb moment? Drop it below 💡

#SQL #DataAnalysis #LearnSQL #DataSkills #DataAnalytics
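The third translation, run against a toy table (names and data invented). Note that "more than twice" needs `> 2`, not `> 1` — a strict inequality, so exactly two purchases would not qualify:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE purchases (customer TEXT, amount INTEGER);
INSERT INTO purchases VALUES
  ('asha',5000),('asha',12000),('asha',700),   -- 3 purchases
  ('ravi',15000),('ravi',200),                 -- 2 purchases
  ('meera',900);                               -- 1 purchase
""")

# "Find customers who bought more than twice" -> GROUP BY + HAVING
rows = con.execute("""
SELECT customer
FROM purchases
GROUP BY customer
HAVING COUNT(*) > 2        -- strictly more than two purchases
""").fetchall()

print(rows)  # [('asha',)] — ravi's two purchases are not "more than twice"
```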
✅ Solved a SQL problem on LeetCode — Day 48 of my SQL Journey 💪

Not all active users are valuable… some only look busy 👀

Today’s problem was about identifying zombie sessions — users who scroll endlessly but never click, never buy, never engage. Active on paper. Dead in reality.

The approach:
• Calculated session duration using MIN() and MAX() timestamps
• Counted scrolls and clicks with conditional aggregation
• Computed the click-to-scroll ratio inside HAVING
• Filtered sessions: duration > 30 min, scrolls ≥ 5, ratio < 0.20, zero purchases

What I practised:
• Conditional aggregation using CASE WHEN
• Session-level grouping
• Ratio logic inside HAVING
• Translating behaviour into SQL filters

What stood out — scrolling is passive. Clicking is intent. Buying is a commitment. A session full of scrolls but no clicks is just a ghost. That’s where the real insight lies.

SQL doesn’t just count events. It reads the story behind them.

Consistent learning, one query at a time 🚀

#SQL #LeetCode #DataAnalytics #LearningInPublic #SQLPractice
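The four filters above can be sketched as one GROUP BY + HAVING query. The schema and data are invented to match the post's description; SQLite stands in for the original dialect:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE events (session_id INTEGER, ts TEXT, event TEXT);
INSERT INTO events VALUES
  (1,'2024-01-01 10:00','scroll'),(1,'2024-01-01 10:10','scroll'),
  (1,'2024-01-01 10:20','scroll'),(1,'2024-01-01 10:30','scroll'),
  (1,'2024-01-01 10:45','scroll'),                     -- zombie: 45 min, 0 clicks
  (2,'2024-01-01 11:00','scroll'),(2,'2024-01-01 11:40','click');
""")

rows = con.execute("""
SELECT session_id
FROM events
GROUP BY session_id
HAVING (julianday(MAX(ts)) - julianday(MIN(ts))) * 24 * 60 > 30       -- > 30 min
   AND SUM(CASE WHEN event = 'scroll' THEN 1 ELSE 0 END) >= 5         -- >= 5 scrolls
   AND 1.0 * SUM(CASE WHEN event = 'click' THEN 1 ELSE 0 END)
         / SUM(CASE WHEN event = 'scroll' THEN 1 ELSE 0 END) < 0.20   -- low ratio
   AND SUM(CASE WHEN event = 'purchase' THEN 1 ELSE 0 END) = 0        -- no purchases
""").fetchall()

print(rows)  # [(1,)] — session 2 is short and actually clicks
```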
Most SQL beginners write queries like this:

SELECT *
FROM (
  SELECT user_id, SUM(revenue) AS total
  FROM orders
  GROUP BY 1
) t
WHERE t.total > 1000;

It works. But it's hard to read. Here's the same query using a CTE:

WITH user_revenue AS (
  SELECT user_id, SUM(revenue) AS total
  FROM orders
  GROUP BY 1
)
SELECT *
FROM user_revenue
WHERE total > 1000;

Same result. Way easier to read.

So what IS a CTE? Think of it like giving your subquery a name and moving it to the top. That's it.

Why?
→ Your query reads top to bottom like a story
→ Each step has a clear, meaningful name
→ You can chain multiple CTEs together
→ Debugging becomes much easier

Bonus: Recursive CTEs let you walk through hierarchical data — like org charts or folder trees — in pure SQL.

If nested subqueries are giving you headaches, try CTEs. You won't go back.

#SQL #DataAnalytics #DataEngineering #LearningInPublic
💡 𝐂𝐓𝐄 𝐦𝐚𝐝𝐞 𝐦𝐲 𝐒𝐐𝐋 𝟏𝟎𝐱 𝐜𝐥𝐞𝐚𝐧𝐞𝐫

Earlier, my SQL queries looked like this:
• Nested queries inside nested queries 😵💫
• Hard to read
• Even harder to debug

Then I discovered CTEs (Common Table Expressions)… and everything changed.

👉 Instead of writing one giant messy query, I started breaking it into steps.

𝐋𝐢𝐤𝐞 𝐭𝐡𝐢𝐬:
• Step 1: Get base data
• Step 2: Apply filters
• Step 3: Do aggregations
• Step 4: Final output
All in a clean, readable flow.

💻 𝐄𝐱𝐚𝐦𝐩𝐥𝐞 𝐦𝐢𝐧𝐝𝐬𝐞𝐭:

WITH step1 AS (...),
     step2 AS (...),
     step3 AS (...)
SELECT * FROM step3;

✨ 𝐖𝐡𝐚𝐭 𝐈 𝐥𝐨𝐯𝐞 𝐚𝐛𝐨𝐮𝐭 𝐂𝐓𝐄:
• Makes queries readable
• Easy to debug step by step
• Reusable logic in the same query
• Feels like writing code, not chaos

💡 Write SQL in a way that anyone can easily understand.

#SQL #DataAnalytics #CTE #DataAnalyst #LearningSQL #TechGrowth 🚀
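Filling in the post's step1 → step2 → step3 skeleton with a concrete (invented) pipeline, base data → filter → aggregate, run in SQLite:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (city TEXT, amount INTEGER, status TEXT);
INSERT INTO orders VALUES ('pune',500,'ok'),('pune',300,'ok'),
                          ('pune',900,'cancelled'),('delhi',200,'ok');
""")

rows = con.execute("""
WITH base AS (                  -- Step 1: get base data
    SELECT city, amount, status FROM orders
),
valid AS (                      -- Step 2: apply filters
    SELECT city, amount FROM base WHERE status = 'ok'
),
totals AS (                     -- Step 3: do aggregations
    SELECT city, SUM(amount) AS total FROM valid GROUP BY city
)
SELECT * FROM totals            -- Step 4: final output
ORDER BY total DESC
""").fetchall()

print(rows)  # [('pune', 800), ('delhi', 200)] — cancelled order excluded
```

Each CTE can be tested on its own by temporarily changing the final line to `SELECT * FROM base` or `SELECT * FROM valid`, which is what makes step-by-step debugging so easy.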
**Day 8 of my 30 Days SQL Series 🚀**

Today’s question was “Confirmation Rate” from LeetCode. At first, I thought it was just a simple confirmed / total calculation. But when I actually started solving it, I got stuck in the logic part.

---

### 💡 How I understood the problem:

For each user:
👉 how many confirmation requests they received
👉 and out of those, how many they actually confirmed

---

### 😵💫 Where I got stuck:

My first instinct was to use:
`WHERE action = 'confirmed'`

But then I realized:
👉 this would remove all “timeout” rows
👉 and my total count would become wrong

That’s when I understood that **sometimes you shouldn’t filter rows, but control the calculation instead**.

---

### ⚙️ What I did next:

👉 Used **LEFT JOIN** to make sure even users with no confirmation requests are included (with 0)
👉 Used **CASE WHEN inside COUNT** to count only “confirmed” actions without removing other rows
👉 Used **COUNT(*)** to get total attempts
👉 And finally **IFNULL + ROUND** to handle null values and format the output

---

### 🧠 What I learned today:

* Don’t blindly use WHERE — think about what data you might lose
* CASE WHEN helps with **conditional counting**
* Got a clearer idea of when to use LEFT JOIN

---

### 📊 Simple example:

If a user had 2 requests:
* 1 confirmed
* 1 timeout
👉 rate = 1/2 = 0.5

If a user had no requests at all:
👉 rate = 0 (not NULL)

---

Today wasn’t about writing the query fast; it was more about understanding the logic properly. Learning slowly, but it’s making more sense now 💪

#Day8 #SQL #LearningInPublic
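Putting those four ingredients together — LEFT JOIN, CASE WHEN inside COUNT, COUNT(*), and IFNULL + ROUND — yields one common shape of the query (not necessarily the author's exact solution; the tables below are simplified versions of the LeetCode ones):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Signups (user_id INTEGER);
CREATE TABLE Confirmations (user_id INTEGER, action TEXT);
INSERT INTO Signups VALUES (1),(2),(3);
INSERT INTO Confirmations VALUES
  (1,'confirmed'),(1,'timeout'),   -- user 1: 1 of 2 confirmed
  (2,'timeout');                   -- user 2: 0 of 1 confirmed
-- user 3 never received a request
""")

rows = con.execute("""
SELECT s.user_id,
       ROUND(IFNULL(
           -- COUNT ignores NULLs, so the CASE counts only confirmed rows
           COUNT(CASE WHEN c.action = 'confirmed' THEN 1 END) * 1.0
             / COUNT(*), 0), 2) AS confirmation_rate
FROM Signups s
LEFT JOIN Confirmations c ON s.user_id = c.user_id  -- keep request-less users
GROUP BY s.user_id
ORDER BY s.user_id
""").fetchall()

print(rows)  # [(1, 0.5), (2, 0.0), (3, 0.0)] — user 3 gets 0, not NULL
```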