Day 10/50 – #SQLChallenge 🚀
Solved the “Managers with at Least 5 Direct Reports” problem on LeetCode.
✅ Approach: Used a subquery with GROUP BY and HAVING
✅ Key Concept: Aggregating data to filter groups based on a count
💡 Advanced Insight: The HAVING clause is applied after grouping, making it ideal for filtering aggregated results (like COUNT). This is different from WHERE, which filters rows before grouping. Understanding this distinction is key when working with real-world data.
🔍 Takeaway: Combining subqueries with aggregation helps solve hierarchical data problems like identifying managers and their reporting structure.
10 days of consistency — building strong fundamentals 💪
#SQL #LeetCode #Database #CodingChallenge #ProblemSolving #LearningInPublic #DeveloperJourney
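The subquery + GROUP BY/HAVING approach described above can be sketched end to end. This is a minimal illustration (not the exact submission), run here against an in-memory SQLite copy of the problem's Employee table (id, name, department, managerId):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Employee (id INT, name TEXT, department TEXT, managerId INT);
INSERT INTO Employee VALUES
    (101, 'John',  'A', NULL),
    (102, 'Dan',   'A', 101),
    (103, 'James', 'A', 101),
    (104, 'Amy',   'A', 101),
    (105, 'Anne',  'A', 101),
    (106, 'Ron',   'B', 101);
""")

# Inner query: managerIds that appear at least 5 times as someone's manager.
# HAVING filters the groups AFTER aggregation, which WHERE cannot do,
# because WHERE runs before GROUP BY and never sees COUNT(*).
query = """
SELECT name
FROM Employee
WHERE id IN (
    SELECT managerId
    FROM Employee
    GROUP BY managerId
    HAVING COUNT(*) >= 5
);
"""
print(conn.execute(query).fetchall())  # [('John',)]
```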
❄️ LeetCode Daily Challenge
📅 Day 29 of 50 Days SQL Challenge
Today’s challenge was a perfect blend of time-based filtering, aggregation, and business logic — exactly what we deal with in real-world data scenarios.
📌 Problem: Find Golden Hour Customers
🔗 Problem Link: https://lnkd.in/gPyRUtSG
💡 Problem Breakdown: Identify golden hour customers who:
✔ Placed at least 3 orders
✔ Placed ≥ 60% of their orders during peak hours (11:00–14:00 or 18:00–21:00)
✔ Have an average rating ≥ 4.0
29 days of consistent SQL practice completed ✅
Daily practice is turning concepts into intuition 💪
Let’s grow one query at a time 🚀
Drop your approach below 👇
#LeetCode #SQL #DataEngineering #Analytics #CustomerInsights #SQLPractice #WindowFunctions #LearningInPublic #50DaysChallenge #DataAnalytics
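One way the three conditions can fit into a single HAVING clause is with conditional aggregation. This is a hedged sketch, not the author's solution: the orders table, its columns, and the boundary handling (treating the windows as [11:00, 14:00) and [18:00, 21:00)) are all assumptions, and it uses SQLite's strftime rather than MySQL's HOUR():

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical schema; the real problem's table/column names may differ.
conn.executescript("""
CREATE TABLE orders (customer_id INT, order_time TEXT, rating REAL);
INSERT INTO orders VALUES
    (1, '2024-01-01 11:30', 5.0),
    (1, '2024-01-02 12:15', 4.5),
    (1, '2024-01-03 09:00', 4.0),
    (2, '2024-01-01 08:00', 3.0),
    (2, '2024-01-02 09:30', 3.5),
    (2, '2024-01-03 10:00', 2.5);
""")

# AVG over a 0/1 flag yields the share of peak-hour orders directly.
query = """
SELECT customer_id
FROM orders
GROUP BY customer_id
HAVING COUNT(*) >= 3
   AND AVG(CASE WHEN CAST(strftime('%H', order_time) AS INT) BETWEEN 11 AND 13
                  OR CAST(strftime('%H', order_time) AS INT) BETWEEN 18 AND 20
                THEN 1.0 ELSE 0.0 END) >= 0.6
   AND AVG(rating) >= 4.0;
"""
print(conn.execute(query).fetchall())  # [(1,)]
```

Customer 1 qualifies (3 orders, 2/3 ≈ 67% in peak hours, average rating 4.5); customer 2 fails the peak-hour and rating thresholds.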
🚀 Day 30/100 – LeetCode SQL Challenge
✅ Problem Solved: Reformat Department Table
Today’s problem focused on transforming row-based data into a structured column format — a common real-world data handling task.
🔍 What I Learned:
How to use CASE WHEN in SQL for conditional aggregation
Converting rows into columns (pivoting data)
The importance of GROUP BY for summarizing data
Handling missing values with NULL
💡 My Approach:
Grouped the data by department id
Used the SUM(CASE WHEN month = 'Jan' THEN revenue END) pattern
Repeated this logic for all 12 months
Ensured each month becomes a separate column
📊 This problem helped me understand how SQL can reshape data effectively for reporting and dashboards.
🔥 Consistency is the key — 30 days done, 70 more to go!
#Day30 #100DaysOfCode #LeetCode #SQL #DataAnalytics #CodingJourney #PlacementPreparation
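The SUM(CASE WHEN …) pivot pattern described above looks like this, trimmed to two months for brevity (the remaining ten columns repeat the same expression). A minimal sketch run against SQLite rather than MySQL, using the problem's Department(id, revenue, month) schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Department (id INT, revenue INT, month TEXT);
INSERT INTO Department VALUES
    (1, 8000,  'Jan'),
    (2, 9000,  'Jan'),
    (3, 10000, 'Feb'),
    (1, 7000,  'Feb'),
    (1, 6000,  'Mar');
""")

# Each CASE WHEN routes a row's revenue into exactly one output column;
# SUM over a group with no matching rows yields NULL, covering missing months.
query = """
SELECT id,
       SUM(CASE WHEN month = 'Jan' THEN revenue END) AS Jan_Revenue,
       SUM(CASE WHEN month = 'Feb' THEN revenue END) AS Feb_Revenue
       -- ...repeat the same pattern for Mar through Dec
FROM Department
GROUP BY id
ORDER BY id;
"""
print(conn.execute(query).fetchall())
# [(1, 8000, 7000), (2, 9000, None), (3, None, 10000)]
```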
❄️ LeetCode Daily Challenge
📅 Day 40 of 50 Days SQL Challenge
Today’s challenge focused on subscription funnel analysis — a common real-world business analytics use case 📈
📌 Problem: Analyze Subscription Conversion
🔗 Problem Link: https://lnkd.in/gSEZWxJb
💡 Problem Breakdown: Identify users who converted from a free trial to a paid subscription:
✔ Find users who moved from free_trial to paid
✔ Calculate the average daily activity duration during the free trial
✔ Calculate the average daily activity duration during the paid period
✔ Round both averages to 2 decimal places
40 days of consistent SQL practice completed ✅
Daily practice is turning concepts into intuition 💪
Let’s grow one query at a time 🚀
💬 How would you approach this? A single query with conditional aggregation, or separate CTEs? Drop your thoughts below 👇
#LeetCode #SQL #DataEngineering #AzureDataEngineer #ProductAnalytics #SQLPractice #LearningInPublic #50DaysChallenge #DataAnalytics
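The single-query conditional-aggregation variant can be sketched roughly as below. The user_activity table and its columns are assumptions (the real problem's schema may differ), and the `SUM(activity_type = '…')` trick relies on SQLite/MySQL treating booleans as 0/1:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical schema for illustration only.
conn.executescript("""
CREATE TABLE user_activity (user_id INT, activity_type TEXT, activity_duration REAL);
INSERT INTO user_activity VALUES
    (1, 'free_trial', 4.0),
    (1, 'free_trial', 6.0),
    (1, 'paid',       5.0),
    (2, 'free_trial', 9.0);   -- never converted, should be excluded
""")

# Conditional AVG splits one pass over the table into per-phase averages;
# HAVING keeps only users seen in BOTH phases, i.e. converted users.
query = """
SELECT user_id,
       ROUND(AVG(CASE WHEN activity_type = 'free_trial' THEN activity_duration END), 2) AS trial_avg,
       ROUND(AVG(CASE WHEN activity_type = 'paid'       THEN activity_duration END), 2) AS paid_avg
FROM user_activity
GROUP BY user_id
HAVING SUM(activity_type = 'paid') > 0
   AND SUM(activity_type = 'free_trial') > 0;
"""
print(conn.execute(query).fetchall())  # [(1, 5.0, 5.0)]
```

The CTE alternative would compute the two phase averages in separate CTEs and join them, which reads more clearly but scans the table twice.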
☀️ Hello mates!!
Day 02 of solving the LeetCode SQL 50 (02/50)
Today's problem is #584: "Find Customer Referee" 👨💻
This is an Easy-level problem that focuses on filtering data using SQL conditions. I used the WHERE clause along with proper NULL handling, and it successfully passed all the test cases ✅️
How my solution works:
-> The WHERE clause selects customers whose referee_id is not equal to 2.
-> The OR condition ensures that customers with no referee (NULL values) are also included in the result.
-> Since the problem doesn't require any joins or aggregations, the query stays simple, clean, and easy to understand.
-> This approach reflects real-world SQL querying practices used in data analytics and data engineering tasks.
Database Used: MySQL
See you all tomorrow with another exciting LeetCode problem!
#LeetCode #SQL50 #SQLQuery #DataAnalytics #Database #Coding #SQLPractice
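A minimal sketch of that query, run against an in-memory SQLite copy of the problem's Customer table (the post used MySQL, but the NULL behavior is the same). The key point: `referee_id <> 2` evaluates to NULL, not true, for rows with no referee, so those rows need the explicit IS NULL branch to survive the filter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customer (id INT, name TEXT, referee_id INT);
INSERT INTO Customer VALUES
    (1, 'Will', NULL), (2, 'Jane', NULL), (3, 'Alex', 2),
    (4, 'Bill', NULL), (5, 'Zack', 1),    (6, 'Mark', 2);
""")

# Without "OR referee_id IS NULL", Will, Jane, and Bill would be dropped,
# because NULL <> 2 is NULL and NULL is not true.
query = """
SELECT name
FROM Customer
WHERE referee_id <> 2 OR referee_id IS NULL;
"""
names = [row[0] for row in conn.execute(query)]
print(sorted(names))  # ['Bill', 'Jane', 'Will', 'Zack']
```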
❄️ LeetCode Daily Challenge
📅 Day 30 of 50 Days SQL Challenge
Today’s challenge was a deep dive into subscription analytics and churn prediction — something very relevant to real-world data scenarios.
📌 Problem: Find Churn Risk Customers
🔗 Problem Link: https://lnkd.in/dw95yEPj
💡 Problem Breakdown: Identify churn risk customers who:
✔ Currently have an active subscription (their last event is not a cancel)
✔ Have at least one downgrade in their history
✔ Have a current plan revenue < 50% of their historical maximum
✔ Have been subscribed for at least 60 days
30 days of consistent SQL practice completed ✅
Daily practice is turning concepts into intuition 💪
Let’s grow one query at a time 🚀
Drop your approach below 👇
#LeetCode #SQL #DataEngineering #Analytics #ChurnAnalysis #CustomerRetention #SQLPractice #WindowFunctions #LearningInPublic #50DaysChallenge
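One possible shape for this query, combining a window function for the latest event with a grouped history CTE. Everything here is an assumption for illustration: the subscription_events table, its columns, and the reading of "subscribed for at least 60 days" as the span between first and latest event:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical schema; the real problem almost certainly differs in detail.
conn.executescript("""
CREATE TABLE subscription_events
    (customer_id INT, event_type TEXT, plan_revenue REAL, event_date TEXT);
INSERT INTO subscription_events VALUES
    (1, 'start',     100, '2024-01-01'),
    (1, 'downgrade',  40, '2024-04-01'),  -- active, downgraded, 40 < 50% of 100
    (2, 'start',     100, '2024-01-01'),
    (2, 'cancel',      0, '2024-03-01');  -- last event is cancel -> excluded
""")

query = """
WITH latest AS (
    SELECT customer_id, event_type, plan_revenue,
           ROW_NUMBER() OVER (PARTITION BY customer_id
                              ORDER BY event_date DESC) AS rn
    FROM subscription_events
),
history AS (
    SELECT customer_id,
           MAX(plan_revenue)             AS max_revenue,
           SUM(event_type = 'downgrade') AS downgrades,
           JULIANDAY(MAX(event_date)) - JULIANDAY(MIN(event_date)) AS days_subscribed
    FROM subscription_events
    GROUP BY customer_id
)
SELECT l.customer_id
FROM latest l
JOIN history h ON h.customer_id = l.customer_id
WHERE l.rn = 1                           -- most recent event per customer
  AND l.event_type <> 'cancel'           -- still active
  AND h.downgrades >= 1                  -- at least one downgrade
  AND l.plan_revenue < 0.5 * h.max_revenue
  AND h.days_subscribed >= 60;
"""
print(conn.execute(query).fetchall())  # [(1,)]
```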
❄️ LeetCode Daily Challenge
📅 Day 21 of 50 Days SQL Challenge
Continuing my SQL consistency journey with a Medium-level problem focused on joins and conditional aggregation.
📌 Problem: Market Analysis I
🔗 Problem Link: https://lnkd.in/gK6uAEEJ
💡 Problem Summary: For each user, we need to find:
✔ Their join date
✔ The number of orders they made as a buyer in 2019
Important:
Include users with 0 orders
Count only orders from 2019
21 days of SQL practice done ✅
Consistency is building real confidence now 💪
Let’s grow one query at a time 🚀
#LeetCode #SQL #DataEngineering #Analytics #Database #WindowFunctions #DailyPractice #LearningInPublic #50DaysChallenge
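A sketch of the standard approach to this problem (not necessarily the author's), using simplified versions of the problem's Users and Orders tables. The crucial detail is that the 2019 filter lives in the ON clause, not in WHERE: a WHERE filter would turn the LEFT JOIN into an inner join and drop the 0-order users:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Users  (user_id INT, join_date TEXT);
CREATE TABLE Orders (order_id INT, order_date TEXT, buyer_id INT);
INSERT INTO Users  VALUES (1, '2018-01-01'), (2, '2019-02-09');
INSERT INTO Orders VALUES (1, '2019-08-01', 1), (2, '2018-08-02', 1);
""")

# COUNT(o.order_id) ignores the NULLs produced by the LEFT JOIN,
# so users with no 2019 orders correctly show a count of 0.
query = """
SELECT u.user_id AS buyer_id,
       u.join_date,
       COUNT(o.order_id) AS orders_in_2019
FROM Users u
LEFT JOIN Orders o
       ON o.buyer_id = u.user_id
      AND o.order_date BETWEEN '2019-01-01' AND '2019-12-31'
GROUP BY u.user_id, u.join_date
ORDER BY u.user_id;
"""
print(conn.execute(query).fetchall())
# [(1, '2018-01-01', 1), (2, '2019-02-09', 0)]
```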
🧹 Day 14: Clean Data is Happy Data!
Today’s SQL session was all about string manipulation. In the real world, text data is rarely perfect: it has extra spaces, inconsistent casing, or needs to be combined for better reporting.
My toolkit for today:
UPPER/LOWER: Standardizing categories for consistent filtering.
TRIM: Removing those pesky leading and trailing spaces that break queries.
CONCAT: Merging columns (like Product + Category) into readable strings.
REPLACE: Bulk-updating text patterns (e.g., changing "phone" to "device").
SUBSTRING/LEFT/RIGHT: Extracting specific parts of a string for shorter identifiers.
Formatting might seem small, but it’s the difference between a messy database and a professional report! 📊
#SQL #DataCleaning #DataAnalytics #100DaysOfCode #DatabaseManagement #TechSkills
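The toolkit above in one query. Note the dialect caveat: this runs on SQLite, which uses `||` for concatenation (CONCAT in MySQL) and SUBSTR instead of SUBSTRING/LEFT/RIGHT, but the cleaning logic is the same:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
row = conn.execute("""
SELECT UPPER(TRIM('  smart phone  '))                  AS cleaned,    -- casing + spaces
       'Pixel' || ' / ' || 'Electronics'               AS combined,   -- CONCAT in MySQL
       REPLACE('smart phone case', 'phone', 'device')  AS replaced,   -- bulk text swap
       SUBSTR('SKU-12345-XL', 5, 5)                    AS extracted   -- pull out the numeric id
""").fetchone()
print(row)
# ('SMART PHONE', 'Pixel / Electronics', 'smart device case', '12345')
```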
🚀 Day 86 of My 100 Days Data Analysis Journey
If you only use SQL to query data, you’re barely scratching the surface. There’s a deeper layer most beginners don’t see early enough.
SQL isn’t just about pulling data… it’s about designing how data lives.
Today’s focus shifted to:
Creating structured tables
Defining PRIMARY KEYs for uniqueness
Linking tables using FOREIGN KEYs
Applying constraints to maintain clean, reliable data
Because here’s what changes everything: well-structured data makes analysis easy. Poorly structured data makes even simple queries painful.
At this stage, it stops being about syntax… and starts becoming about thinking in systems. That’s the shift. 💡
#DataAnalytics #SQL #DatabaseDesign #LearningInPublic #100DaysOfCode #TechJourney
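A small DDL sketch of those ideas (illustrative schema, not from the post), using SQLite. The payoff of constraints is visible immediately: a row that points at a nonexistent department is rejected at write time, so bad data never reaches the analysis stage:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.executescript("""
CREATE TABLE departments (
    dept_id   INTEGER PRIMARY KEY,       -- uniqueness guaranteed
    dept_name TEXT NOT NULL UNIQUE
);
CREATE TABLE employees (
    emp_id  INTEGER PRIMARY KEY,
    name    TEXT NOT NULL,
    salary  REAL CHECK (salary >= 0),    -- constraint keeps values sane
    dept_id INTEGER REFERENCES departments(dept_id)  -- FOREIGN KEY link
);
INSERT INTO departments VALUES (1, 'Analytics');
INSERT INTO employees   VALUES (10, 'Ada', 90000, 1);
""")

# The FOREIGN KEY rejects an employee in a department that doesn't exist.
try:
    conn.execute("INSERT INTO employees VALUES (11, 'Bob', 50000, 99)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```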
I used to handle running totals and rankings by self-joining tables back to themselves. It was messy, the performance was usually terrible, and it made the queries unreadable for anyone else on the team.
Then I finally stopped ignoring window functions. The transition from aggregating/grouping to windowing is probably the biggest jump in productivity you can make in SQL.
The difference is simple:
GROUP BY collapses your data. You lose the individual row details to get the summary.
Window functions keep your data alive. They let you peek at the total, the previous row, or the next row without destroying the granularity of your original table.
My daily-driver list for pipelines:
LAG() / LEAD(): Essential for calculating time deltas between user events (like session duration).
DENSE_RANK(): The only clean way to handle ties when identifying top performers or latest records.
SUM() OVER(): The cleanest way to get a running total without a self-join in sight.
ROW_NUMBER(): Still the best way to deduplicate data in an ETL pipeline.
If you are still struggling with them, don't focus on the syntax. Focus on the frame.
PARTITION BY is just saying: "Reset the calculation here."
ORDER BY is just saying: "The order matters for this specific calculation."
Once you visualize the "frame" moving across your rows, the mystery disappears.
What was the specific problem that finally forced you to learn window functions? (For me, it was trying to calculate sessionization on web logs.)
#DataEngineering #SQL #Analytics #DataPipeline #LearningInPublic
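The "data stays alive" point can be seen in a tiny example (invented events table, run on SQLite): each input row survives in the output, carrying a running total and the time delta to the previous event, with no self-join anywhere:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (user_id INT, ts INT, amount INT);
INSERT INTO events VALUES
    (1, 10, 5), (1, 20, 3), (1, 45, 2),
    (2, 10, 7), (2, 90, 1);
""")

# PARTITION BY resets the window per user; ORDER BY makes the SUM
# cumulative and tells LAG which row counts as "previous".
rows = conn.execute("""
SELECT user_id, ts,
       SUM(amount) OVER (PARTITION BY user_id ORDER BY ts)  AS running_total,
       ts - LAG(ts) OVER (PARTITION BY user_id ORDER BY ts) AS delta
FROM events
ORDER BY user_id, ts
""").fetchall()
for r in rows:
    print(r)
# (1, 10, 5, None)
# (1, 20, 8, 10)
# (1, 45, 10, 25)
# (2, 10, 7, None)
# (2, 90, 8, 80)
```

Note the first row of each partition has a NULL delta: LAG has nothing to look back at, which is exactly the sessionization boundary signal.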
🚀 Day 19/50 – LeetCode SQL Challenge (Queries Quality and Percentage)
Today’s problem focused on analyzing query performance by calculating quality and identifying poor queries using SQL.
📊 Key Concepts Used:
• Aggregation (AVG, COUNT)
• CASE WHEN (conditional logic)
• Ratio calculation (rating / position)
• The ROUND() function
💡 Approach:
Calculated query quality using AVG(rating / position)
Used CASE WHEN to flag poor queries (rating < 3)
Computed the poor-query percentage as the average of those conditional 0/1 values
Rounded both metrics to 2 decimal places for readability
👉 Learned how to convert conditions into numerical values (0/1) and use AVG to calculate percentages directly.
🧠 Key Learnings:
The AVG of a 0/1 flag is an efficient way to calculate a percentage
Combining aggregation with conditional logic simplifies complex problems
Clean formatting (ROUND) makes results more professional
📈 Consistency in practice is building stronger SQL fundamentals every day
#SQL #DataAnalytics #DataAnalyst #LearningInPublic #30DaysChallenge #SQLPractice #Analytics #CareerGrowth #Consistency #DataScience
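The AVG-of-0/1 trick in action, as a minimal sketch against the problem's Queries table (run on SQLite; `rating * 1.0` forces float division, the SQLite/MySQL analogue of avoiding integer division):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Queries (query_name TEXT, result TEXT, position INT, rating INT);
INSERT INTO Queries VALUES
    ('Dog', 'Golden Retriever', 1, 5),
    ('Dog', 'German Shepherd',  2, 5),
    ('Dog', 'Mule',           200, 1),
    ('Cat', 'Shirazi',          5, 2),
    ('Cat', 'Siamese',          3, 3),
    ('Cat', 'Sphynx',           7, 4);
""")

# AVG over the 0/1 CASE flag is the fraction of poor queries;
# multiplying by 100 turns that fraction directly into a percentage.
query = """
SELECT query_name,
       ROUND(AVG(rating * 1.0 / position), 2)                          AS quality,
       ROUND(AVG(CASE WHEN rating < 3 THEN 1.0 ELSE 0.0 END) * 100, 2) AS poor_query_percentage
FROM Queries
GROUP BY query_name
ORDER BY query_name;
"""
print(conn.execute(query).fetchall())
# [('Cat', 0.66, 33.33), ('Dog', 2.5, 33.33)]
```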