🚀 Day 41 of My SQL Learning Journey

Today I solved a SQL problem involving self joins and missing relationships 🔥

🔹 Problem: Find employees whose managers are no longer in the company
🔗 Problem Link: https://lnkd.in/gwXAzFJm

🔹 Solution:

```sql
SELECT e1.employee_id
FROM Employees e1
LEFT JOIN Employees e2
  ON e1.manager_id = e2.employee_id
WHERE e1.manager_id IS NOT NULL
  AND e2.employee_id IS NULL;
```

🔹 Key Learning:
- Using self joins to compare records
- Handling missing relationships using LEFT JOIN
- Detecting NULL values effectively

💡 SQL helps uncover hidden data relationships! Consistency continues 🚀

#SQL #LeetCode #90DaysOfCode #CodingJourney #ProblemSolving
---

**More Relevant Posts**

---
✅ Solved a SQL problem on LeetCode — Day 49 of my SQL Journey 💪

Consistency isn’t always visible… but patterns reveal it over time 📈

Today’s problem was about identifying users with persistent behaviour — not just active users, but those who show the same action consistently across days.

I worked on analysing behaviour patterns to:
• Track user actions across consecutive days
• Group continuous activity using ROW_NUMBER() logic
• Identify streaks by adjusting date gaps
• Filter users with meaningful streak length (≥ 5 days)
• Select the strongest pattern per user using RANK()

What I practised:
• Window functions for streak detection
• Handling sequential data with date logic
• GROUP BY + HAVING for filtering patterns
• Translating consistency into measurable conditions

What stood out — activity can be random… consistency is never random. When actions repeat over time, they stop being noise and become behaviour. That’s where real insights start.

SQL doesn’t just analyse data. It uncovers patterns hidden in time.

Consistent learning, one query at a time 🚀

#SQL #LeetCode #SQLPractice #DataAnalytics #LearningInPublic
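The streak logic described above is the classic "gaps and islands" pattern. A minimal MySQL-style sketch, assuming a hypothetical `UserActivity(user_id, action, activity_date)` table (the actual problem schema may differ):

```sql
WITH numbered AS (
    SELECT
        user_id, action, activity_date,
        -- Consecutive dates minus a running row number share one anchor
        -- date, so each unbroken streak collapses to a single grp value
        DATE_SUB(activity_date,
                 INTERVAL ROW_NUMBER() OVER (
                     PARTITION BY user_id, action
                     ORDER BY activity_date) DAY) AS grp
    FROM UserActivity
),
streaks AS (
    SELECT
        user_id, action, COUNT(*) AS streak_len,
        RANK() OVER (PARTITION BY user_id ORDER BY COUNT(*) DESC) AS rnk
    FROM numbered
    GROUP BY user_id, action, grp
    HAVING COUNT(*) >= 5              -- keep meaningful streaks only
)
SELECT user_id, action, streak_len
FROM streaks
WHERE rnk = 1;                        -- strongest pattern per user
```

Note that HAVING is evaluated before window functions, so RANK() only ranks the streaks that already passed the ≥ 5 filter, matching the order of steps in the post.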
---
🚀 Day 40 of My SQL Learning Journey

Today I worked on a SQL problem involving aggregation and string manipulation 🔥

🔹 Problem: Group sold products by date and list them in sorted order
🔗 Problem Link: https://lnkd.in/gp6qVmNt

🔹 Solution:

```sql
SELECT
    sell_date,
    COUNT(DISTINCT product) AS num_sold,
    GROUP_CONCAT(DISTINCT product ORDER BY product SEPARATOR ',') AS products
FROM Activities
GROUP BY sell_date;
```

🔹 Key Learning:
- Using COUNT(DISTINCT) for unique values
- Combining rows using GROUP_CONCAT()
- Sorting values inside aggregation

💡 SQL can generate clean reports directly from raw data! Consistency continues 🚀

#SQL #LeetCode #90DaysOfCode #DataAnalytics #CodingJourney
---
Day 1 of my 30 Days SQL Challenge 🚀

Finally starting something I was planning for a long time… Today I solved 2 SQL problems and focused more on understanding the logic rather than just writing queries 👇

🔹 **Problem 1: Recyclable & Low Fat Products**
👉 Task: Find product IDs that are both low fat and recyclable

💡 **What I learned:**
* How to use multiple conditions together using `AND`
* The importance of filtering data correctly based on requirements
* Even a simple query like this builds the foundation for complex data filtering in real-world datasets

---

🔹 **Problem 2: Find Customer Referee**
👉 Task: Find customers who are not referred by id = 2 or have no referee

💡 **What I learned:**
* Handling `NULL` values using `IS NULL` (because `=` doesn’t work with NULL)
* The difference between `!=` and `IS NULL` conditions
* How to combine multiple conditions using `OR`
* Realized that a missing NULL condition can give wrong results (a very common mistake!)

---

✨ **Key takeaway from Day 1:**
Today made me realize how easy it is to miss small conditions in SQL, and how much they actually affect the final result.

Consistency > Perfection 🚀

#Day1 #SQLChallenge #LearningInPublic #SQL #DataAnalytics #CodingJourney #WomenInTech #100DaysOfCode #CareerGrowth
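For reference, the two queries behind these lessons look roughly like this (column names follow the usual LeetCode schemas for these two problems, so treat them as assumptions and verify against the problem statements):

```sql
-- Problem 1: both conditions must hold, so AND is the right connector
SELECT product_id
FROM Products
WHERE low_fats = 'Y'
  AND recyclable = 'Y';

-- Problem 2: rows where referee_id IS NULL fail the <> 2 test
-- (NULL comparisons yield UNKNOWN), so the IS NULL branch is required
SELECT name
FROM Customer
WHERE referee_id <> 2
   OR referee_id IS NULL;
```

Dropping the `IS NULL` branch in the second query silently excludes every customer with no referee, which is exactly the "missing NULL condition" mistake the post warns about.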
---
🚀 Day 44 of My SQL Learning Journey

Today I solved a SQL problem involving aggregation and distinct counting 🔥

🔹 Problem: Find the number of unique leads and partners per day
🔗 Problem Link: https://lnkd.in/gXnubrbd

🔹 Solution:

```sql
SELECT
    date_id,
    make_name,
    COUNT(DISTINCT lead_id) AS unique_leads,
    COUNT(DISTINCT partner_id) AS unique_partners
FROM DailySales
GROUP BY date_id, make_name;
```

🔹 Key Learning:
- Using multiple COUNT(DISTINCT) expressions
- Grouping by multiple columns
- Real-world reporting queries

💡 SQL helps transform raw data into meaningful insights! Consistency continues 🚀

#SQL #LeetCode #90DaysOfCode #CodingJourney #DataAnalytics
---
✅ Solved a SQL problem on LeetCode — Day 39 of my SQL Journey 💪

Some customers order anytime… but some follow patterns you can actually measure 📊

Today’s problem: identify “Golden Hour Customers” — people who consistently order during peak hours and leave high ratings.

I used a single-pass aggregation query to:
• Count total orders per customer
• Flag orders placed during peak hours ⏰
• Calculate the peak-hour percentage
• Compute the average rating while handling NULLs
• Filter customers across all conditions at once

What I practised:
• CASE WHEN inside aggregations
• Applying multiple conditions with HAVING
• Time-based filtering using HOUR()
• Combining behavioural and quality metrics in one query

What stood out — good customers aren’t just active… they’re consistent in when and how they engage. That’s where the real insight lies.

SQL isn’t just about breaking problems into steps. Sometimes it’s about bringing everything together — cleanly.

Consistent learning, one query at a time 🚀

#SQL #LeetCode #SQLPractice #DataAnalytics #LearningInPublic
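A single-pass version of the steps above might look like this. Everything here is an assumption for illustration: the `orders(customer_id, order_time, rating)` schema, the 18:00–21:00 peak window, and the thresholds are made up, not the actual problem’s values:

```sql
SELECT
    customer_id,
    COUNT(*) AS total_orders,
    -- CASE WHEN inside an aggregate flags peak-hour orders in one pass
    SUM(CASE WHEN HOUR(order_time) BETWEEN 18 AND 21
             THEN 1 ELSE 0 END) AS peak_orders,
    AVG(rating) AS avg_rating          -- AVG() ignores NULL ratings
FROM orders
GROUP BY customer_id
HAVING COUNT(*) >= 3
   AND SUM(CASE WHEN HOUR(order_time) BETWEEN 18 AND 21
                THEN 1 ELSE 0 END) / COUNT(*) >= 0.5   -- peak-hour share
   AND AVG(rating) >= 4.0;
```

Because every condition lives in one HAVING clause, no subqueries or joins are needed, which is the "bringing everything together" point the post makes.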
---
✅ Solved a SQL problem on LeetCode — Day 40 of my SQL Journey 💪

Not every customer leaves suddenly… some show signs before they churn ⚠️

Today’s problem was about identifying churn-risk customers — users who are still active but showing warning behaviour.

I used aggregation and event analysis to:
• Track each user’s latest subscription status
• Identify downgrade behaviour over time
• Compare current spend with the historical maximum
• Measure total subscription duration
• Filter users based on combined risk signals

What I practised:
• Working with event-based data using GROUP BY
• Using CASE WHEN to capture behavioural signals
• Extracting latest values with ordered aggregation
• Applying multiple conditions to detect patterns

What stood out — churn doesn’t happen instantly… it builds up through small changes. A downgrade here, a drop in spending there. That’s where the real insight lies.

SQL isn’t just about analysing what happened. It’s about spotting what might happen next.

Consistent learning, one query at a time 🚀

#SQL #LeetCode #SQLPractice #DataAnalytics #LearningInPublic
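A MySQL-flavoured sketch of those signals, under loud assumptions: the `subscription_events(user_id, event_date, plan_level, amount)` schema is hypothetical, `plan_level` is assumed numeric (higher = better), and the 90-day threshold is invented:

```sql
WITH events AS (
    SELECT
        user_id, event_date, plan_level, amount,
        -- previous plan per user, to spot downgrades over time
        LAG(plan_level) OVER (PARTITION BY user_id
                              ORDER BY event_date) AS prev_plan
    FROM subscription_events
)
SELECT
    user_id,
    MAX(CASE WHEN plan_level < prev_plan THEN 1 ELSE 0 END) AS downgraded,
    MAX(amount) AS max_spend,
    -- "ordered aggregation" trick: first element of the list
    -- ordered by latest date = the current spend
    SUBSTRING_INDEX(GROUP_CONCAT(amount ORDER BY event_date DESC),
                    ',', 1) AS current_spend,
    DATEDIFF(MAX(event_date), MIN(event_date)) AS tenure_days
FROM events
GROUP BY user_id
HAVING downgraded = 1
   AND current_spend < max_spend
   AND tenure_days >= 90;
```

Referencing SELECT aliases in HAVING is a MySQL extension; stricter engines would need the expressions repeated.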
---
✅ Solved a SQL problem on LeetCode — Day 46 of my SQL Journey 💪

Customers don’t just buy products… they buy patterns together 🛒

Today’s problem was about finding product categories that are frequently bought together — the logic behind “you might also like.”

The approach:
• Mapped each user to their purchased categories
• Used a self-join to create category pairs
• Applied category1 < category2 to avoid duplicate pairs
• Counted shared users per pair and filtered for meaningful ones

What I practised:
• Self joins to relate data within the same dataset
• Using DISTINCT for accurate counting
• Applying GROUP BY with HAVING
• Translating real behaviour into SQL logic

What stood out — recommendations aren’t random… they’re built on repeated patterns. When enough users repeat the same combination, it becomes a signal. That’s where the real insight lies.

SQL doesn’t just store transactions. It uncovers relationships between them.

Consistent learning, one query at a time 🚀

#SQL #LeetCode #DataAnalytics #LearningInPublic #SQLPractice
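The pair-building step can be sketched as a self-join. The `purchases(user_id, category)` schema and the minimum of 3 shared users are assumptions for illustration:

```sql
SELECT
    a.category AS category1,
    b.category AS category2,
    COUNT(DISTINCT a.user_id) AS shared_users
FROM purchases a
JOIN purchases b
  ON a.user_id = b.user_id
 AND a.category < b.category      -- keeps (X,Y), drops the mirror (Y,X)
GROUP BY a.category, b.category
HAVING COUNT(DISTINCT a.user_id) >= 3;   -- only meaningful pairs
```

The `<` comparison in the join condition is what the post means by "category1 < category2": it both removes self-pairs like (X,X) and keeps only one ordering of each pair.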
---
🚀 Day 42/100 – LeetCode SQL Practice

✅ Problem Solved: Average Time of Process per Machine

💡 My Approach:
Each process has a start and an end timestamp. I joined the table with itself to match:
👉 each start row with its corresponding end row (same machine_id & process_id)

Then calculated:
👉 Time = end - start

Finally:
👉 Took the average time per machine using AVG()
👉 Used ROUND(..., 3) to format the output to 3 decimal places

🧠 What I Learned Today:
- How to use a SELF JOIN in SQL
- How to pair related rows (start & end)
- Using aggregate functions like AVG()
- The importance of formatting output with ROUND()
- Writing clean queries with GROUP BY

📈 Key Takeaway: “Break the problem → pair related data → compute → aggregate.”

#Day42 #100DaysOfCode #LeetCodeSQL #SQL #DataAnalytics #CodingJourney
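The pairing described above can be written as a single self-join. Column names follow this problem’s usual `Activity(machine_id, process_id, activity_type, timestamp)` table on LeetCode; double-check them against the problem statement:

```sql
SELECT
    s.machine_id,
    ROUND(AVG(e.timestamp - s.timestamp), 3) AS processing_time
FROM Activity s
JOIN Activity e
  ON s.machine_id = e.machine_id       -- same machine...
 AND s.process_id = e.process_id       -- ...and same process
 AND s.activity_type = 'start'
 AND e.activity_type = 'end'
GROUP BY s.machine_id;
```

Each joined row is one (start, end) pair, so `e.timestamp - s.timestamp` is one process duration, and AVG() over the group gives the per-machine average.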
---
✅ Solved a SQL problem on LeetCode — Day 48 of my SQL Journey 💪

Not all active users are valuable… some just look busy 👀

Today’s problem was about identifying zombie sessions — users who scroll endlessly but never click, never buy, never engage. Active on paper. Dead in reality.

The approach:
• Calculated session duration using MIN() and MAX() timestamps
• Counted scrolls and clicks with conditional aggregation
• Computed the click-to-scroll ratio inside HAVING
• Filtered sessions: duration > 30 min, scrolls ≥ 5, ratio < 0.20, zero purchases

What I practised:
• Conditional aggregation using CASE WHEN
• Session-level grouping
• Ratio logic inside HAVING
• Translating behaviour into SQL filters

What stood out — scrolling is passive. Clicking is intent. Buying is a commitment. A session full of scrolls but no clicks is just a ghost. That’s where the real insight lies.

SQL doesn’t just count events. It reads the story behind them.

Consistent learning, one query at a time 🚀

#SQL #LeetCode #DataAnalytics #LearningInPublic #SQLPractice
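Those four filters can be combined in one grouped query. A MySQL-style sketch, assuming a hypothetical `events(session_id, event_type, event_time)` table:

```sql
SELECT
    session_id,
    TIMESTAMPDIFF(MINUTE, MIN(event_time), MAX(event_time)) AS duration_min,
    SUM(CASE WHEN event_type = 'scroll' THEN 1 ELSE 0 END) AS scrolls,
    SUM(CASE WHEN event_type = 'click'  THEN 1 ELSE 0 END) AS clicks
FROM events
GROUP BY session_id
HAVING duration_min > 30                  -- long session...
   AND scrolls >= 5                       -- ...lots of scrolling...
   AND clicks / scrolls < 0.20            -- ...barely any clicking...
   AND SUM(CASE WHEN event_type = 'purchase'
            THEN 1 ELSE 0 END) = 0;       -- ...and zero purchases
```

As in the churn sketch, using SELECT aliases inside HAVING is a MySQL convenience; other engines would need the full expressions repeated there.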
---
🚀 Day 34/100 – SQL Practice (LeetCode)

Today I solved the “Daily Leads and Partners” problem.

🔹 Approach:
- Identified that the dataset contains duplicate records
- The goal was to calculate unique leads and unique partners for each date_id and make_name
- Used GROUP BY to organize the data and COUNT(DISTINCT ...) to count only unique values

🔹 SQL Solution:

```sql
SELECT
    date_id,
    make_name,
    COUNT(DISTINCT lead_id) AS unique_leads,
    COUNT(DISTINCT partner_id) AS unique_partners
FROM DailySales
GROUP BY date_id, make_name;
```

🔹 What I Learned:
✔️ The difference between DISTINCT and COUNT(DISTINCT)
✔️ Handling duplicate data efficiently
✔️ The importance of grouping before aggregation
✔️ Writing cleaner, more optimized SQL queries

💡 Key Takeaway:
👉 When you’re dealing with duplicates and need unique counts, COUNT(DISTINCT column) is your best friend.

Consistency + Practice = Growth 📈

#Day34 #100DaysOfCode #SQL #LeetCode #DataAnalytics #LearningJourney