Day 13: 90-Day Coding Challenge 🚀

Today I worked on a classic SQL problem: identifying users who logged in for N consecutive days. At first glance this looks like a simple aggregation problem, but the real challenge is detecting continuous sequences of dates without gaps.

🔍 Approach I used:
• Leveraged window functions like ROW_NUMBER()
• Created a grouping key by subtracting the row number from the login date, so consecutive days share the same key
• Aggregated on this derived key to identify continuous streaks
• Filtered users whose streak length is ≥ N

💡 Key Insight: Instead of checking each day individually, transforming dates into groups detects consecutive patterns efficiently.

⚡ This is a powerful technique often used in:
• User retention analysis
• Streak tracking (daily active users)
• Behavioral analytics

Time Complexity: O(n log n), due to the sorting behind the window functions.

Today’s learning highlights:
✅ Mastered handling consecutive patterns in SQL
✅ Practiced window functions on real-world scenarios
✅ Improved thinking around sequence detection
✅ Strengthened SQL problem-solving skills

These kinds of problems really show how SQL can go beyond simple queries into analytical problem solving 🔥 Excited for Day 14!

#90DaysOfCode #SQL #WindowFunctions #DataEngineering #Analytics #ProblemSolving #CodingJourney
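A runnable sketch of the grouping trick described above, using Python's built-in sqlite3 module. The `logins` table, its columns, and N = 3 are illustrative assumptions, not the author's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE logins (user_id INTEGER, login_date TEXT);
INSERT INTO logins VALUES
  (1, '2024-01-01'), (1, '2024-01-02'), (1, '2024-01-03'),
  (1, '2024-01-05'),
  (2, '2024-01-01'), (2, '2024-01-03');
""")

N = 3  # required streak length
rows = conn.execute("""
WITH numbered AS (
  SELECT user_id,
         login_date,
         -- Subtracting the row number (in days) from the date yields a
         -- constant key for each run of consecutive days.
         DATE(login_date, '-' || ROW_NUMBER() OVER (
             PARTITION BY user_id ORDER BY login_date) || ' days') AS grp
  FROM (SELECT DISTINCT user_id, login_date FROM logins)
)
SELECT user_id
FROM numbered
GROUP BY user_id, grp
HAVING COUNT(*) >= ?
""", (N,)).fetchall()
print(rows)  # only user 1 has a 3-day streak
```

The DISTINCT subquery guards against double-counted logins on the same day, which would otherwise break the row-number arithmetic.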
SQL Consecutive Days Problem Solving with Window Functions
Small wins don’t look small when you know the effort behind them.

Just hit SQL 50 on LeetCode. Not a huge milestone on paper, but it forced me to slow down, think clearly, and actually understand what I was writing instead of just “making queries work.”

What changed for me wasn’t just solving problems, it was:
• thinking in joins instead of steps
• debugging logic instead of syntax
• realizing how easy it is to get almost-correct answers

Still a long way to go, but this feels like a solid step in the right direction. On to the next 50…

#SQL #LeetCode #DataStructures #ProblemSolving #LearningJourney #Consistency #TechGrowth #SoftwareEngineering #CodingLife #DeveloperMindset #KeepBuilding #100DaysOfCode #DataAnalytics #BackendDevelopment #QueryOptimization #StudentsInTech #FutureEngineer #GrowthMindset #PracticeMakesProgress #TechCareers
🚀 From Overthinking to Clean Logic — SQL Growth Moment!

Solved “Triangle Judgement” on LeetCode, and this one taught me something important 💡

At first I tried solving it with nested CASE statements and multiple conditions. It worked, but it was unnecessarily complex. Then I realized the problem boils down to a simple mathematical rule 👇

👉 A triangle is valid if:
x + y > z
x + z > y
y + z > x

🧠 Final Clean Approach:

```sql
SELECT x, y, z,
       CASE WHEN x + y > z AND x + z > y AND y + z > x
            THEN 'Yes'
            ELSE 'No'
       END AS triangle
FROM Triangle;
```

📊 Result:
✔️ Accepted ✅ (11/11 test cases passed)
✔️ Runtime: 305 ms

🔥 Key Takeaway: Sometimes the best solution isn’t the most complex one; it’s the simplest correct logic. Learning to simplify is just as important as learning to solve 💪

#SQL #LeetCode #CodingJourney #ProblemSolving #Learning #Tech #PlacementPreparation
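If you want to check the triangle-inequality logic locally, the same query runs under Python's built-in sqlite3 module; the sample rows below follow the example from the LeetCode problem statement.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Triangle (x INTEGER, y INTEGER, z INTEGER);
INSERT INTO Triangle VALUES (13, 15, 30), (10, 20, 15);
""")
result = conn.execute("""
SELECT x, y, z,
       CASE WHEN x + y > z AND x + z > y AND y + z > x
            THEN 'Yes' ELSE 'No'
       END AS triangle
FROM Triangle
ORDER BY x
""").fetchall()
print(result)  # (13, 15, 30) fails because 13 + 15 is not > 30
```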
🚀 Mastering Data Structures: Linked Lists Deep Dive 🚀

Let's unravel the mystery of Linked Lists! 🧵🔗 In simple terms, a linked list is a linear data structure where each element is a separate object called a node. The nodes are connected by pointers, forming a chain.

But why should developers care? 🤔 Understanding linked lists is crucial for optimizing memory usage and efficiently managing data, especially when dealing with frequent insertions and deletions. It's a fundamental concept to grasp before tackling more complex data structures.

Here's a breakdown to get you started:
1️⃣ Create a Node class with data and a reference to the next node.
2️⃣ Implement methods for inserting, deleting, and traversing nodes.

```python
class Node:
    def __init__(self, data=None):
        self.data = data  # payload stored in this node
        self.next = None  # reference to the next node (None marks the end)
```

Pro tip: Keep track of the head and tail nodes for faster operations! 🚴

Common mistake alert: Forgetting to update the pointers correctly when inserting or deleting nodes can lead to bugs. 🐞 Double-check your logic!

What's your favorite use case for linked lists? Share below! 💬

🌐 View my full portfolio and more dev resources at tharindunipun.lk

#DataStructures #LinkedLists #CodingBeginners #DeveloperTips #PythonProgramming #MemoryOptimization #CodeOptimization #CodingJourney #LearnToCode
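Building on the Node class above, here is one minimal way the insert/delete/traverse methods from step 2️⃣ could look, keeping head and tail references as the pro tip suggests. This is an illustrative sketch, not the only possible design.

```python
class Node:
    def __init__(self, data=None):
        self.data = data
        self.next = None


class LinkedList:
    """Minimal singly linked list that tracks both head and tail."""

    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, data):
        node = Node(data)
        if self.tail is None:        # empty list: new node is head and tail
            self.head = self.tail = node
        else:
            self.tail.next = node    # link old tail to the new node
            self.tail = node

    def delete(self, data):
        prev, cur = None, self.head
        while cur:
            if cur.data == data:
                if prev is None:
                    self.head = cur.next      # deleting the head
                else:
                    prev.next = cur.next      # bypass the deleted node
                if cur is self.tail:
                    self.tail = prev          # deleting the tail
                return True
            prev, cur = cur, cur.next
        return False

    def to_list(self):
        out, cur = [], self.head
        while cur:
            out.append(cur.data)
            cur = cur.next
        return out


ll = LinkedList()
for x in (1, 2, 3):
    ll.append(x)
ll.delete(2)
print(ll.to_list())  # [1, 3]
```

Note how `delete` is exactly where the "common mistake" bites: both `prev.next` and, when needed, `self.tail` must be updated, or the chain breaks.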
🚀 Day 30 – SQL Journey | LeetCode Practice Day

Today was all about applying SQL concepts by solving real interview-level problems on LeetCode. Instead of just learning theory, I focused on practical problem solving, which helped me strengthen both logic building and query optimization.

🔹 Problems Solved:

📌 185 – Department Top Three Salaries
👉 Concept: Window Functions (DENSE_RANK)
• Finding top salaries within each department

📌 1978 – Employees Whose Manager Left the Company
👉 Concept: Self Join / Filtering
• Identifying employees based on missing manager records

📌 602 – Friend Requests II: Who Has the Most Friends
👉 Concept: Aggregation + GROUP BY
• Counting and comparing relationships

📌 1341 – Movie Rating
👉 Concept: JOIN + Aggregation
• Combining multiple tables and deriving insights

🔹 What I Practiced:
• Writing optimized SQL queries
• Applying real-world logic
• Choosing the right SQL concept for each problem

💡 Insight: Solving problems is where the actual learning happens. Each question improves both logic building and query optimization.

Step by step, getting closer to mastering SQL 🚀

#SQL #DataAnalytics #LearningJourney #SQLPractice #TechJourney #LeetCode
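As a sketch of the DENSE_RANK pattern behind problem 185, here is the per-department top-three idea run through Python's sqlite3. The schema follows the LeetCode problem statement; the sample data is made up and the query is one common approach, not necessarily the author's exact submission.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Employee (id INTEGER, name TEXT, salary INTEGER, departmentId INTEGER);
CREATE TABLE Department (id INTEGER, name TEXT);
INSERT INTO Department VALUES (1, 'IT');
INSERT INTO Employee VALUES
  (1, 'Joe',   85000, 1),
  (2, 'Henry', 80000, 1),
  (3, 'Sam',   60000, 1),
  (4, 'Max',   90000, 1),
  (5, 'Janet', 69000, 1);
""")
rows = conn.execute("""
SELECT Department, Employee, Salary FROM (
  SELECT d.name AS Department,
         e.name AS Employee,
         e.salary AS Salary,
         -- DENSE_RANK restarts at 1 inside each department and gives
         -- tied salaries the same rank, so "top three" means rank <= 3.
         DENSE_RANK() OVER (PARTITION BY e.departmentId
                            ORDER BY e.salary DESC) AS rnk
  FROM Employee e
  JOIN Department d ON e.departmentId = d.id
)
WHERE rnk <= 3
""").fetchall()
print(sorted(rows))  # Max, Joe and Henry make the cut; Janet and Sam do not
```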
Data with Consequences: Building an Automated Penalty System 🎓💸
Day 70/100

Data is just information until you use it to drive action. 🏗️

I’ve hit Day 70 of my #100DaysOfCode journey! After finishing the core SQL modules, I wanted to build something that mirrors real-world administrative systems. Today, I built an Automated Attendance & Fine System that bridges the gap between database queries and business logic.

Technical Highlights:
⚙️ Schema Evolution: Using ALTER TABLE to dynamically add new attributes (Attendance %) to an existing database.
🎯 Conditional Triggers: Fetching the records that fall below a threshold (75% attendance) to initiate processing.
🧮 Algorithmic Penalties: Using Python to calculate dynamic fines based on the gap between current data and the required benchmark.
📊 Reporting: Generating a clean, actionable summary that turns raw database rows into a financial audit.

The Engineering Mindset: Whether it’s a bank charging a late fee or a gym identifying expired memberships, the logic is the same: Query → Analyze → Act.

Do check my GitHub repository here: https://lnkd.in/d9Yi9ZsC

#SQL #Python #100DaysOfCode #BTech #IILM #ComputerScience #AIML #Automation #SoftwareEngineering #LearningInPublic #WomenInTech #DataEngineering
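A minimal sketch of the penalty step described above. The 75% threshold comes from the post; the per-percent fine rate, record shape, and names are hypothetical stand-ins for the author's actual code, which lives in the linked repo.

```python
THRESHOLD = 75          # required attendance %, from the post
FINE_PER_PERCENT = 50   # hypothetical fine per missing percentage point


def compute_fines(records):
    """records: iterable of (name, attendance_pct) pairs.

    Returns {name: fine} for every student below the threshold,
    scaling the fine with the size of the attendance gap.
    """
    fines = {}
    for name, pct in records:
        if pct < THRESHOLD:
            gap = THRESHOLD - pct
            fines[name] = gap * FINE_PER_PERCENT
    return fines


students = [("Asha", 82), ("Ravi", 68), ("Meera", 75)]
print(compute_fines(students))  # {'Ravi': 350}
```

Note the boundary choice: exactly 75% incurs no fine, matching a "below the threshold" reading of the rule.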
Day 36/90

Why does "working" code often fail in production? Because edge cases like N+1 queries, duplicate data, and inaccurate stats don't show up when you're testing with just two users. Today I refactored the Course Module to fix these foundation issues and make sure the backend is actually ready for real-world use.

Day 36 was spent fixing errors in the code and making database queries faster. Instead of adding new parts to the project, the focus was on making sure the existing code works correctly under different conditions. This meant looking at how the database handles information and closing gaps where incorrect data could get through.

Backend:
• Security Hardening: Replaced manual database lookups with self.get_object() to standardize permission and error handling.
• Database Integrity: Implemented a conditional UniqueConstraint that allows re-enrollment while still preventing duplicate active records.
• Performance Gains: Applied select_related and prefetch_related to kill N+1 queries in student and teacher listings.
• SQL Annotations: Offloaded the math for question counts and course statistics to the database using annotations.
• Data Accuracy: Updated counting logic to exclude soft-deleted records, fixing inflated statistics on the dashboard.
• API Reliability: Disabled page-size overrides and used local paginator instances to keep API results consistent with the documentation.
• Data Migration: Created a RunPython migration script to convert legacy string statuses to new character codes without data loss.
• Response Cleanup: Refactored the teacher endpoint to return direct resources and resolved field conflicts in nested serializers.

At what stage of a project do you stop adding new features to focus entirely on refactoring and hardening the code?

#Day90Challenge #Django #Python #Backend #BuildInPublic
Most people use Claude Code like a smarter autocomplete. That's not what it is.

If you structure your repo correctly, Claude Code operates more like a disciplined junior engineer: one that reads the docs before touching anything, follows your conventions, guards against dangerous operations, and leaves a clean audit trail after every session. The difference isn't the model. It's the project structure.

Here's what actually matters:

1. CLAUDE.md — your AI onboarding doc. Client context, architecture diagram, coding conventions, known gaps. Auto-loaded every session.
2. A session brief (read.md) — what today's focus is, what was decided last time, what's locked. Prevents you repeating the same discovery work twice.
3. Slash commands — package your multi-step workflows as markdown files. /add-bronze-object, /add-gold-transform, /check-pipeline-status. One command, done correctly every time.
4. Hooks — Python scripts that intercept Claude before it runs a bash command or writes a file. Block destructive CLI calls. Catch bad SQL. Surface a git diff on exit.
5. Discovery docs — let Claude query your actual source DB and document what it finds. Real column names, real data patterns, real gotchas. No guesswork in the SQL.

I ran this setup on a full Snowflake medallion pipeline — MSSQL source, Bronze → Silver → Gold, 25 objects. 25/25 built. 0 failures. One session.

I also wrote a section on prompt pollution: what happens when vague or exploratory prompts silently contaminate your session context, and why it's so hard to catch. Worth reading if you use any LLM in your data work.

#DataEngineering #SnowflakeDB #ClaudeCode #ETL #ArtificialIntelligence #Python #DataPipeline #MLOps

Full article 👇 https://lnkd.in/gc7tAXDA
Day 21/100 of #100DaysOfCode 💻

Today's problem made me think in trees. 🌳

Tree Node: given a tree structure in a table, classify each node as Root, Inner, or Leaf. The logic:
• Root → has no parent (p_id IS NULL)
• Inner → appears as a parent of someone else
• Leaf → everything else

I didn't get it right on the first try. 😅 My first attempt used "WHERE p_id = 1" inside the subquery, which hardcodes the root and breaks for any other tree structure. Wrong logic, wrong output.

Then I stepped back and thought about it properly (code is given in the image) 💡

The fix: instead of hardcoding, ask "does this node appear as a parent of anyone?" If yes → Inner. That's the real tree logic. Never hardcode what SQL can figure out dynamically. 🧩

Trial and error is part of the process. The wrong attempt taught me more than the right one. 😄

#SQL #100DaysOfCode #LearningInPublic #DataAnalytics #Consistency #DevJourney #LeetCode
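The "does this node appear as a parent?" logic can be checked locally with Python's sqlite3. The `Tree` table follows the LeetCode problem's schema; the sample data and this particular query shape are illustrative, since the author's own code is in the attached image.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Tree (id INTEGER, p_id INTEGER);
-- node 1 is the root; node 2 has children 4 and 5; nodes 3, 4, 5 are leaves
INSERT INTO Tree VALUES (1, NULL), (2, 1), (3, 1), (4, 2), (5, 2);
""")
rows = conn.execute("""
SELECT id,
       CASE
         WHEN p_id IS NULL THEN 'Root'
         -- no hardcoded root id: a node is Inner iff someone lists it as parent
         WHEN id IN (SELECT p_id FROM Tree WHERE p_id IS NOT NULL) THEN 'Inner'
         ELSE 'Leaf'
       END AS type
FROM Tree
ORDER BY id
""").fetchall()
print(rows)
```

The `IS NOT NULL` filter in the subquery matters: `id IN (...)` against a set containing NULL would never evaluate to true in SQL's three-valued logic.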
🚀 Day 10/10 — Optimization Series: End-to-End Mini Data Pipeline

👉 Basics are done.
👉 Now we move from working code → optimized code.

So far, you learned:
• SQL optimization
• Python best practices
• Configs & environments

Now… 👉 let’s connect everything into a real pipeline.

🔹 What is an End-to-End Pipeline?
👉 A complete flow: Ingest → Transform → Store → Automate

🔹 Example Flow

```python
import json

import pandas as pd
import requests

# Load config
with open("config.json") as f:
    config = json.load(f)

# Step 1: Ingest (API)
data = requests.get(config["api_url"]).json()

# Step 2: Transform
df = pd.DataFrame(data)
df = df.dropna()

# Step 3: Store
df.to_csv(config["output_path"], index=False)
```

🔹 Pipeline Architecture
👉 API → Python → Data Cleaning → Storage

🔹 Where Optimization Applies
• SQL → fast queries
• Python → clean structure
• Config → flexibility
• Env → security

🔹 Why This Matters
• Real-world data engineering
• Production-ready systems
• Scalable pipelines

🔹 Real-World Use
👉 ETL pipelines
👉 Data ingestion systems
👉 Analytics workflows

💡 Quick Summary: a pipeline is everything working together.

💡 Something to remember: individual skills are good… connected systems are powerful.

#SQL #Python #DataEngineering #LearningInPublic #TechLearning
#sqldiaries

Writing multiple lines of code used to feel intimidating, like staring at a wall with no clear way over it. But something changed when I started breaking problems into smaller steps.

Instead of trying to “solve everything at once,” I focused on understanding what’s actually happening in the problem. The clearer the problem became, the simpler the solution felt. LeetCode questions that once looked complex started turning into a series of logical decisions. Step by step. Piece by piece.

And that’s when it clicked: real-world data is messy, unpredictable, and rarely comes in a clean format. So the real skill isn’t just writing code; it’s thinking clearly, structuring chaos, and building solutions one logical block at a time.

That’s how you go from feeling stuck… to actually solving problems.