Data with Consequences: Building an Automated Penalty System 🎓💸

Day 70/100: Data is just information until you use it to drive action. 🏗️

I’ve hit Day 70 of my #100DaysOfCode journey! After finishing the core SQL modules, I wanted to build something that mirrors real-world administrative systems. Today, I built an Automated Attendance & Fine System that bridges the gap between database queries and business logic.

Technical Highlights:
⚙️ Schema Evolution: Using ALTER TABLE to dynamically add a new attribute (Attendance %) to an existing database.
🎯 Conditional Triggers: Fetching the specific records that fall below a threshold (75% attendance) to initiate processing.
🧮 Algorithmic Penalties: Using Python to calculate dynamic fines based on the gap between the current data and the required benchmark.
📊 Reporting: Generating a clean, actionable summary that turns raw database rows into a financial audit.

The Engineering Mindset: Whether it’s a bank charging a late fee or a gym identifying expired memberships, the logic is the same: Query -> Analyze -> Act.

Check out my GitHub repository here: https://lnkd.in/d9Yi9ZsC

#SQL #Python #100DaysOfCode #BTech #IILM #ComputerScience #AIML #Automation #SoftwareEngineering #LearningInPublic #WomenInTech #DataEngineering
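The Query -> Analyze -> Act loop described above can be sketched end to end in a few lines. Everything below (the table layout, student names, and the per-percent fine rate) is an illustrative assumption rather than the actual repo code, with SQLite standing in for the database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT)")
# Schema evolution: add the attendance attribute to the existing table.
cur.execute("ALTER TABLE students ADD COLUMN attendance REAL")
cur.executemany("INSERT INTO students VALUES (?, ?, ?)",
                [(1, "Asha", 82.0), (2, "Ravi", 68.5), (3, "Meena", 74.0)])

THRESHOLD = 75.0
FINE_PER_PERCENT = 50  # hypothetical rate: 50 units per % below the benchmark

# Conditional trigger: fetch only the records below the threshold.
cur.execute("SELECT name, attendance FROM students "
            "WHERE attendance < ? ORDER BY id", (THRESHOLD,))
# Algorithmic penalty: the fine grows with the gap to the benchmark.
report = [(name, round((THRESHOLD - att) * FINE_PER_PERCENT, 2))
          for name, att in cur.fetchall()]
for name, fine in report:
    print(f"{name}: fine = {fine}")
conn.close()
```

The same shape generalizes to the bank-fee and gym-membership examples: only the query predicate and the penalty formula change.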
Automated Attendance Penalty System with SQL and Python
🚀 Day 29/30 of my SQL Problem-Solving Challenge

🔍 Problem: Average Time of Process per Machine
- Find the average processing time for each machine.
- Each process has a start and an end timestamp, and the goal is to compute the average time taken per process for every machine.

🧠 Approach
- Joined the table with itself on machine_id and process_id
- Matched start rows with their corresponding end rows
- Calculated processing time as end_time - start_time
- Used AVG() to compute the average per machine and ROUND() for formatting

💡 Key Learnings
- The importance of pairing related rows (start ↔ end) before aggregation
- Avoiding incorrect logic like SUM(end) - SUM(start)
- Using a self JOIN for event-based problems

#SQL #LeetCode #DataAnalytics #CodingJourney #SDE #DailyPractice
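The pair-then-aggregate idea can be checked locally. The schema mirrors the LeetCode "Activity" table; the timestamps are made-up sample data, run through SQLite from Python:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE Activity (
    machine_id INTEGER, process_id INTEGER,
    activity_type TEXT, timestamp REAL)""")
cur.executemany("INSERT INTO Activity VALUES (?, ?, ?, ?)", [
    (0, 0, "start", 0.0), (0, 0, "end", 1.0),
    (0, 1, "start", 3.0), (0, 1, "end", 4.5),
    (1, 0, "start", 0.0), (1, 0, "end", 1.0),
])

# The self join pairs each process's start row with its matching end row,
# then AVG runs over the resulting per-process durations.
cur.execute("""
    SELECT a.machine_id,
           ROUND(AVG(b.timestamp - a.timestamp), 3) AS processing_time
    FROM Activity a
    JOIN Activity b
      ON a.machine_id = b.machine_id AND a.process_id = b.process_id
    WHERE a.activity_type = 'start' AND b.activity_type = 'end'
    GROUP BY a.machine_id
    ORDER BY a.machine_id
""")
result = cur.fetchall()
print(result)  # [(0, 1.25), (1, 1.0)]
conn.close()
```

Machine 0 has durations 1.0 and 1.5, so its average is 1.25; the naive SUM(end) - SUM(start) trick happens to agree here but breaks as soon as rows are unpaired.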
What is a Temp Table?

A Temporary Table is a table used to store data temporarily during a session.
• Created inside tempdb
• Automatically deleted when the session ends
• Useful for handling intermediate data in complex queries

Types of Temp Tables

1. Local Temp Table (#TempTable)
• Prefixed with a single hash (#)
• Accessible only within the current session
• Automatically dropped when the session ends

🧠 Example:
CREATE TABLE #Employees (
    ID INT,
    Name VARCHAR(50)
);

2. Global Temp Table (##TempTable)
• Prefixed with a double hash (##)
• Accessible across multiple sessions
• Dropped only when all sessions using it are closed

🧠 Example:
CREATE TABLE ##Employees (
    ID INT,
    Name VARCHAR(50)
);

🔹 Why Use Temp Tables?
✔ Break down complex queries into smaller steps
✔ Improve readability & debugging
✔ Store intermediate results
✔ Reuse data within a session
✔ Can improve performance in large data operations

#Keys #Indexes #indexing #SqlIndex #SQL #DataEngineering #Data #DataEngineer #ETL #DataPipelines #CloudComputing #Python #TechCareers #Learning #SQLDeveloper #DataLife #TechHumor #SoftwareEngineering #Analytics #Programming #DatabaseDeveloper #Database #BusinessAnalytics #DataAnalytics #Upskilling #DBA
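The # and ## prefixes above are SQL Server conventions. To see session-scoped behavior without a SQL Server instance, here is a rough SQLite analogue (CREATE TEMP TABLE) driven from Python; table and column names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Session-scoped: this table lives only as long as this connection,
# roughly like a local #TempTable living only as long as its session.
cur.execute("CREATE TEMP TABLE Employees (ID INTEGER, Name TEXT)")
cur.executemany("INSERT INTO Employees VALUES (?, ?)",
                [(1, "Ana"), (2, "Raj")])
cur.execute("SELECT COUNT(*) FROM Employees")
count = cur.fetchone()[0]
print(count)
conn.close()  # the temp table is dropped here, with the session
```

SQLite has no global (##) equivalent; cross-session sharing there would need a regular table.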
Day 13: 90-Day Coding Challenge 🚀

Today I worked on a classic SQL problem: identifying users who logged in for N consecutive days. At first glance, this looks like a simple aggregation problem, but the real challenge is detecting continuous sequences of dates without gaps.

🔍 Approach I used:
• Leveraged window functions like ROW_NUMBER()
• Created a grouping key by subtracting the row number from the login date; this key stays constant across a run of consecutive days
• Aggregated on this derived key to identify continuous streaks
• Filtered for users whose streak length ≥ N

💡 Key Insight: Instead of checking each day individually, transforming dates into groups detects consecutive patterns efficiently.

⚡ This is a powerful technique often used in:
• User retention analysis
• Streak tracking (daily active users)
• Behavioral analytics

Time Complexity: O(n log n), due to the sorting behind the window functions.

Today’s learning highlights:
✅ Mastered handling consecutive patterns in SQL
✅ Practiced window functions on real-world scenarios
✅ Improved my thinking around sequence detection
✅ Strengthened my SQL problem-solving skills

These kinds of problems really show how SQL can go beyond simple queries into analytical problem solving 🔥 Excited for Day 14!

#90DaysOfCode #SQL #WindowFunctions #DataEngineering #Analytics #ProblemSolving #CodingJourney
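A minimal sketch of that date-minus-row-number trick, assuming a hypothetical logins table and N = 3, run through SQLite from Python (window functions need SQLite ≥ 3.25, which ships with modern Python builds):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE logins (user_id INTEGER, login_date TEXT)")
cur.executemany("INSERT INTO logins VALUES (?, ?)", [
    (1, "2024-01-01"), (1, "2024-01-02"), (1, "2024-01-03"),
    (2, "2024-01-01"), (2, "2024-01-03"),  # user 2 has a gap
])
# julianday(date) - ROW_NUMBER() is constant within a run of consecutive
# days, so grouping on it isolates each streak.
cur.execute("""
    WITH numbered AS (
        SELECT user_id,
               julianday(login_date)
               - ROW_NUMBER() OVER (PARTITION BY user_id
                                    ORDER BY login_date) AS grp
        FROM (SELECT DISTINCT user_id, login_date FROM logins)
    )
    SELECT DISTINCT user_id
    FROM numbered
    GROUP BY user_id, grp
    HAVING COUNT(*) >= 3
    ORDER BY user_id
""")
streak_users = [row[0] for row in cur.fetchall()]
print(streak_users)
conn.close()
```

Only user 1 has three consecutive dates, so only user 1 survives the HAVING filter.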
🚀 From Overthinking to Clean Logic: an SQL Growth Moment!

Solved “Triangle Judgement” on LeetCode, and this one taught me something important 💡

At first, I tried solving it using nested CASE statements and multiple conditions. It worked, but it was unnecessarily complex. Then I realized the problem boils down to a simple mathematical rule 👇

👉 A triangle is valid if:
x + y > z
x + z > y
y + z > x

🧠 Final Clean Approach:
SELECT x, y, z,
       CASE WHEN x + y > z AND x + z > y AND y + z > x
            THEN 'Yes' ELSE 'No'
       END AS triangle
FROM Triangle;

📊 Result:
✔️ Accepted ✅ (11/11 test cases passed)
✔️ Runtime: 305 ms 🔥

Key Takeaway: Sometimes the best solution isn’t the most complex one; it’s the simplest correct logic. Learning to simplify is just as important as learning to solve 💪

#SQL #LeetCode #CodingJourney #ProblemSolving #Learning #Tech #PlacementPreparation
🚀 Day 28/100 – LeetCode SQL Challenge

Today’s problem: Triangle Judgement

📌 What I learned today:
- How to apply a real-world mathematical rule (the triangle inequality) in SQL
- Using CASE statements to create conditional outputs
- Writing clean, readable SQL queries for decision-making problems

🔍 Key Concept: A triangle is valid only if:
x + y > z
y + z > x
x + z > y

💡 If all three conditions are satisfied → “Yes”; otherwise → “No”

🧠 This problem helped me understand that SQL is not just about data retrieval, but also about applying logical conditions effectively.

Consistency is the key 🔑

#Day28 #100DaysOfCode #LeetCode #SQL #Learning #CodingJourney #PlacementPreparation
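The inequality rule can be sanity-checked locally by running the same CASE logic through SQLite from Python; the sample rows here are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Triangle (x INTEGER, y INTEGER, z INTEGER)")
cur.executemany("INSERT INTO Triangle VALUES (?, ?, ?)",
                [(13, 15, 30), (10, 20, 15)])
# All three pairwise sums must exceed the remaining side.
cur.execute("""
    SELECT x, y, z,
           CASE WHEN x + y > z AND x + z > y AND y + z > x
                THEN 'Yes' ELSE 'No' END AS triangle
    FROM Triangle
""")
verdicts = cur.fetchall()
print(verdicts)
conn.close()
```

(13, 15, 30) fails because 13 + 15 = 28 is not greater than 30; (10, 20, 15) passes all three checks.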
🚀 Level Up Your Data Validation with Pandera!

If you're working with data pipelines in Python, you already know one thing:
👉 bad data = bad decisions

That’s where Pandera comes in. It is a powerful, elegant library that brings data validation to pandas DataFrames in a clean and scalable way.

💡 Why Pandera?
✅ Define schemas for your DataFrames
✅ Validate data types, ranges, and custom rules
✅ Catch errors early in your pipelines
✅ Integrate seamlessly with pandas workflows
✅ Improve reliability in production data systems

🔍 Instead of guessing whether your data is clean, you can enforce it with confidence. Here’s a quick sense of what you can do:
- Ensure columns have the right types
- Validate constraints like `age > 0`
- Apply checks across entire datasets

🔥 Whether you're a Data Engineer, Data Scientist, or Data Analyst, Pandera helps you build trustworthy data pipelines.

👉 GitHub repo: https://lnkd.in/e5weFTjb

#DataEngineering #Python #DataValidation #Pandas #DataQuality #MachineLearning #Analytics
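This is not Pandera itself (see its docs for the real DataFrameSchema API); it is a tiny stdlib-only sketch of the underlying idea the post describes: declare a schema (expected type plus a constraint per column) and collect violations instead of guessing. All names and rules here are invented for illustration:

```python
# Hypothetical schema: column -> (expected type, constraint predicate)
schema = {
    "name": (str, lambda v: len(v) > 0),
    "age": (int, lambda v: v > 0),  # the "age > 0" rule from the post
}

def validate(rows, schema):
    """Return a list of (row_index, column, message) for every violation."""
    errors = []
    for i, row in enumerate(rows):
        for col, (expected_type, check) in schema.items():
            value = row.get(col)
            if not isinstance(value, expected_type):
                errors.append((i, col, f"expected {expected_type.__name__}"))
            elif not check(value):
                errors.append((i, col, "constraint failed"))
    return errors

rows = [{"name": "Asha", "age": 29}, {"name": "Ravi", "age": -3}]
errs = validate(rows, schema)
print(errs)
```

Pandera gives you the same catch-errors-early behavior with far richer checks, vectorized over whole DataFrames rather than row by row.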
🚀 Mastering Data Structures: Linked Lists Deep Dive 🚀

Let's unravel the mystery of Linked Lists! 🧵🔗

In simple terms, a linked list is a linear data structure where each element is a separate object called a node. These nodes are connected using pointers, forming a chain. But why should developers care? 🤔 Understanding linked lists is crucial for optimizing memory usage and efficiently managing data, especially when dealing with frequent insertions and deletions. It's a fundamental concept to grasp before moving on to more complex data structures.

Here's a breakdown to get you started:
1️⃣ Create a Node class with data and a reference to the next node.
2️⃣ Implement methods for inserting, deleting, and traversing nodes.

```python
class Node:
    def __init__(self, data=None):
        self.data = data
        self.next = None  # reference to the next node in the chain
```

Pro tip: Keep track of the head and tail nodes for faster operations! 🚴

Common mistake alert: Forgetting to update the pointers correctly when inserting or deleting nodes can lead to bugs. 🐞 Double-check your logic!

What's your favorite use case for linked lists? Share below! 💬

🌐 View my full portfolio and more dev resources at tharindunipun.lk

#DataStructures #LinkedLists #CodingBeginners #DeveloperTips #PythonProgramming #MemoryOptimization #CodeOptimization #CodingJourney #LearnToCode
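One possible minimal implementation of steps 1️⃣ and 2️⃣ above (append at the tail, delete by value, traverse), keeping head and tail pointers as the pro tip suggests; a sketch, not the only way to do it:

```python
class Node:
    def __init__(self, data=None):
        self.data = data
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None
        self.tail = None  # tracking the tail makes append O(1)

    def append(self, data):
        node = Node(data)
        if self.head is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node

    def delete(self, data):
        """Remove the first node holding `data`; update both pointers."""
        prev, cur = None, self.head
        while cur:
            if cur.data == data:
                if prev is None:
                    self.head = cur.next   # deleting the head
                else:
                    prev.next = cur.next   # bypass the deleted node
                if cur is self.tail:
                    self.tail = prev       # deleting the tail
                return True
            prev, cur = cur, cur.next
        return False

    def to_list(self):
        out, cur = [], self.head
        while cur:
            out.append(cur.data)
            cur = cur.next
        return out

ll = LinkedList()
for v in (1, 2, 3):
    ll.append(v)
ll.delete(2)
print(ll.to_list())  # -> [1, 3]
```

Note how delete handles all three pointer cases (head, middle, tail); missing any one of them is exactly the common mistake flagged above.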
Day 36/90

Why does "working" code often fail in production? Because edge cases like N+1 queries, duplicate data, and inaccurate stats don't show up when you're testing with just two users. Today I refactored the Course Module to fix these foundation issues and make sure the backend is actually ready for real-world use.

Day 36 was spent fixing bugs and making database queries faster. Instead of adding new features, the focus was on making sure the existing code works correctly under different conditions: examining how the database handles information and closing gaps where incorrect data could get through.

Backend:
• Security Hardening: Replaced manual database lookups with self.get_object() to standardize permission and error handling.
• Database Integrity: Implemented a conditional UniqueConstraint that allows re-enrollment while still preventing duplicate active records.
• Performance Gains: Applied select_related and prefetch_related to kill N+1 queries in the student and teacher listings.
• SQL Annotations: Offloaded the math for question counts and course statistics to the database using annotations.
• Data Accuracy: Updated the counting logic to exclude soft-deleted records, fixing inflated statistics on the dashboard.
• API Reliability: Disabled page-size overrides and used local paginator instances to keep API results consistent with the documentation.
• Data Migration: Created a RunPython migration script to convert legacy string statuses to new character codes without data loss.
• Response Cleanup: Refactored the teacher endpoint to return direct resources and resolved field conflicts in nested serializers.

At what stage of a project do you stop adding new features to focus entirely on refactoring and hardening the code?

#Day90Challenge #Django #Python #Backend #BuildInPublic
Advanced DATA Step Techniques

ADVANCE YOUR CAREER ‼️

Advanced SAS® DATA Step Programming Techniques with Josh Horstman
Wednesday, May 6 | 10am-2pm Pacific | $99

To solve complex coding problems with the SAS® DATA step, one must go beyond a basic understanding of the individual statements. You need to understand how the various statements interact with each other and how their options can be leveraged to build DATA step code that provides innovative solutions to the toughest of problems. Based on Art Carpenter’s book, Carpenter’s Guide to Innovative SAS® Techniques, this class is a must for the DATA step programmer who wants to take his or her programs to the next level.

Topics include:
☉ Working across multiple observations using look-ahead and look-back techniques
☉ Employing the DOW loop
☉ Taking advantage of double SET statements
☉ Working with hash objects
☉ Performing table lookups
☉ Using arrays to transpose data from columns to rows and back again
☉ Evaluating complex expressions
☉ Applying data set options
☉ Adopting new DATA step functions (and old functions with new options)

And more! This course is designed for students with a basic understanding of the DATA step and its primary statements. The material focuses on advanced topics that give the student a deeper understanding of the operation of the DATA step. Through examples, students are exposed to innovative techniques for solving difficult programming problems.

For more information and to register: https://lnkd.in/g__Qzpwc

#SAS #DATAstep #Programming
🚀 Day 8: Today I learned two very important concepts in Pandas: Concatenate and Merge 🔥

📌 1. Concatenate (pd.concat)
Concatenate means combining multiple DataFrames either row-wise or column-wise. It is useful when we have similar data and want to stack or join them together.

👉 Types:
Row-wise (axis=0) → adds rows
Column-wise (axis=1) → adds columns

👉 Syntax:
import pandas as pd
result = pd.concat([df1, df2], axis=0)  # Row-wise
result = pd.concat([df1, df2], axis=1)  # Column-wise

📌 2. Merge (pd.merge)
Merge means combining DataFrames based on a common column (key), similar to SQL JOINs.

👉 Types of merge:
Inner join → common data only
Left join → all data from the left + matching rows from the right
Right join → all data from the right + matching rows from the left
Outer join → all data from both

👉 Syntax:
result = pd.merge(df1, df2, on='column_name', how='inner')

💡 Key Difference:
Concat → simply stacks data (no key required)
Merge → joins data using a common column (like SQL)

🎯 Real-life Example:
Concat → combine monthly sales data
Merge → combine customer details with orders

✨ Learning Pandas step by step is making data handling easier and more powerful!

#Day8 #Pandas #DataAnalytics #Python #LearningJourney
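The two operations side by side, matching the customer-and-orders example; the frames and values below are tiny made-up samples:

```python
import pandas as pd

df1 = pd.DataFrame({"id": [1, 2], "name": ["Asha", "Ravi"]})
df2 = pd.DataFrame({"id": [3], "name": ["Meena"]})
orders = pd.DataFrame({"id": [1, 3], "amount": [250, 400]})

# Concat: stack similarly-shaped frames row-wise; no key needed.
customers = pd.concat([df1, df2], axis=0, ignore_index=True)

# Merge: SQL-style inner join on the shared "id" column,
# so only customers with a matching order survive.
joined = pd.merge(customers, orders, on="id", how="inner")
print(joined)
```

Switching how="inner" to "left" would keep Ravi (id 2) with a NaN amount, mirroring a SQL LEFT JOIN.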