📐 SQL practice: weighted averages and data type precision

Continuing with DataLemur SQL challenges, I worked on computing the mean number of items per order — using aggregated data instead of raw rows. This required calculating a weighted average, where each item count is multiplied by its number of occurrences before dividing by the total number of orders.

One key detail in this problem is handling data types correctly. Without explicit casting, integer division would truncate the result and lead to an incorrect mean. By casting the result to a numeric type before rounding, the calculation preserves the expected precision.

The solution was accepted ✅, and it was a good reminder that correctness in SQL isn’t just about logic — it also depends on how the database evaluates expressions. This type of pattern shows up frequently when working with pre-aggregated data, where reconstructing metrics requires careful handling of weights and precision.

Thanks to @Nick Singh and the DataLemur team for the continued practice. And as always, I’m very grateful to @Luke Barousse — much of the SQL and PostgreSQL foundation I rely on comes from his teaching: https://lnkd.in/dZwd87sd

17 challenges in, and continuing to focus on writing queries that are not just correct, but numerically reliable.

If you’re also working through SQL interview-style problems, I’ve been using DataLemur — happy to share a referral if useful.

#SQL #PostgreSQL #DataEngineering #Analytics #LearningInPublic
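The casting detail above can be sketched in PostgreSQL roughly like this (the table and column names are my assumptions for illustration, not the actual challenge schema):

```sql
-- Hypothetical pre-aggregated table: one row per item_count,
-- with how many orders had that count.
-- Weighted mean = sum(count * occurrences) / sum(occurrences).
SELECT ROUND(
         SUM(item_count * order_occurrences)::NUMERIC
         / SUM(order_occurrences),
         1) AS mean_items_per_order
FROM items_per_order;
```

Without the ::NUMERIC cast, dividing one integer SUM by another would use integer division and silently drop the decimal part.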
💊 SQL practice: turning business definitions into ranked metrics

Continuing with DataLemur SQL challenges, I worked on a problem focused on identifying the most profitable products based on sales data. The task was to compute total profit per drug, defined as total_sales - cogs (i.e. Cost of Goods Sold), and then rank the top 3 products by profitability.

While the query itself is straightforward, the key step is correctly translating the business definition (profit) into a reliable calculation and then ordering the results accordingly.

The solution was accepted ✅, and it’s a good reminder that many real-world data problems are less about complexity and more about correctly modeling the metric being asked for. This kind of pattern shows up frequently when working with financial or product data, where derived metrics (profit, margin, growth) drive decisions.

Thanks to @Nick Singh and the DataLemur team for the continued practice. And as always, I’m very grateful to @Luke Barousse — much of the SQL and PostgreSQL foundation I rely on comes from his teaching: https://lnkd.in/dZwd87sd

18 challenges in, continuing to focus on expressing business logic clearly through SQL.

If you’re also working through SQL interview-style problems, I’ve been using DataLemur — happy to share a referral if useful.

#SQL #PostgreSQL #DataEngineering #Analytics #LearningInPublic
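A rough sketch of the ranking pattern, assuming a hypothetical pharmacy_sales table with one row per drug (names are illustrative, not the challenge's actual schema; multiple rows per drug would additionally need SUM and GROUP BY):

```sql
SELECT drug,
       (total_sales - cogs) AS total_profit  -- the business definition of profit
FROM pharmacy_sales
ORDER BY total_profit DESC
LIMIT 3;  -- top 3 most profitable products
```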
📊 SQL practice: building a histogram from raw activity data

Continuing with DataLemur SQL challenges, I worked on a problem that involved analyzing how employees interact with a database by constructing a histogram of query activity. The goal was to determine how many employees executed N unique queries during a given time window (Q3 2023), including those with zero activity.

I approached this in stages:
• filtering query activity within the time window
• counting distinct queries per employee
• re-aggregating those results to build the final distribution

Using a LEFT JOIN ensured that employees with no activity were included, which is critical when working with real-world datasets where the absence of data is meaningful.

The solution was accepted ✅, and it reinforced a pattern I’m seeing often: transforming granular event data into higher-level summaries that can support analysis and decision-making. This type of problem feels very aligned with analytics and data engineering workflows, where building reliable intermediate datasets is just as important as the final result.

Thanks to @Nick Singh and the DataLemur team for the continued practice. And as always, I’m very grateful to @Luke Barousse — much of the SQL and PostgreSQL foundation I rely on comes from his teaching: https://lnkd.in/dZwd87sd

15 challenges in, and continuing to focus on writing queries that scale from raw events to structured insights.

If you’re also working through SQL interview-style problems, I’ve been using DataLemur — happy to share a referral if useful.

#SQL #PostgreSQL #DataEngineering #Analytics #LearningInPublic
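The staged approach above could look roughly like this (the employees/queries tables and column names are assumptions, not the challenge's actual schema):

```sql
-- Stage 1: distinct queries per employee within Q3 2023.
-- Stage 2: LEFT JOIN from employees so zero-activity employees survive.
-- Stage 3: re-aggregate into a histogram of employees per query count.
WITH q3_activity AS (
    SELECT employee_id,
           COUNT(DISTINCT query_id) AS unique_queries
    FROM queries
    WHERE query_starttime >= '2023-07-01'
      AND query_starttime <  '2023-10-01'
    GROUP BY employee_id
)
SELECT COALESCE(a.unique_queries, 0) AS unique_queries,
       COUNT(e.employee_id)          AS employee_count
FROM employees e
LEFT JOIN q3_activity a ON a.employee_id = e.employee_id
GROUP BY COALESCE(a.unique_queries, 0)
ORDER BY unique_queries;
```

The COALESCE is what turns "no activity rows at all" into an explicit 0 bucket in the histogram.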
What I Learned in My SQL Class Yesterday

Yesterday’s class was all about SQL (Structured Query Language), and it was a really insightful session for me as I continued my journey into data analysis.

🔍 What is SQL?
SQL is a language used to interact with databases — it allows us to store, retrieve, and manage data efficiently.

💡 Key Things I Learned:

✔️ Uses of SQL
• Managing and organizing data
• Retrieving specific information from databases
• Updating and deleting records
• Supporting data-driven decisions

🔑 Understanding Keys (Very Important!)
• Primary Key: a unique identifier for each record in a table (no duplicates, no null values)
• Foreign Key: a field that connects one table to another, helping to maintain relationships between tables

🗂️ Database Schema
I also learned about the schema, which is basically the structure or blueprint of a database. It defines how tables, fields, and relationships are organized.

This class helped me better understand how databases are structured and how data is connected behind the scenes. I’m excited to keep building my SQL skills and apply them in real-world scenarios! 🚀

#SQL #DataAnalytics #LearningJourney #TechSkills #Database #CareerGrowth

Omolola Okebiorun TechCrush
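To make the two kinds of keys concrete, here is a minimal hypothetical schema (the table and column names are mine, not from the class):

```sql
-- Primary key: a unique, non-null identifier for each row.
CREATE TABLE customers (
    customer_id INT PRIMARY KEY,
    name        VARCHAR(100) NOT NULL
);

-- Foreign key: links each order back to exactly one customer,
-- so the database enforces the relationship between the tables.
CREATE TABLE orders (
    order_id    INT PRIMARY KEY,
    customer_id INT NOT NULL REFERENCES customers(customer_id),
    order_date  DATE
);
```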
✨ Day 54 – Introduction to SQL

Today marks the beginning of my journey into SQL — the backbone of data management and analysis 📊

🔹 What is SQL?
SQL (Structured Query Language) is used to communicate with databases. It helps in storing, retrieving, and managing data efficiently.

🔹 Why is SQL important?
Almost every organization relies on databases, making SQL a must-have skill for data professionals.

🔹 Basic SQL Commands:
✔ SELECT – retrieve data from a database
✔ FROM – specify the table
✔ WHERE – filter records
✔ ORDER BY – sort data
✔ LIMIT – control the number of results

🔹 Types of SQL:
🔸 DDL (Data Definition Language) – CREATE, ALTER, DROP
🔸 DML (Data Manipulation Language) – INSERT, UPDATE, DELETE
🔸 DQL (Data Query Language) – SELECT
🔸 DCL (Data Control Language) – GRANT, REVOKE

📌 Takeaway: SQL is the foundation of working with data. Mastering it opens doors to data analysis, data science, and backend development.

#SQL #DataAnalytics #LearningJourney #Databases #TechSkills #FrontlineMedia
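A small query combining all five basic commands in order (the employees table is a made-up example):

```sql
SELECT name, salary      -- SELECT: which columns to retrieve
FROM employees           -- FROM: which table
WHERE salary > 50000     -- WHERE: filter rows
ORDER BY salary DESC     -- ORDER BY: sort the result
LIMIT 5;                 -- LIMIT: cap the number of rows returned
```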
Most people learn SQL. Fewer people learn to think in SQL. There's a difference.

Learning SQL means you can write a JOIN or a GROUP BY when you see the problem coming. Thinking in SQL means you look at a messy business question and your brain automatically breaks it into layers: what's the grain of this data, where does it need to be aggregated, what's the most efficient path to get there.

That shift didn't happen for me in a classroom. It happened when I was building ETL pipelines and a query that looked perfectly fine was silently returning duplicate rows because I hadn't accounted for a many-to-many join upstream.

A few things that actually moved the needle:
→ Writing CTEs instead of subqueries. That forces you to name each logic step, which forces you to understand it.
→ Thinking about indexes before writing joins on large tables.
→ Reading query execution plans, not just query results.

The last one is underrated. The result can look correct and the query can still be costing you 10x more than it should.

SQL is one of those skills where the gap between "I know it" and "I actually know it" is wider than most people admit.

What's the SQL concept that took you the longest to really click?

#SQL #DataAnalytics #DataEngineering
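To illustrate the CTE point and the duplicate-rows trap together, a sketch with invented table names — each named step can be sanity-checked on its own:

```sql
-- Deduplicate BEFORE aggregating, so a many-to-many relationship
-- upstream can't silently multiply rows in the final join.
WITH distinct_customer_products AS (
    SELECT DISTINCT customer_id, product_id
    FROM order_items
),
purchases_per_customer AS (
    SELECT customer_id, COUNT(*) AS products_bought
    FROM distinct_customer_products
    GROUP BY customer_id
)
SELECT c.name, p.products_bought
FROM customers c
JOIN purchases_per_customer p ON p.customer_id = c.customer_id;
```

Prefixing a query with EXPLAIN (or EXPLAIN ANALYZE in PostgreSQL) shows the execution plan rather than the results, which is how you catch the "correct but 10x too expensive" case.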
From simple queries to real-world SQL thinking 🚀

Today I solved a problem where I had to analyze transactions data and report:
• Total transactions
• Approved transactions
• Total amount
• Approved amount
• Grouped by month and country

At first, it looked like a basic aggregation problem… but it actually required combining multiple concepts:
✔ Extracting the month from a date
✔ Grouping on multiple columns
✔ Conditional aggregation
✔ Writing clean and scalable SQL

🧠 Key learning: instead of writing multiple queries, everything can be solved in a single query using conditional aggregation.

💡 One powerful trick: using conditions inside SUM. In MySQL, SUM(state = 'approved') counts approved transactions efficiently, because the comparison evaluates to 1 when true and 0 when false.

💻 Solution:

SELECT
    DATE_FORMAT(trans_date, '%Y-%m') AS month,
    country,
    COUNT(*) AS trans_count,
    SUM(state = 'approved') AS approved_count,
    SUM(amount) AS trans_total_amount,
    SUM(CASE WHEN state = 'approved' THEN amount ELSE 0 END) AS approved_total_amount
FROM Transactions
GROUP BY DATE_FORMAT(trans_date, '%Y-%m'), country;

🚀 This problem helped me strengthen: SQL aggregation • data analysis thinking • real-world query logic

Learning SQL step by step and sharing the journey 👇

#SQL #DataAnalytics #LearningInPublic #LeetCode #100DaysOfCode
🚀 From Writing SQL Queries → Thinking Like a Data Professional

Most SQL problems look easy… until you try to optimize them. Today I worked on a simple problem.

🧠 Problem statement: fetch ITEM_NAME and PRICE from SHOP_1 and SHOP_2 where PRICE > 25.

🧩 The obvious solution:

SELECT ITEM_NAME, PRICE FROM SHOP_1 WHERE PRICE > 25
UNION ALL
SELECT ITEM_NAME, PRICE FROM SHOP_2 WHERE PRICE > 25;

✔ Correct
✔ Straightforward

But… is it the best way?

⚡ The optimized mindset:

SELECT ITEM_NAME, PRICE
FROM (
    SELECT ITEM_NAME, PRICE FROM SHOP_1
    UNION ALL
    SELECT ITEM_NAME, PRICE FROM SHOP_2
) AS COMBINED
WHERE PRICE > 25;

🔍 What changed? Instead of just solving the problem, I focused on improving the approach:
🔹 Reduced repeated filtering logic
🔹 Made it scalable (works for multiple tables)
🔹 Improved readability

💡 Real learning: writing SQL isn’t just about getting the output. It’s about:
🔹 Thinking in sets
🔹 Writing scalable logic
🔹 Making queries easy to maintain

🏆 Final thought:
👉 Anyone can write a working query.
👉 But strong data analysts write queries that scale.

💬 Curious — would you filter before or after combining data?

#SQL #DataAnalytics #DataAnalyst #Learning #InterviewPrep #DataEngineering #Optimization

Coding Ninjas Codebasics
BLOG 12 — SQL Subqueries Explained with Examples

In this blog, I explained SQL subqueries, which let you nest one query inside another to solve complex problems.

Topics covered:
✔ What is a subquery
✔ Why we use subqueries
✔ Where we use subqueries (SELECT, WHERE, FROM)
✔ Types of subqueries:
• Single row
• Multiple row
• Multiple column
• Correlated
• Non-correlated
✔ How subqueries work step by step
✔ When to use subqueries vs JOINs

Subqueries are very powerful and widely used in real-world data analysis and SQL interviews.

Read here: https://lnkd.in/d5HJAeVR

Grateful to Innomatics Research Labs for providing practical exposure and structured learning. Excited to continue building strong foundations in SQL, Data Analytics, and Data Science.

Special thanks to the team for their guidance and support:
Co-Founder & CEO – Kalpana Katiki Reddy
Regional Head – VAMSI KRISHNA KANAGALA
Trainer – Swathi Reddy Thatikonda Abhilash Manikanta
Mentors: Gogula Vinay Koduri Srihari Dinesh Bodigadla Rahul Janjirala
Program Manager – Raghu Ram Aduri
Placement Team: Sigilipelli Yeshwanth Sravani Burma Rishita Bhargavi K Eswarkarthic M

SQL | Python | Pandas | Data Analytics | Statistics

#SQL #DataAnalytics #Database #LearningJourney #InnomaticsResearchLabs #CareerGrowth #Beginner #Portfolio #100DaysOfLearning
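As a quick taste of the correlated type from the list above, a classic example (the schema is invented for illustration, not taken from the blog):

```sql
-- Correlated subquery: the inner query is re-evaluated per outer row,
-- because it references e.department_id from the outer query.
SELECT e.name, e.salary
FROM employees e
WHERE e.salary > (
    SELECT AVG(salary)
    FROM employees
    WHERE department_id = e.department_id
);
```

A non-correlated subquery, by contrast, runs once and its result is reused for every outer row.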
🚀 Week 4 Complete – Advancing My SQL & Database Skills

This week, I focused on strengthening my understanding of advanced SQL concepts and how databases are managed in real-world environments. Here are the key areas I worked on:

🔹 SQL Commands & Scripting
✔ Used SET commands and SPOOL to control and store query outputs
✔ Understood how scripting enables efficient task automation

🔹 Pseudo Columns & Sequences
✔ Worked with ROWNUM, SYSDATE, and UID for dynamic data handling
✔ Implemented SEQUENCE (NEXTVAL, CURRVAL) to generate unique identifiers
✔ Learned how sequences are used for managing primary keys

🔹 Views, Indexes & Synonyms
✔ Created views to simplify complex queries and improve reusability
✔ Used indexes to enhance query performance
✔ Applied synonyms for easier and more flexible object access

🔹 Subqueries & SQL Categories
✔ Practiced subqueries to handle complex data retrieval scenarios
✔ Gained clarity on DDL, DML, DCL, and TCL operations

🔹 User Management & Security
✔ Managed database users and permissions using GRANT and REVOKE
✔ Understood the importance of access control and data security

💡 Key takeaway: mastering advanced SQL and database management concepts is essential for building scalable, efficient, and secure data systems.

Step by step, I’m moving from basic queries → real-world database thinking 📊
Still learning. Still improving. Just getting started 💯

Grateful for the guidance and support from Praveen Kalimuthu and the Tech Data Community for the hands-on, industry-aligned experience.

#SQL #DataAnalytics #DataAnalyst #LearningInPublic #TechJourney #Database #OracleSQL #Analytics #DataScience #CareerGrowth #FutureDataAnalyst #100DaysOfCode #Upskill #TechCommunity #LinkedInLearning #CodingJourney #DataSkills #TDC
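The sequence workflow above can be sketched in Oracle SQL like this (the object names are illustrative, not from the coursework):

```sql
-- Create a sequence that hands out unique primary-key values.
CREATE SEQUENCE emp_seq START WITH 1 INCREMENT BY 1;

-- NEXTVAL advances the sequence and returns the new value.
INSERT INTO employees (employee_id, employee_name)
VALUES (emp_seq.NEXTVAL, 'Sample Name');

-- CURRVAL returns the last value NEXTVAL produced in this session.
SELECT emp_seq.CURRVAL FROM dual;
```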
🚀 Day 1 of 30 Days SQL Challenge – What is SQL?

Starting my 30 Days SQL Challenge with the very first and most important question: what is SQL? 💻

SQL stands for Structured Query Language. It is a powerful language used to communicate with databases. In simple terms, SQL helps us store, retrieve, update, and manage data efficiently.

In today’s data-driven world, almost every organization works with huge amounts of data. Whether it’s a bank, an e-commerce platform, or a social media app — data is everywhere. And SQL is the tool that helps us make sense of that data.

💡 Why is SQL important?
✔ It helps in managing large datasets
✔ Used by data analysts, data scientists, and developers
✔ Essential for decision-making in businesses
✔ Works with popular databases like MySQL, Oracle, and SQL Server

📊 What can you do with SQL?
• Fetch data using queries
• Filter and sort information
• Perform calculations
• Join multiple tables
• Analyze trends and patterns

Learning SQL is not just about writing queries — it’s about understanding data and turning it into meaningful insights 📈

This is just the beginning… Stay tuned for Day 2! 🚀

#SQL #DataAnalytics #30DaysChallenge #LearningJourney #DataScience #Upskilling #CareerGrowth #Consistency #TechSkills