Today I revised some of my SQL concepts and practiced a few Python loops to strengthen my logic-building skills. Here's a quick glimpse:

SQL Practice:
- Created a view (vw_Category_Profit) using a CTE + subquery to calculate total revenue and total cost per category.
- Built another query using two CTEs to calculate total profit and extract the top 3 categories by profit.
It's amazing how much clarity comes when you connect concepts like CTEs, views, and joins!

Python Practice:
- Nested for loops to print pattern combinations
- Looped through two lists (colors & sizes)
- Wrote a while loop and a limited-attempt for loop to build simple user-input validation logic

Each day I try to connect what I already know with what I'm learning. SQL builds structure; Python builds logic. Together, they're the backbone of Data Analytics.

#SQL #Python #DataAnalytics #LearningJourney #ContinuousLearning #CareerGrowth #LinkedInLearning #PracticeMakesPerfect #CTE #View #Loops
Revising SQL and Python to strengthen logic and data analytics skills
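The Python loop drills described above can be sketched in a few lines. This is a rough sketch, not the author's code: the list contents, the attempt limit, and the canned-answers input are my own assumptions.

```python
# Sketch of the loop drills above; names and values are invented for illustration.
colors = ["Red", "Green", "Blue"]
sizes = ["S", "M", "L"]

# Nested for loops: every color/size combination
combos = []
for color in colors:
    for size in sizes:
        combos.append(f"{color}-{size}")

# Limited-attempt validation: stop after max_attempts tries
def validate(answers, expected="yes", max_attempts=3):
    for _, answer in zip(range(max_attempts), answers):
        if answer == expected:
            return True
    return False

print(len(combos))                          # 9 combinations
print(validate(["no", "yes"]))              # True (accepted on attempt 2)
print(validate(["no", "no", "no", "yes"]))  # False (4th attempt never reached)
```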
More Relevant Posts
-
SQL vs Pandas: Both are powerful for data analysis — just used differently 👇 🔹 SQL → Works best for querying large databases. 🔹 Pandas → Great for data manipulation in Python. Example: SQL: SELECT AVG(salary) FROM employees; Pandas: df['salary'].mean() Different tools, same goal — turning data into insights! 📊 #SQL #Pandas #Python #DataAnalytics #LearningEveryday
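The two one-liners above, side by side as a runnable sketch; the tiny employees table is made up for illustration.

```python
import pandas as pd

# Made-up stand-in for an employees table
df = pd.DataFrame({"salary": [50000, 60000, 70000]})

# SQL: SELECT AVG(salary) FROM employees;
avg_salary = df["salary"].mean()
print(avg_salary)  # 60000.0
```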
-
🧩 Pandas merge() vs SQL JOIN: Same Logic, Different Syntax If you understand SQL joins, you already understand most of what pandas.merge() does. Both are designed to combine tables based on shared keys — the difference is just in the syntax. 🎯 INNER JOIN — keeps only matching records from both tables. ⬅️ LEFT JOIN — keeps all rows from the left, and matching ones from the right. ➡️ RIGHT JOIN — keeps all rows from the right, and matching ones from the left. 🌐 FULL OUTER JOIN — keeps everything from both sides, matched or not. ➰ CROSS JOIN — gives every possible combination (no key needed). It’s the same logic you use in SQL, but with the flexibility of Python. 💡 Pro tip: You can join on multiple columns, rename overlapping fields, or even merge on columns with different names using left_on and right_on. Mastering merge() makes it easy to move between SQL thinking and Python analysis — a must-have skill for any data professional. 👉 Do you find pandas.merge() easier or more confusing than SQL joins? #Python #Pandas #SQL #DataAnalytics #DataScience #CodingTips #Learning
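The join types above map onto merge()'s `how` argument. A minimal sketch with two made-up tables (cross joins via `how="cross"` need pandas 1.2+):

```python
import pandas as pd

left = pd.DataFrame({"key": [1, 2, 3], "left_val": ["a", "b", "c"]})
right = pd.DataFrame({"key": [2, 3, 4], "right_val": ["x", "y", "z"]})

inner = left.merge(right, on="key", how="inner")     # keys 2 and 3 only
left_join = left.merge(right, on="key", how="left")  # all left keys kept
outer = left.merge(right, on="key", how="outer")     # keys 1 through 4
cross = left.merge(right, how="cross")               # 3 x 3 = 9 rows

# Different key names on each side (hypothetical columns):
# left.merge(right, left_on="lkey", right_on="rkey")
```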
-
Excited to share a quick comparison of Excel, SQL, and Python (Pandas) for data tasks. Whether you're loading data, filtering rows, aggregating, or visualizing, each tool shines in its own way. • Excel: Great for quick tasks like sorting or Pivot Tables. • SQL: Perfect for querying databases and joins. • Python (Pandas): Ideal for advanced analysis and automation. #DataAnalytics #SQL #Python #Excel #Pandas #DataScience #AnalyticsTools #CareerDevelopment #DataDriven #LearningAnalytics
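As one illustration of the overlap, the pandas equivalent of a simple Pivot Table sum looks like this (the sales table is made up):

```python
import pandas as pd

sales = pd.DataFrame({
    "region": ["North", "North", "South"],
    "amount": [100, 200, 50],
})

# Pivot-Table-style total per region, pandas style
totals = sales.groupby("region")["amount"].sum()
print(totals["North"], totals["South"])  # 300 50
```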
-
From SQL to Python: if you've ever switched between SQL and Python for data analysis, you know the pain of translating queries into pandas syntax. That's why I love this quick reference guide. It shows how common SQL operations map directly to Python pandas, from filtering and grouping to joins and unions. Here are a few gems:
🔹 WHERE → df[df['column'] == 'value']
🔹 ORDER BY → df.sort_values(by='column')
🔹 JOIN → pd.merge(table1, table2, on='key')
🔹 UNION ALL → pd.concat([table1, table2])
Simple. Powerful. Pythonic. Save this post for your next data project and make switching between SQL and Python effortless.
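Those mappings can be exercised end to end on a couple of toy tables (all tables here are invented for illustration):

```python
import pandas as pd

# Made-up tables to exercise each SQL-to-pandas mapping
df = pd.DataFrame({"column": ["value", "other"], "n": [2, 1]})
table1 = pd.DataFrame({"key": [1], "a": ["x"]})
table2 = pd.DataFrame({"key": [1], "b": ["y"]})

filtered = df[df["column"] == "value"]       # WHERE column = 'value'
ordered = df.sort_values(by="n")             # ORDER BY n
joined = pd.merge(table1, table2, on="key")  # JOIN ... ON key
unioned = pd.concat([df, df])                # UNION ALL (duplicates kept)
```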
-
A few months ago, I spent hours cleaning a messy dataset... Half the time I was in SQL, the other half in Python. At one point, I actually asked myself: "Which one's better for cleaning data?" Here's what I learned. SQL is amazing for quick, large-scale cleaning. Filtering duplicates, handling NULLs, standardizing formats: it's fast and clean. Python, on the other hand, is perfect for complex stuff. When I need custom logic, pattern fixing, or automation, Pandas just does the job. So which one's better? Honestly, neither alone. The real power is when you 𝐮𝐬𝐞 𝐛𝐨𝐭𝐡. Start with SQL for structured prep. Then switch to Python for deeper transformations and automation. That combo saves hours and gives you cleaner, more reliable insights. Clean data isn't just a technical skill. It's what separates good analysts from great ones. #DataAnalytics #Python #SQL #DataCleaning #CareerGrowth
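A sketch of the pandas side of that workflow: NULL handling, standardizing formats, and dropping duplicates. The columns and cleaning rules are invented for illustration, not taken from the post's dataset.

```python
import pandas as pd

raw = pd.DataFrame({
    "email": ["A@X.COM", "a@x.com", None],
    "city": [" delhi", "delhi ", "Mumbai"],
})

clean = (
    raw.dropna(subset=["email"])                        # handle NULLs
       .assign(
           email=lambda d: d["email"].str.lower(),      # standardize case
           city=lambda d: d["city"].str.strip().str.title(),
       )
       .drop_duplicates(subset=["email"])               # remove duplicates
)
print(len(clean))  # 1 row survives
```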
-
🚀 New Project: Data Analysis in SQL using Pandas I’m excited to share my latest project where I performed data analysis using SQL-style queries within Python. For this project, I used a synthetic NHS dataset containing 100,000 records, which I cleaned earlier using Pandas to make it ready for analysis. This project is a continuation of my previous work on Exploratory Data Analysis (EDA) in Pandas — but this time, I focused more on the analytical and SQL aspects. Here’s what I did: 🔹 Used Pandas to run SQL-like queries in Python 🔹 Solved multiple real-world, scenario-based queries (like identifying trends, insights, and optimization cases) 🔹 Showcased how large datasets can be efficiently analyzed using SQL logic in Python 📺 YouTube Video: https://lnkd.in/dDYhV3_T I’ve also uploaded the complete code and dataset on my GitHub so anyone can try it out. 📂 GitHub: https://lnkd.in/dhyjBThH Always open to feedback, ideas, and collaborations! #Python #SQL #Pandas #DataAnalysis #NHSData #SyntheticData #DataScience #MachineLearning #PythonProjects #GitHub #LinkedIn #Analytics #Coding
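The dataset and columns below are hypothetical stand-ins (the real project uses a 100,000-record synthetic NHS dataset), but they show the SQL-in-pandas pattern the post describes: query() for WHERE, groupby() for GROUP BY.

```python
import pandas as pd

# Hypothetical stand-in for the NHS-style dataset in the post
visits = pd.DataFrame({
    "region": ["London", "London", "Leeds"],
    "wait_days": [10, 30, 20],
})

# SELECT region, AVG(wait_days) FROM visits
# WHERE wait_days > 5 GROUP BY region
result = (
    visits.query("wait_days > 5")
          .groupby("region", as_index=False)["wait_days"]
          .mean()
)
```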
I Used SQL in Python to Analyze Data! (Full Project Walkthrough)
-
📦 What Is Pandas? Pandas is an open-source Python library designed for data manipulation and analysis. It makes working with structured data fast, flexible, and intuitive — especially if you're dealing with CSV files, Excel sheets, SQL tables, JSON, or APIs. The two core data structures in Pandas are: Series: A 1D labeled array (like a column) DataFrame: A 2D labeled data structure (like a full spreadsheet or SQL table)
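The two core structures in two lines (example data is made up):

```python
import pandas as pd

# Series: a 1D labeled array, like a single column
scores = pd.Series([10, 20, 30], name="score")

# DataFrame: a 2D labeled table, like a spreadsheet or SQL table
people = pd.DataFrame({"name": ["Ana", "Raj"], "score": [10, 20]})

print(scores.sum())   # 60
print(people.shape)   # (2, 2)
```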
-
1️⃣ Data Acquisition using Pandas 🚀 Exploring Data Acquisition with Pandas! Under the guidance of Prof. Ashish Sawant, I explored how to import and manage datasets efficiently using Python's Pandas library. Data acquisition is the foundation of every data-driven project. I practiced reading data from various sources like CSV, Excel, JSON, and SQL, and learned to inspect data using .head(), .info(), and .describe(). Clean and structured data is the first step toward meaningful analysis. This practical gave me a clear understanding of how data flows into the analytics pipeline. For more info, you can visit: GitHub: https://lnkd.in/edWY72Hg | G Drive: https://lnkd.in/ewkPtNtH #DataScience #Pandas #Python #DataAcquisition #LearningByDoing
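The read-then-inspect workflow above, as a self-contained sketch. An in-memory CSV stands in for a real file path (e.g. pd.read_csv("data.csv")); the column names are invented.

```python
import io
import pandas as pd

# In-memory CSV stands in for a file on disk
csv_data = io.StringIO("id,value\n1,10\n2,20\n")
df = pd.read_csv(csv_data)

print(df.head())      # first rows
df.info()             # column dtypes and non-null counts
print(df.describe())  # summary statistics
```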
-
👀 Data isn’t just numbers — it’s how we tell stories that matter. Found this interesting comparison of Excel vs SQL vs Python (Pandas) 👇 It’s amazing how each tool can do similar tasks but in its own unique way. 💡 Here’s what I noticed: -- Excel → Great for quick analysis and small datasets -- SQL → Best for handling and querying structured data -- Python (Pandas) → Ideal for automation and advanced analysis No matter which tool we use, the goal is always the same — to find insights that make an impact. #DataAnalytics #Excel #SQL #Python #Pandas #DataScience #AnalyticsCommunity #LearningJourney #KeepLearning #DataDriven #Upskilling
-
What??? Modern SQL engines draw fractals faster than Python?!? Just out of curiosity, I built a tiny #Benchmark that calculates a Mandelbrot fractal in plain SQL using #DataFusion and #DuckDB – no loops, no UDFs, no procedural code. I honestly expected it to crawl. But the results are ... surprising:
- GPU (unfair, but true): 0.00077 sec
- NumPy (highly optimized): 0.623 sec (0.83x)
- 🥇 DataFusion (SQL): 0.797 sec (baseline)
- 🥈 DuckDB (SQL): 1.364 sec (~2x slower)
- Python (very basic): 4.428 sec (~5x slower)
- 🥉 SQLite (in-memory): 44.918 sec (~56x slower)
Turns out, modern SQL engines can draw fractals faster than Python, and fractals are actually a fun way to benchmark recursion and query optimizers. Finally, a great exercise to improve your SQL skills too. Try it yourself (GitHub repo): https://lnkd.in/eUR7ryWw Any volunteers to prove DataFusion isn't the fastest fractal SQL artist in town? Thanks to Ulrich Ludmann for the DataFusion implementation and Jakub Jirak for the lightning-fast Mac GPU implementation as the ultimate reference. And now you also know what #BARC analysts do at night 😉
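For reference, the "Python (very basic)" contender boils down to an escape-time loop like this. This is my own minimal sketch, not the benchmark's code; the iteration cap is an arbitrary choice.

```python
# Deliberately basic Mandelbrot membership check: iterate z = z^2 + c
# and count iterations until |z| exceeds 2 (escape) or the cap is hit.
def mandelbrot_iters(c, max_iter=50):
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return i       # escaped: point is outside the set
    return max_iter        # never escaped: assumed inside the set

print(mandelbrot_iters(0 + 0j))   # 50 (the origin is in the set)
print(mandelbrot_iters(2 + 2j))   # 0 (escapes immediately)
```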