Day 15 of My Python Full Stack Journey 🚀

Today's focus was on SQL special operators: IN, LIKE, and BETWEEN. These operators make data retrieval much more flexible:
- IN filters results against a specific list of values.
- LIKE is great for pattern-based searching.
- BETWEEN simplifies range-based queries.

It's interesting how such small keywords can make SQL queries more powerful and readable. Step by step, I'm getting more comfortable handling data with precision.

#SQL #Database #PythonFullStack #LearningJourney #CodeEveryday #WebDevelopment
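All three operators are easy to try from Python with the built-in sqlite3 module. A minimal sketch, with a products table and values invented purely for illustration:

```python
import sqlite3

# In-memory database with a toy products table (illustrative data only)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, category TEXT, price REAL)")
conn.executemany(
    "INSERT INTO products VALUES (?, ?, ?)",
    [("Laptop", "Electronics", 900),
     ("Banana", "Food", 1),
     ("Lamp", "Home", 35),
     ("Phone", "Electronics", 600)],
)

# IN: filter against a specific list of values
rows = conn.execute(
    "SELECT name FROM products WHERE category IN ('Electronics', 'Home')"
).fetchall()
print([r[0] for r in rows])   # ['Laptop', 'Lamp', 'Phone']

# LIKE: pattern-based search (names starting with 'La')
rows = conn.execute(
    "SELECT name FROM products WHERE name LIKE 'La%'"
).fetchall()
print([r[0] for r in rows])   # ['Laptop', 'Lamp']

# BETWEEN: inclusive range query on price
rows = conn.execute(
    "SELECT name FROM products WHERE price BETWEEN 30 AND 700"
).fetchall()
print([r[0] for r in rows])   # ['Lamp', 'Phone']
```

The same three queries run unchanged against any SQL database; sqlite3 just makes them convenient to experiment with.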
Mastering SQL operators for data retrieval
More Relevant Posts
Today I revised some of my SQL concepts and practiced a few Python loops to strengthen my logic-building skills. Here's a quick glimpse:

SQL Practice:
- Created a view (vw_Category_Profit) using a CTE + subquery to calculate total revenue and total cost per category.
- Built another query using two CTEs to calculate total profit and extract the top 3 categories by profit.
It's amazing how much clarity comes when you connect concepts like CTEs, views, and joins!

Python Practice:
- Nested for loops to print pattern combinations
- Looping through two lists (colors and sizes)
- A while loop and a limited-attempt for loop to build simple user input validation logic

Each day I try to connect what I already know with what I'm learning. SQL builds structure; Python builds logic. Together, they're the backbone of data analytics.

#SQL #Python #DataAnalytics #LearningJourney #ContinuousLearning #CareerGrowth #LinkedInLearning #PracticeMakesPerfect #CTE #View #Loops
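For anyone who wants to replay the SQL side, here's a minimal sketch of the same pattern (a view defined on a CTE, then two chained CTEs for profit and top 3) using Python's sqlite3. The sales schema and the data are invented stand-ins, not the original dataset:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (category TEXT, revenue REAL, cost REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("Toys", 100, 60), ("Toys", 50, 20),
    ("Books", 300, 100), ("Food", 80, 70), ("Tools", 500, 250),
])

# A view built on a CTE: total revenue and total cost per category
conn.execute("""
CREATE VIEW vw_Category_Profit AS
WITH totals AS (
    SELECT category,
           SUM(revenue) AS total_revenue,
           SUM(cost)    AS total_cost
    FROM sales
    GROUP BY category
)
SELECT * FROM totals
""")

# Two chained CTEs: compute profit per category, then take the top 3
top3 = conn.execute("""
WITH profits AS (
    SELECT category, total_revenue - total_cost AS profit
    FROM vw_Category_Profit
),
ranked AS (
    SELECT category, profit FROM profits ORDER BY profit DESC LIMIT 3
)
SELECT category, profit FROM ranked ORDER BY profit DESC
""").fetchall()
print(top3)   # [('Tools', 250.0), ('Books', 200.0), ('Toys', 70.0)]
```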
What??? Modern SQL engines draw fractals faster than Python?!?

Just out of curiosity, I built a tiny #Benchmark that calculates a Mandelbrot fractal in plain SQL using #DataFusion and #DuckDB: no loops, no UDFs, no procedural code. I honestly expected it to crawl. But the results are surprising:

- GPU (unfair, but true): 0.00077 sec (close to 0x)
- NumPy (highly optimized): 0.623 sec (0.83x)
- 🥇 DataFusion (SQL): 0.797 sec (baseline)
- 🥈 DuckDB (SQL): 1.364 sec (±2x slower)
- Python (very basic): 4.428 sec (±5x slower)
- 🥉 SQLite (in-memory): 44.918 sec (±56x slower)

Turns out, modern SQL engines can draw fractals faster than Python, and fractals are actually a fun way to benchmark recursion and query optimizers. It's also a great exercise to improve your SQL skills.

Try it yourself (GitHub repo): https://lnkd.in/eUR7ryWw

Any volunteers to prove DataFusion isn't the fastest fractal SQL artist in town?

Thanks to Ulrich Ludmann for the DataFusion implementation and Jakub Jirak for the lightning-fast Mac GPU implementation as the ultimate reference. And now you also know what #BARC analysts do at night 😉
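For context on what those queries compute: the Mandelbrot set is the escape-time recurrence z → z² + c iterated per pixel, which the SQL versions express with recursive CTEs. Here's a plain-Python sketch of the core iteration (an illustration of the math, not the benchmark's actual code):

```python
def mandelbrot_iters(cx, cy, max_iter=100):
    """Escape-time iteration count for the point c = (cx, cy).

    Returns how many iterations of z -> z^2 + c it takes for |z| to
    exceed 2 (the point 'escapes'); returns max_iter if it never does
    (the point is treated as inside the set).
    """
    zx = zy = 0.0
    for i in range(max_iter):
        # (zx + i*zy)^2 + (cx + i*cy), expanded into real arithmetic
        zx, zy = zx * zx - zy * zy + cx, 2 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:   # |z| > 2: guaranteed to diverge
            return i
    return max_iter

print(mandelbrot_iters(0.0, 0.0))   # 100 (origin is inside the set)
print(mandelbrot_iters(2.0, 2.0))   # 0   (escapes on the first check)
```

A recursive CTE encodes exactly this loop: each recursion step advances (zx, zy) once per pixel, and the engine's join/recursion machinery does the rest, which is why it stresses the query optimizer so nicely.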
Speed up your Frappe queries with frappe.get_cached_value()

If you're repeatedly fetching the same field value from the database, don't use frappe.db.get_value() every time; it hits the database on each call. Instead, use frappe.get_cached_value(). It stores the result in memory (cache) and returns it faster on subsequent requests.

When to use it:
- Fetching non-changing fields like settings, configs, and defaults
- You want better performance with fewer DB calls
- Accessing single fields from large DocTypes

When NOT to use it:
- When the data changes frequently
- When you need fresh DB values every time
- When fetching multiple rows (use frappe.get_all() or frappe.get_list() instead)

Use caching smartly: small optimizations add up in big Frappe apps 💪

#Frappe #ERPNext #Backend #Performance #OpenSource #Python
Rushabh Mehta, Hussain Nagaria, Ejaaz Khan, Aditya Hase, Sherin K R, Ritvik Sardana, Manish Dipankar, Frappe, efeone
👀 Data isn't just numbers; it's how we tell stories that matter.

Found this interesting comparison of Excel vs SQL vs Python (Pandas) 👇 It's amazing how each tool can do similar tasks, each in its own way. 💡 Here's what I noticed:
- Excel → great for quick analysis and small datasets
- SQL → best for handling and querying structured data
- Python (Pandas) → ideal for automation and advanced analysis

No matter which tool we use, the goal is always the same: to find insights that make an impact.

#DataAnalytics #Excel #SQL #Python #Pandas #DataScience #AnalyticsCommunity #LearningJourney #KeepLearning #DataDriven #Upskilling
🚀 DuckDB vs Polars vs Pandas

Python's data ecosystem is evolving fast, and three tools are reshaping how we work with analytics: Pandas, Polars, and DuckDB. The real power is in knowing when to use which.

🐼 Pandas is the comfort food of data analysis: everyone knows it, everyone uses it, and it remains the most widely adopted library for data manipulation. But Pandas is single-threaded, so it struggles on large datasets.

⚡ Polars is built in Rust. It's blazingly fast, uses all your CPU cores, offers syntax similar to Pandas, and scales far better.

🦆 DuckDB is an embedded OLAP database that excels at complex queries, joins, and aggregations, all without setting up a server. It integrates naturally with Pandas, Polars, and even Parquet files on disk.

✅ So:
- Simplicity and small datasets → Pandas
- Performance, scalability, and medium/large datasets → Polars
- SQL analytics and query optimization → DuckDB
A few months ago, I spent hours cleaning a messy dataset. Half the time I was in SQL, the other half in Python. At one point, I actually asked myself: "Which one's better for cleaning data?"

Here's what I learned.

SQL is amazing for quick, large-scale cleaning. Filtering duplicates, handling NULLs, standardizing formats: it's fast and clean.

Python, on the other hand, is perfect for complex stuff. When I need custom logic, pattern fixing, or automation, Pandas just does the job.

So which one's better? Honestly, neither alone. The real power is when you 𝐮𝐬𝐞 𝐛𝐨𝐭𝐡. Start with SQL for structured prep, then switch to Python for deeper transformations and automation. That combo saves hours and gives you cleaner, more reliable insights.

Clean data isn't just a technical skill. It's what separates good analysts from great ones.

#DataAnalytics #Python #SQL #DataCleaning #CareerGrowth
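A small sketch of that SQL-then-Python workflow, using the stdlib sqlite3 module (the table and values are invented for illustration): SQL handles deduplication, NULL filling, and case normalization in one pass; Python then applies the custom logic afterwards.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw (email TEXT, city TEXT)")
conn.executemany("INSERT INTO raw VALUES (?, ?)", [
    ("A@X.COM", "delhi"),     # duplicate of the next row, different case
    ("a@x.com", "delhi"),
    ("b@y.com", None),        # missing city
])

# Step 1: SQL for the structured prep:
# drop duplicates, fill NULLs, standardize case
rows = conn.execute("""
    SELECT DISTINCT LOWER(email) AS email,
           COALESCE(city, 'unknown') AS city
    FROM raw
    ORDER BY email
""").fetchall()

# Step 2: Python for custom logic that's awkward in pure SQL
def title_city(city):
    return city.title() if city != "unknown" else city

cleaned = [(email, title_city(city)) for email, city in rows]
print(cleaned)   # [('a@x.com', 'Delhi'), ('b@y.com', 'unknown')]
```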
From SQL to Python: if you've ever switched between SQL and Python for data analysis, you know the pain of translating queries into pandas syntax. That's why I love this quick reference guide. It shows how common SQL operations map directly to pandas, from filtering and grouping to joins and unions.

Here are a few gems:
🔹 WHERE → df[df['column'] == 'value']
🔹 ORDER BY → df.sort_values(by='column')
🔹 JOIN → pd.merge(table1, table2, on='key')
🔹 UNION ALL → pd.concat([table1, table2])

Simple. Powerful. Pythonic.

Save this post for your next data project and make switching between SQL and Python effortless.
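The mappings are easy to verify in a few lines of pandas (column names and data here are made up for illustration):

```python
import pandas as pd

orders = pd.DataFrame({"order_id": [1, 2, 3],
                       "customer": ["Ana", "Ben", "Ana"]})
more_orders = pd.DataFrame({"order_id": [4], "customer": ["Cid"]})
customers = pd.DataFrame({"customer": ["Ana", "Ben"],
                          "city": ["Pune", "Goa"]})

# WHERE customer = 'Ana'
ana = orders[orders["customer"] == "Ana"]

# ORDER BY order_id DESC
newest_first = orders.sort_values(by="order_id", ascending=False)

# JOIN orders with customers ON customer (inner join by default)
joined = pd.merge(orders, customers, on="customer")

# UNION ALL of two tables with the same columns
all_orders = pd.concat([orders, more_orders], ignore_index=True)

print(len(ana), len(joined), len(all_orders))   # 2 3 4
```

One difference worth remembering: pd.concat is UNION ALL (it keeps duplicates); plain SQL UNION would need an extra .drop_duplicates().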
🚀 My Journey from Excel VBA to Python, and Why I Still Respect VBA

Not long ago, I decided to convert one of my old Excel VBA automations into Python. The task sounded simple: just extract the "Account" column from a batch of CSV files. But as I started running the script, things didn't go as smoothly as I expected.

Some files had extra spaces in the headers. Some used different delimiters. And before I knew it, Python kept throwing "No Account column found."

At that moment, I couldn't help but smile, because if I had done this in VBA, it would have worked instantly inside Excel: no setup, no debugging. That's when I realized something important: VBA might be old, but it's still incredibly practical.

✅ It's built right inside Excel: no installation, no dependencies.
✅ It's perfect for quick automations and data clean-ups.
✅ Most importantly, many finance and business users can understand it without needing to learn a new language.

Don't get me wrong: I love Python. It's more powerful, more scalable, and can handle massive data far beyond Excel's limits. But VBA still shines when you need something immediate and integrated.

So instead of asking "VBA or Python?", maybe the better question is: "Which one helps me solve this problem faster, today?"

Sometimes the best solution isn't the newest one; it's the one that simply gets the job done. ⚙️
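For what it's worth, both Python pain points mentioned here (padded headers and varying delimiters) can be absorbed with the stdlib csv module. A hedged sketch, not the original script; the sample data is invented:

```python
import csv
import io

def extract_account_column(csv_text):
    """Pull the 'Account' column from CSV text, tolerating messy headers
    (extra spaces, different case) and unknown delimiters."""
    # Guess the delimiter from the header line instead of assuming a comma
    dialect = csv.Sniffer().sniff(csv_text.splitlines()[0], delimiters=",;\t|")
    reader = csv.reader(io.StringIO(csv_text), dialect)
    header = next(reader)
    # Normalize headers: strip whitespace, compare case-insensitively
    normalized = [h.strip().lower() for h in header]
    if "account" not in normalized:
        raise ValueError("No Account column found")
    idx = normalized.index("account")
    return [row[idx] for row in reader]

# One file uses ';' and a space-padded header; both quirks are handled
messy = "Name; Account ;Amount\nAlice;ACC-1;10\nBob;ACC-2;20\n"
print(extract_account_column(messy))   # ['ACC-1', 'ACC-2']
```

Which is the point of the post, really: Python can handle the mess, but you have to write the defensive code yourself, whereas Excel/VBA gets it "for free" inside the workbook.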
In SQL, we say:
SELECT * FROM Books WHERE Price > 500;

And Python calmly replies:
df[df['Price'] > 500]

Both are asking for the same thing, just speaking different dialects of data. That's when it hit me: SQL and Python aren't competitors, they're teammates.

SQL is like the librarian: super organized, methodical, and precise. Python? That's the storyteller: it takes what SQL finds and weaves it into insights, visuals, and models.

When I first learned Python, I kept comparing everything to SQL:
- SQL's WHERE became Python's [ ] filter.
- SQL's GROUP BY became df.groupby().
- SQL's COUNT(*) turned into len(df) or df.shape[0].
- SQL's ORDER BY looked like df.sort_values().

And once those dots connected, it suddenly stopped feeling difficult. I stopped memorizing and started translating. Because at its core, both SQL and Python are just ways of asking questions of data: one in a structured tone, the other in a conversational one.

♻️ Repost if you still double-check whether it's ORDER BY or sort_values() 😁

#linkedinforcreators #linkedincreators
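Here are those translations side by side in runnable pandas (the Books data is a toy table, invented for illustration):

```python
import pandas as pd

books = pd.DataFrame({
    "Title": ["A", "B", "C", "D"],
    "Genre": ["Tech", "Tech", "Fiction", "Fiction"],
    "Price": [450, 700, 550, 300],
})

# WHERE Price > 500
pricey = books[books["Price"] > 500]

# GROUP BY Genre (average price per genre)
avg_by_genre = books.groupby("Genre")["Price"].mean()

# COUNT(*)
n_books = len(books)            # or books.shape[0]

# ORDER BY Price
by_price = books.sort_values("Price")

print(len(pricey), n_books, by_price.iloc[0]["Title"])   # 2 4 D
```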
Browser Activity Tracker: Exploring Web Data with Python 🚀

I'm excited to share my latest project: a Browser Activity Tracker that analyses and visualises daily web usage patterns using Python.

What it does:
- Extracts data from local browser history
- Processes and stores activity data in SQLite
- Generates visual reports in Jupyter Notebook using Pandas and Matplotlib
- Highlights trends such as top-visited domains, visit frequency, and time spent online

⚙️ Tech Stack: Python · SQLite · Pandas · Matplotlib · Jupyter Notebook

This project helped me explore data extraction, automation, and quantitative analysis, showing how everyday digital activity can be transformed into meaningful insights.

I'd love to hear feedback from others who've worked on similar data analytics or automation projects. What would you add or improve?

🔗 https://lnkd.in/eiwX3JpJ

#DataAnalytics #Python #SQL #DataVisualization #Automation #PortfolioProject #LearningByDoing #DataScience
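As a hedged sketch of the "top-visited domains" step: real browser-history schemas vary by browser and this isn't the project's actual code, so the example below uses an invented visits(url) table in an in-memory SQLite database:

```python
import sqlite3
from collections import Counter
from urllib.parse import urlparse

# Toy stand-in for a browser-history database ('visits' is a made-up table)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (url TEXT)")
conn.executemany("INSERT INTO visits VALUES (?)", [
    ("https://github.com/user/repo",),
    ("https://github.com/explore",),
    ("https://docs.python.org/3/",),
])

# Reduce each visited URL to its domain, then count visits per domain
domains = Counter(
    urlparse(url).netloc for (url,) in conn.execute("SELECT url FROM visits")
)
print(domains.most_common(2))   # [('github.com', 2), ('docs.python.org', 1)]
```

From here, Counter output drops straight into a pandas DataFrame for the Matplotlib reports the post describes.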