🚀 Day 12 – Python Sets (Multiple-Valued Data Type)

Today I explored Sets in Python, one of the powerful multiple-valued data types!

🔹 What is a Set?
A Set is a collection of multiple items that is:
✅ Unordered
❌ No indexing
✅ Mutable
❌ No duplicate values

📌 Sets are written using curly brackets {}:

my_set = {10, 20, 30, 40}
print(my_set)

🔹 Important Characteristics

👉 Duplicates are removed automatically:
numbers = {1, 2, 2, 3, 4}
print(numbers)  # Output: {1, 2, 3, 4}

👉 Elements cannot be accessed by index:
my_set[0]  # ❌ TypeError: 'set' object is not subscriptable

👉 We can add and remove elements:
fruits = {"apple", "banana"}
fruits.add("mango")
fruits.remove("banana")

🔹 Set Operations (Mathematical Power 💪)

A = {1, 2, 3}
B = {3, 4, 5}

✔️ Union → all elements from both sets: A.union(B)  # {1, 2, 3, 4, 5}
✔️ Intersection → common elements: A.intersection(B)  # {3}
✔️ Difference → elements of A that are not in B: A.difference(B)  # {1, 2}

🔹 When Should We Use Sets?
✨ Removing duplicate data
✨ Performing mathematical set operations
✨ Fast searching (membership testing)

🌟 Day 12 complete! Learning step by step, building strong Python fundamentals.

#Python #Day12 #LearningPython #DataTypes #Sets #ProgrammingJourney 💻
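A quick sketch of the membership-testing point from the list above: a set hashes its elements, so a lookup does not have to scan the whole collection the way a list does. The sizes here are arbitrary, just for illustration.

```python
# Sets hash their elements, so "x in my_set" is O(1) on average;
# "x in my_list" scans the whole list (O(n)).
big_list = list(range(100_000))
big_set = set(big_list)

print(99_999 in big_set)   # True (hash lookup)
print(99_999 in big_list)  # True (linear scan, much slower at scale)

# Building a set also removes duplicates automatically:
print(set([1, 2, 2, 3, 4]))  # {1, 2, 3, 4}
```

This is exactly why de-duplication and membership checks are the classic set use cases.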
Day 29 - Libraries in Python

Python libraries are collections of pre-written code that help programmers perform common tasks such as data analysis, visualization, machine learning, and mathematical calculations more efficiently.

Why libraries are used:
- Save time
- Avoid writing complex code from scratch
- Perform tasks like data analysis, visualization, and machine learning

Example: instead of writing a long program to analyze data, you can use a library.

1) NumPy - used for numerical computation in Python. It helps you work with arrays, mathematical operations, and large numerical datasets efficiently.

import numpy as np
numbers = np.array([10, 20, 30, 40])
print(numbers.mean())  # 25.0

2) Pandas - used for data analysis and data manipulation. It lets you work with datasets using structures like DataFrames and Series.

import pandas as pd
data = {"Name": ["John", "Anna", "Mike"], "Age": [23, 25, 22]}
df = pd.DataFrame(data)
print(df)

3) Matplotlib - used for creating charts and graphs to visualize data.

import matplotlib.pyplot as plt
x = [1, 2, 3]
y = [10, 20, 30]
plt.plot(x, y)
plt.show()

4) Seaborn - built on top of Matplotlib; used to create more attractive, statistics-oriented graphs.

import seaborn as sns
import matplotlib.pyplot as plt
sns.barplot(x=[1, 2, 3], y=[10, 20, 15])
plt.show()

5) Scikit-learn - used for machine learning and predictive analysis.

#30daysofchallenge #python #libraries #analysis #data
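The scikit-learn entry above has no snippet, so here is a minimal sketch of what it looks like in practice: fitting a linear model to made-up numbers. The data and the "hours vs. score" framing are my illustration, not from the original post.

```python
# Minimal scikit-learn sketch: fit a linear model on toy data.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1], [2], [3], [4]])   # feature: e.g. hours studied (made up)
y = np.array([10, 20, 30, 40])       # target: e.g. test score (made up)

model = LinearRegression()
model.fit(X, y)                      # learns y ≈ 10 * x on this toy data
print(model.predict([[5]]))          # predicts approximately 50
```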
Welcome to Part 7 of the Data Analyst Roadmap series: Python Foundations — Start "Thinking in Data"

The biggest mistake beginners make when starting Python for data analysis? Rushing straight into Pandas, fancy charts, and machine learning models without understanding the bedrock fundamentals. If you don't understand how Python thinks about data under the hood, your analysis will break.

Today's visual is Stage 1: the core building blocks before the cool stuff.

Why does this matter so much? Look at the top left of the image.
🔹 In Python, 100 + 20 = 120. (Math!)
🔹 But "100" + "20" = "10020". (Text concatenation!)

If you don't understand data types, a large share of your beginner errors will come from simple mistakes just like that. You must control Python, not let it guess what you want.

Here is your checklist for "Thinking in Data":
- Containers: Are you storing ordered data (use a List, like a to-do list), or do you need fast, labelled lookups (use a Dictionary, like a contact book)?
- Repetition: Are you manually doing the same thing 10 times? Use a Loop.
- Reusable logic: Are you copy-pasting the same analysis code? Wrap it in a Function so you can use it anywhere.

Mastering these basics means you understand memory and control flow.

Which of these fundamental concepts took you the longest to truly grasp when starting out? Loops always trip people up at first! Let me know below.

#PythonForDataScience #DataAnalytics #LearningToCode #Pandas #TechSkills #DataCareers #CodingNewbie #Roadmap
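The type pitfall and the checklist above, condensed into a few runnable lines. The example values and the contact address are made up for illustration.

```python
# Numbers add; strings concatenate -- the "100" + "20" pitfall.
print(100 + 20)      # 120
print("100" + "20")  # 10020

# Ordered data -> list; fast labelled lookups -> dictionary.
todo = ["load data", "clean data", "plot results"]
contacts = {"Anna": "anna@example.com"}  # made-up address
print(contacts["Anna"])

# Repetition -> loop; reusable logic -> function.
def doubled(values):
    return [v * 2 for v in values]

print(doubled([1, 2, 3]))  # [2, 4, 6]
```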
I stopped using Python loops for array operations. Here's why.

I'll be honest: I used to be a "loop person." When I first started working with large datasets, writing a Python loop just felt natural. It was easy to read and easy to write. But as my data grew, my performance tanked. I finally got tired of waiting for my code to finish and decided to time it.

One single switch from a standard loop to a NumPy vectorized operation changed everything. The result? On my real dataset, processing time dropped from 12 seconds to 0.3 seconds. That is a 40x speedup from changing just one line of code.

Here is a smaller benchmark showing the same effect:

import time
import numpy as np

data = list(range(1_000_000))

# The slow way (pure-Python loop)
start = time.time()
result = [x**2 for x in data]
print(f"Loop: {time.time() - start:.2f}s")   # ~0.40s on my machine

# The fast way (NumPy vectorization)
arr = np.array(data)
start = time.time()
result = arr**2
print(f"NumPy: {time.time() - start:.4f}s")  # ~0.003s on my machine

So why is NumPy so much faster? It boils down to three things:
1. It runs on compiled C code (bypassing the slow Python interpreter).
2. It uses contiguous memory (the CPU can fetch data much faster).
3. It skips the "interpreter tax" on every single element of your array.

I tell my students this all the time now: if you are looping over numbers, you are probably leaving performance on the table. In ML tasks like feature scaling or distance calculations, this isn't just a nice-to-have; it's a requirement.

New habit: before you write "for x in ...", ask yourself if NumPy can do it in one line. Your future self (and your CPU) will thank you.

What's the biggest performance win you've found recently? I'd love to hear about it in the comments!

#Python #NumPy #DataScience #MachineLearning #PerformanceOptimization
Most people try to learn everything in Python… and end up learning nothing.

If someone asked me how to start Data Analytics with Python in 7 days, I'd focus on just 7 things. Nothing extra. No overwhelm. Just the essentials.

Day 1 – Basics that matter
Learn print(), variables, and lists. Do a small calculation with data so you understand how Python works.

Day 2 – Explore data
Use df.head() and df.describe() to open and understand any CSV file.

Day 3 – Clean messy data
Learn dropna() and fillna() to handle missing values.

Day 4 – Real business analysis
Use groupby() to answer questions like: "Which region generates the most sales?"

Day 5 – Quick insights
Use query() and nlargest() to filter data and find top results instantly.

Day 6 – Build a mini project
Complete the workflow: Load → Clean → Analyze → Export insights.

Day 7 – Show your work
Upload the project to GitHub and share it on LinkedIn.

That's it. You now have a portfolio project, practical Python experience, and proof that you can analyze real data.

Simple > complicated.

#DataAnalytics #Python #LearningInPublic #DataScience #SQL #CareerGrowth
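Days 2–5 of the plan above fit in a few lines of pandas. Here is a compact sketch on a toy table; the column names ("region", "sales") and the numbers are my assumptions for illustration, not from the post.

```python
# Toy end-to-end sketch: clean -> group -> rank -> filter.
import pandas as pd

df = pd.DataFrame({
    "region": ["North", "South", "North", "East", None],
    "sales":  [100, 250, 150, 300, 50],
})

df = df.dropna()                                  # Day 3: drop rows with missing values
by_region = df.groupby("region")["sales"].sum()   # Day 4: total sales per region
print(by_region.nlargest(1))                      # Day 5: top region

big_sales = df.query("sales > 100")               # Day 5: quick filtering
print(big_sales)
```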
🎓 Just completed "Hypothesis Testing in Python" on DataCamp!

A solid course for anyone who wants to go beyond knowing the theory and actually implement statistical tests in Python. Here's a quick breakdown.

📚 What you'll learn across 4 chapters:

1️⃣ Hypothesis Testing Fundamentals
The core workflow: one-sample proportion tests, z-scores, p-values, and false positive/negative errors. The essential foundation.

2️⃣ Two-Sample & ANOVA Tests
T-tests for two groups, then ANOVA for three or more groups, which is crucial for avoiding the Type I error inflation that comes from running too many t-tests.

3️⃣ Proportion Tests & Chi-Square
Testing categorical data with chi-square independence and goodness-of-fit tests. Very practical for real-world survey and behavioral data.

4️⃣ Non-Parametric Tests ← my personal highlight 💡
For when your data violates normality assumptions: Mann-Whitney U, Wilcoxon, Kruskal-Wallis. Often overlooked, but incredibly useful in practice.

🐍 What makes it stand out: hands-on Python with real-world datasets. Theory meets code, which is exactly how it should be taught.

📎 I put together a free PDF revision sheet with all the key code examples, plus a cheat sheet for choosing the right test. Drop a comment if you'd like it!

📌 Recommended for: data analysts, economists, finance professionals, and anyone making data-driven decisions.

#DataScience #Python #Statistics #HypothesisTesting #DataCamp #ContinuousLearning
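To make the chapter list concrete, here is a hedged sketch of two of the tests mentioned above, a two-sample t-test and its non-parametric cousin, the Mann-Whitney U, using scipy.stats on made-up samples rather than the course's datasets.

```python
# Two tests from the course outline, run on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=50, scale=5, size=30)  # e.g. control group (made up)
group_b = rng.normal(loc=55, scale=5, size=30)  # e.g. treatment group (made up)

# Chapter 2: two-sample t-test compares the means of two groups.
t_stat, p_t = stats.ttest_ind(group_a, group_b)

# Chapter 4: Mann-Whitney U is the non-parametric alternative
# when the normality assumption is doubtful.
u_stat, p_u = stats.mannwhitneyu(group_a, group_b)

print(f"t-test p = {p_t:.4f}, Mann-Whitney p = {p_u:.4f}")
```

Both calls return a statistic and a p-value; comparing the p-value to your significance level (commonly 0.05) drives the reject / fail-to-reject decision.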
🚀 Day 10/70 – Introduction to NumPy (Entering Real Analytics)

Today I started learning NumPy 📊

NumPy (Numerical Python) is a powerful library used for numerical computation in Python. It is faster and more efficient than normal Python lists for mathematical operations.

📌 Why is NumPy important in data analytics?
✔ Handles large datasets efficiently
✔ Supports multi-dimensional arrays
✔ Performs fast mathematical operations
✔ Foundation for Pandas & machine learning

📌 Installing NumPy
pip install numpy

📌 Creating a NumPy array
import numpy as np
arr = np.array([10, 20, 30, 40])
print(arr)

📌 Basic operations
print(arr + 5)       # add 5 to each element
print(arr * 2)       # multiply each element by 2
print(np.mean(arr))  # average

👉 NumPy automatically applies operations to all elements (vectorization).

📊 Why is this powerful?
In plain Python:
numbers = [10, 20, 30, 40]
new_list = []
for num in numbers:
    new_list.append(num * 2)

With NumPy:
arr = np.array([10, 20, 30, 40])
print(arr * 2)

Cleaner + faster 🔥

#Day10 #NumPy #Python #DataAnalytics #LearningInPublic #FutureDataAnalyst #70DaysChallenge
🚀 Day 7/100: Python Lists & The Magic of List Comprehension

Data Engineering is all about handling collections of information. In Python, we do that with Lists. Today, I explored how to create, modify, and filter lists efficiently.

1️⃣ The Basics of Lists
Lists are "containers" that hold multiple items. They are versatile because they allow duplicates and can hold different types of data (heterogeneous).

Key operations I practiced:
- Creation: l = []
- Adding data: append() vs extend()
- Modifying: insert(), remove(), and pop()
- Slicing: grabbing specific chunks of data from the list

2️⃣ List Comprehension (The Data Engineer's Shortcut)
This was the highlight of Day 7. A list comprehension creates a new list by applying logic to an existing one in just one line of code. It's cleaner, faster, and very "Pythonic."

The evolution of my code:

The "long" way (using a for loop):
movies = ["singam", "sachin", "petta", "boys", "veeram", "vikram"]
newmovies = []
for item in movies:
    if item != "boys":
        newmovies.append(item)

The "smart" way (list comprehension):
# Does the same thing in one line!
newmovies = [item for item in movies if item != "boys"]

More examples from today's lab:
- Creating a range: [num for num in range(11)] ➡️ [0, 1, ..., 10]
- Filtering even numbers: [num for num in range(11) if num % 2 == 0] ➡️ [0, 2, 4, 6, 8, 10]
- Searching within strings: [item for item in movies if "a" in item]

Why this matters for DE: when we process millions of rows, writing efficient, readable code like list comprehensions makes our data pipelines much easier to maintain.

#100DaysOfCode #DataEngineering #Python #ListComprehension #CleanCode #LearningInPublic
🚀 Day 5 – Python for Data Analytics

Today I stepped deeper into the world of data with Python, and I realized one thing: if Excel is the foundation, Python is the superpower. 💻⚡

🔹 Why is Python important in Data Analytics?
✔ Easy to learn and versatile
✔ Handles large datasets efficiently
✔ Automates repetitive tasks
✔ Widely used in industry

And the real power comes from its libraries 👇

📊 Pandas – makes data cleaning and manipulation simple (filtering, grouping, and transforming data easily).
🔢 NumPy – performs fast numerical computations; essential for calculations and mathematical operations.
📈 Matplotlib – helps turn data into visual stories using charts and graphs.

The more I learn Python, the more I understand: data analytics is not just about analyzing data… it's about solving real-world problems efficiently.

Consistency > Motivation. Day by day, skill by skill. 🚀

💬 What was your first Python project?

Tajwar Khan Ethical Learner Dr. Nitesh Saxena Dr. Rajeev Singh Bhandari

#Day5 #Python #DataAnalytics #Pandas #NumPy #Matplotlib #LearningJourney #DataScience
🚀 Day 7/70 – Loops in Python (for & while)

Today I learned about loops in Python 🐍 Loops help us repeat tasks automatically. In data analytics, loops are used to process large datasets.

📌 1️⃣ For loop
Used to iterate over a sequence (like a list).

numbers = [10, 20, 30, 40]
for num in numbers:
    print(num)

👉 This prints each value one by one.

📌 2️⃣ Using range()

for i in range(5):
    print(i)

Output: 0 1 2 3 4

📌 3️⃣ While loop
Repeats until a condition becomes False.

count = 1
while count <= 5:
    print(count)
    count += 1

📊 Data analytics example

marks = [70, 80, 90, 60]
total = 0
for mark in marks:
    total += mark
average = total / len(marks)
print("Average:", average)  # Average: 75.0

This is basic data-calculation logic 🔥

💡 Why are loops important?
✔ Processing large datasets
✔ Automating repetitive tasks
✔ Applying conditions to multiple records
✔ Foundation for Pandas operations

#Day7 #Python #DataAnalytics #LearningInPublic #FutureDataAnalyst