📊 𝗖𝗵𝗲𝗰𝗸 𝗠𝗶𝘀𝘀𝗶𝗻𝗴 𝗩𝗮𝗹𝘂𝗲𝘀 𝗶𝗻 𝗗𝗮𝘁𝗮𝘀𝗲𝘁

Before building any ML model, always check for missing values ❗ Ignoring them can lead to poor results 😬

🔍 ➤ 1) Check total missing values (count)
df.isna().sum()
➡️ Shows missing count per column 📊

📉 ➤ 2) Missing values percentage (in %)
(df.isna().sum() / len(df)) * 100
➡️ Helps decide whether to drop 🗑️ or fill (imputation) 🧩

📊 𝗩𝗶𝘀𝘂𝗮𝗹𝗶𝘇𝗲 𝗠𝗶𝘀𝘀𝗶𝗻𝗴 𝗩𝗮𝗹𝘂𝗲𝘀

📌 ➤ 1) Bar chart
df.isna().sum().plot(kind='bar', figsize=(15, 4))

🔥 ➤ 2) Heatmap
import seaborn as sns
import matplotlib.pyplot as plt

plt.figure(figsize=(12, 6))
sns.heatmap(df.isna(), cbar=False)
plt.title("Missing Value Heatmap")
plt.show()

🎨 Dark color (almost black / blue) → value is NOT missing ✅ (data is present)
⚪ Light / white color → value IS missing ❌ (NaN)

📑 𝗦𝘂𝗺𝗺𝗮𝗿𝘆 𝗧𝗮𝗯𝗹𝗲 (clean report)
missing_report = pd.DataFrame({
    "missing_count": df.isna().sum(),
    "missing_pct": df.isna().mean() * 100
}).sort_values(by="missing_pct", ascending=False)
missing_report

🚀 Clean Data = Better Models 💯 Always handle missing values before training!

#DataScience #MachineLearning #Python #DataAnalysis #GitHub #LearningJourney
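To act on that report, one common pattern is to pick a threshold on the missing percentage, drop columns above it, and impute the rest. A minimal sketch, assuming a DataFrame named df and an illustrative 40% cutoff (both the threshold and the median imputation are choices for this example, not rules):

```python
import pandas as pd

THRESHOLD = 40  # illustrative cutoff, in percent

missing_pct = df.isna().mean() * 100

# Drop columns that exceed the missing-value threshold
df = df.drop(columns=missing_pct[missing_pct > THRESHOLD].index)

# Simple imputation for what remains: median for numeric columns
numeric_cols = df.select_dtypes(include="number").columns
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].median())
```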
Missing Value Analysis in Data Science with Python
More Relevant Posts
📊 Just wrapped up my Mastering Pandas series — a 4-part deep dive into the library every data professional relies on. If you're learning pandas or want a solid reference to come back to, this series covers the full workflow from raw data to insights:

🔹 Part 1 — Reading, Sorting & Displaying Data https://lnkd.in/dg2ujnKC
🔹 Part 2 — GroupBy & Indexing https://lnkd.in/d3SaX-vu
🔹 Part 3 — Data Cleaning & Merging/Joining https://lnkd.in/dZaabdui
🔹 Part 4 — Data Visualization with Matplotlib & Seaborn https://lnkd.in/dxyhPhPv

Each article walks through the core properties and methods with clean examples, comparison tables, and the "why" behind each tool — not just the syntax. Whether you're just starting out or brushing up, I hope this helps 🙌 Feedback and thoughts are always welcome.

#Pandas #Python #DataScience #DataAnalysis #MachineLearning
𝗜𝗳 𝘆𝗼𝘂 𝘄𝗼𝗿𝗸 𝘄𝗶𝘁𝗵 𝗱𝗮𝘁𝗮, 𝘆𝗼𝘂 𝗸𝗻𝗼𝘄 𝘁𝗵𝗶𝘀 — 𝗽𝗵𝗼𝗻𝗲 𝗻𝘂𝗺𝗯𝗲𝗿𝘀 𝗮𝗿𝗲 𝗻𝗲𝘃𝗲𝗿 𝗰𝗹𝗲𝗮𝗻

Sometimes they come with spaces, sometimes with country codes, sometimes with special characters like "+", "-", or even brackets. And sometimes they even come with .00 at the end because of how the data is stored or exported. If we don't clean them properly, it becomes very difficult to use that data for analysis or communication.

In Pandas, cleaning phone number columns is actually simple once you understand the approach.

First, I usually convert the column to string format. This avoids unexpected issues, especially when numbers are stored as integers, floats, or mixed types.

After that, the main step is removing unwanted characters. Using regular expressions, we can keep only digits and remove everything else — including .00, symbols, and spaces. For example:

df['phone'] = df['phone'].astype(str).str.replace(r'[^0-9]', '', regex=True)

This one line can handle most messy formats.

One important step I always follow is standardizing the final output. No matter how the number comes, I take only the last 10 digits. This helps remove country codes like +91 and keeps the data consistent. Something like:

df['phone'] = df['phone'].str[-10:]

Next comes validation. Not every cleaned number is valid. Some may be too short or too long. So I often filter numbers based on length to make sure we only keep meaningful data. If needed, I also format the numbers again in a clean and readable way.

What I learned from this is simple — data cleaning is not about writing complex code, it's about thinking clearly about the problem. Once the logic is clear, Pandas makes the job very easy. Small steps like this make a big difference when working with large datasets.

#DataScience #DataAnalytics #Python #Pandas #DataCleaning
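Putting those steps together, here is a minimal sketch. It assumes the column is called phone and that valid local numbers have exactly 10 digits. One detail worth hedging: if numbers were exported as floats, the literal .00 is best stripped before the digit filter, because the regex only removes the dot and the trailing zeros would otherwise shift the last 10 digits.

```python
import pandas as pd

# Illustrative messy input (hypothetical values)
df = pd.DataFrame({"phone": ["+91 98765-43210", "(987) 654 3210", "9876543210.00", None]})

cleaned = (
    df["phone"]
    .astype(str)
    .str.replace(r"\.0+$", "", regex=True)   # drop a trailing .0 / .00 from float exports first
    .str.replace(r"[^0-9]", "", regex=True)  # keep digits only
    .str[-10:]                               # standardize: last 10 digits (drops country codes like 91)
)

# Validation: keep only values with exactly 10 digits, otherwise NaN
df["phone_clean"] = cleaned.where(cleaned.str.len() == 10)
print(df)
```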
Day 19 — Merging & Joining Data in Pandas

As I continue deepening my understanding of pandas, today's focus was on something very practical: combining datasets. In real-world scenarios, data rarely comes in a single clean table. You often have multiple datasets that need to be brought together before any meaningful analysis can happen. That's where pandas functions like merge(), join(), and concat() come in.

Here's a quick breakdown of what I learned:

🔹 merge()
This is similar to SQL joins. It allows you to combine datasets based on a common column. You can perform inner, left, right, and outer joins.
Example: pd.merge(df1, df2, on="id", how="inner")

🔹 join()
Used mainly for combining DataFrames based on their index. It's a bit more concise when working with indexed data.

🔹 concat()
Used to stack DataFrames either vertically (adding more rows) or horizontally (adding more columns).
Example: pd.concat([df1, df2], axis=0)

💡 Key Insight: Understanding when to use each method is crucial.
Use merge() when working with relational data
Use concat() when stacking data
Use join() for index-based alignment

This concept is especially important in data cleaning and preprocessing, where datasets often come from different sources. Each day, pandas feels less like a tool and more like a language for working with data.

#M4aceLearningChallenge #Day19 #DataScience #MachineLearning #Python #Pandas #DataAnalysis
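As a quick, self-contained illustration of all three functions, here is a small sketch; the tables and column names are invented for the example:

```python
import pandas as pd

customers = pd.DataFrame({"id": [1, 2, 3], "name": ["Ada", "Bob", "Cy"]})
orders = pd.DataFrame({"id": [1, 2, 4], "amount": [250, 120, 90]})

# merge(): SQL-style join on a common column
inner = pd.merge(customers, orders, on="id", how="inner")  # ids 1 and 2 only
left = pd.merge(customers, orders, on="id", how="left")    # all customers, NaN where no order

# concat(): stack more rows (axis=0) or more columns (axis=1)
more_customers = pd.DataFrame({"id": [5], "name": ["Di"]})
all_customers = pd.concat([customers, more_customers], axis=0, ignore_index=True)

# join(): index-based alignment
scores = pd.DataFrame({"score": [0.9, 0.7]}, index=[1, 2])
joined = customers.set_index("id").join(scores, how="left")

print(inner, left, all_customers, joined, sep="\n\n")
```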
📊 What I Learned Today — Percentiles & Quantiles (Pandas)

Today I fixed a confusion I had for a long time:
👉 Percentiles are NOT based on total sum — they're based on position in sorted data.

Key takeaways:
🔹 Quantile → value below which a % of data lies
🔹 Position formula: (n − 1) × q
🔹 Decimal position → interpolation
🔹 Result may not exist in the dataset (and that's okay)

💡 Example:
Data → [10, 20, 30, 40]
75th percentile → position = (4 − 1) × 0.75 = 2.25
So pandas doesn't pick a value directly — it interpolates between 30 and 40 → 32.5

💡 Big insight: Even if the 75th percentile isn't directly present, pandas computes it using the values in between — not by summing anything.

This cleared a major confusion:
❌ Percentage = sum-based
✅ Percentile = position-based

Small concept, but a big clarity boost. Consistency > Perfection 🚀

#DataAnalytics #Pandas #Python #LearningJourney #InterviewPrep
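The worked example can be checked directly in pandas. By default, Series.quantile() uses linear interpolation, which matches the (n − 1) × q position rule above; other interpolation options pick an existing value instead:

```python
import pandas as pd

s = pd.Series([10, 20, 30, 40])

# Default linear interpolation: position (4 - 1) * 0.75 = 2.25 -> between 30 and 40
print(s.quantile(0.75))                            # 32.5

# A different interpolation rule returns an actual data point
print(s.quantile(0.75, interpolation="nearest"))   # 30
```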
Correlation tells you what moved together. Causal inference tells you what actually caused it.

After this, you'll be able to estimate the true causal effect of any intervention: a promo, a product change, a policy shift - from observational data. No A/B test required.

The technique: Propensity Score Matching (PSM) in Python.

𝗦𝘁𝗲𝗽 𝟭: 𝗜𝗻𝘀𝘁𝗮𝗹𝗹
```bash
pip install causalinference
```

𝗦𝘁𝗲𝗽 𝟮: 𝗣𝗿𝗲𝗽𝗮𝗿𝗲 𝘆𝗼𝘂𝗿 𝗱𝗮𝘁𝗮
You need three columns: outcome Y, binary treatment D, and confounders X.
```python
import pandas as pd

df = pd.read_csv("observational_data.csv")
Y = df["revenue"].values
D = df["received_promo"].values  # 1 = treated, 0 = control
X = df[["age", "tenure", "spend_last_90d"]].values
```

𝗦𝘁𝗲𝗽 𝟯: 𝗕𝘂𝗶𝗹𝗱 𝗮𝗻𝗱 𝗿𝘂𝗻 𝘁𝗵𝗲 𝗺𝗼𝗱𝗲𝗹
```python
from causalinference import CausalModel

model = CausalModel(Y, D, X)
model.est_via_matching()
print(model.estimates)
```

𝗦𝘁𝗲𝗽 𝟰: 𝗥𝗲𝗮𝗱 𝘆𝗼𝘂𝗿 𝗿𝗲𝘀𝘂𝗹𝘁𝘀
The key output is ATE (Average Treatment Effect) - the estimated causal lift, adjusted for selection bias.

📌 Always run `model.summary_stats` first. If treated and control groups don't overlap in propensity score distribution, your estimate is invalid — check covariate balance before trusting any number.

The result: instead of "promo users had 23% higher revenue," you can say "the promo caused a £42 average revenue lift, controlling for age and prior spend." That's a claim your finance team can't easily dismiss.

Have you applied causal inference in a real project? What's the hardest part to justify to non-technical stakeholders?

#DataAnalytics #Data #Python #DataScience #Analytics #Statistics #CausalInference #BusinessIntelligence
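For the overlap and balance check in Step 4, one possible sketch with the same causalinference package is below. It assumes Y, D, X are already built as in Step 2; est_propensity_s() and trim_s() estimate propensity scores and drop units with extreme scores before matching, which is one common way to improve overlap. Treat this as an illustrative workflow under those assumptions, not the post author's exact method:

```python
from causalinference import CausalModel

model = CausalModel(Y, D, X)

# Covariate balance between treated and control (look at the normalized differences)
print(model.summary_stats)

# Estimate propensity scores, trim units with extreme scores, then match
model.est_propensity_s()
model.trim_s()
model.est_via_matching()
print(model.estimates)
```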
❌ Still using loops in Pandas? ✅ Master these 30 functions → 10X faster analysis.

📥 LOADING: read_csv() | read_excel()
🔍 EXPLORATION: head() | info() | describe() | shape
🧹 CLEANING: dropna() | fillna() | drop_duplicates()
✨ TRANSFORM: rename() | astype() | apply()
📊 ANALYSIS: groupby() | pivot_table() | value_counts() | merge()
🎯 SELECTION: loc[] | iloc[] | query()

💡 QUICK EXAMPLE:
```python
df = pd.read_csv('data.csv')
df.dropna(inplace=True)
df.groupby('Category')['Sales'].sum()
```

🔥 MY FAVORITE: `groupby()` - Replaced 50 lines of loops with 1 line!

❓ What's YOUR go-to function?
→ groupby()?
→ apply()?
→ loc/iloc[]?
Comment 👇

📥 **GET FREE CHEAT SHEET**
Comment "PANDAS" or DM me

---

🔁 REPOST if Pandas saved you hours!
👍 Like for more Python tips
💬 Share your favorite function

#Pandas #Python #DataAnalytics #Learning #CareerGrowth
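To make the loop-versus-groupby point concrete, a tiny sketch with invented data showing the same aggregation written both ways:

```python
import pandas as pd

df = pd.DataFrame({"Category": ["A", "B", "A", "B"], "Sales": [10, 20, 30, 40]})

# Loop version: manual accumulation per category
totals = {}
for _, row in df.iterrows():
    totals[row["Category"]] = totals.get(row["Category"], 0) + row["Sales"]

# groupby version: one line, same result, far faster on large frames
print(df.groupby("Category")["Sales"].sum())
print(totals)
```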
Day 4 — Python for Analytics

When I started, I wasted weeks learning things I never used. Here are the 5 libraries that actually move the needle:

🐼 1. Pandas — The backbone of data analysis
import pandas as pd

df = pd.read_csv("sales_data.csv")
top_product = (df.groupby("product")["revenue"]
               .sum()
               .sort_values(ascending=False)
               .head(3))
print(top_product)
If you learn nothing else — learn Pandas.

📊 2. Matplotlib / Seaborn — Turn numbers into stories
Quick, beautiful charts with minimal code
import seaborn as sns
import matplotlib.pyplot as plt

sns.lineplot(data=df, x="date", y="revenue")
plt.title("Monthly Revenue Trend")
plt.show()

🔢 3. NumPy — The engine under the hood
Fast calculations on large datasets
import numpy as np

aov = np.mean(df["order_value"])
print(f"Average Order Value: ${aov:.2f}")

🤖 4. LangChain — Bridge between Python and LLMs
Build GenAI workflows without starting from scratch
from langchain_community.llms import OpenAI

llm = OpenAI()
response = llm.invoke("Summarize this sales report: ...")
print(response)

📓 5. Jupyter Notebooks — Code + Story in one place
Not just a coding tool — a communication format.
Code → Output → Explanation → Chart
All in one shareable document. Perfect for stakeholder walkthroughs.

My honest learning path:
Week 1 → Master Pandas
Week 2 → Add Seaborn + Matplotlib
Week 3 → Learn NumPy basics
Week 4 → Explore LangChain

Start with one. Build something real. Then add the next.

#Python #Analytics #DataScience #Pandas #GenAI #30DayChallenge
🚀 🔥 𝑺𝒕𝒐𝒑 𝑺𝒕𝒓𝒖𝒈𝒈𝒍𝒊𝒏𝒈 𝒘𝒊𝒕𝒉 𝑫𝒊𝒓𝒕𝒚 𝑫𝒂𝒕𝒂 — 𝑴𝒂𝒔𝒕𝒆𝒓 𝑷𝒚𝒕𝒉𝒐𝒏 𝑫𝒂𝒕𝒂 𝑪𝒍𝒆𝒂𝒏𝒊𝒏𝒈 𝒊𝒏 𝑴𝒊𝒏𝒖𝒕𝒆𝒔 (2026)

Most people learn Python… but fail at real data work ❌ because they ignore ONE skill 👇
👉 Data Cleaning ⚡

Here's your cheat sheet to become a PRO:

🧹 Fix Missing Data
df.isnull().sum()
df.ffill()
df.dropna()

🧹 Remove Duplicates
df.drop_duplicates()

🧹 Understand Your Data
df.head()
df.info()
df.describe()

🧹 Clean Columns
df.rename(columns={'old': 'new'})
df.astype({'col': 'int'})

🧹 Filter Smartly
df.query("salary > 50000")
df[df['role'].isin(['DE', 'DS'])]

🧹 Merge Like a Pro
pd.merge(df1, df2, on='id')
df.groupby('team').agg({'salary': 'mean'})

🎯 Reality Check (2026):
👉 80% of the time = cleaning data
👉 20% of the time = analysis

If your data is messy → your results are wrong ❌

💬 Be honest — do you enjoy data cleaning or hate it? 😅👇

#Python #Pandas #DataCleaning #DataEngineering #DataScience #MachineLearning #Analytics #LearnPython #TechCareers #Coding #BigData
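Chaining several of these cheat-sheet steps together, a minimal sketch on a hypothetical employees table (the column names and values are made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "old": ["DE", "DS", "DE", None, "DE"],
    "salary": [52000, 61000, 52000, 48000, None],
})

cleaned = (
    df.rename(columns={"old": "role"})   # clean column names
      .drop_duplicates()                 # remove exact duplicate rows
      .dropna(subset=["salary"])         # drop rows with no salary
      .query("salary > 50000")           # filter smartly
)

print(cleaned.groupby("role")["salary"].mean())  # aggregate per team/role
```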
𝗪𝗼𝗿𝗸𝗶𝗻𝗴 𝘄𝗶𝘁𝗵 𝗹𝗮𝗿𝗴𝗲 𝗱𝗮𝘁𝗮𝘀𝗲𝘁𝘀 𝗶𝗻 𝗣𝗮𝗻𝗱𝗮𝘀 𝘁𝗮𝘂𝗴𝗵𝘁 𝗺𝗲 𝗼𝗻𝗲 𝘀𝗶𝗺𝗽𝗹𝗲 𝗹𝗲𝘀𝘀𝗼𝗻 — 𝗺𝗲𝗺𝗼𝗿𝘆 𝗺𝗮𝘁𝘁𝗲𝗿𝘀 𝗺𝗼𝗿𝗲 𝘁𝗵𝗮𝗻 𝘄𝗲 𝘁𝗵𝗶𝗻𝗸.

In the beginning, I used to load dataframes without even thinking about how much memory they consume. Everything looked fine… until one day my script slowed down, and sometimes even crashed. That's when I realized it's not always about the data size, it's about how efficiently we handle it.

One simple habit that changed things for me is checking the memory usage of a dataframe. In Pandas, you can do this very easily:

df.info()

This gives a quick summary of your dataframe, including memory usage. But if you want a more detailed view, you can use:

df.memory_usage(deep=True)

This shows how much memory each column is using. Adding deep=True helps you get accurate results, especially for object-type columns like strings.

What I found interesting is that sometimes a few columns consume most of the memory. Especially object columns — they silently take up a lot of space.

Once you know where the memory is going, you can start optimizing:
* Convert object columns to category if they have repeated values
* Use smaller data types like int32 instead of int64
* Drop unnecessary columns early

These small steps make a big difference, especially when working with large datasets.

For me, this was a small learning, but very powerful. Now, before doing any heavy operations, I just take a few seconds to check memory usage and it saves me minutes (sometimes hours) later.

If you're working with Pandas, give this a try. It might look small, but it can completely change how your code performs.

#BigData #Python #Pandas #DataAnalytics
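A minimal sketch of the check-then-optimize habit described above, using invented columns; the actual savings depend entirely on your data:

```python
import pandas as pd

df = pd.DataFrame({
    "city": ["London", "Paris", "London", "Paris"] * 25_000,  # repeated strings
    "count": range(100_000),
})

print(df.memory_usage(deep=True))           # per-column usage, accurate for object columns

df["city"] = df["city"].astype("category")  # repeated values -> category
df["count"] = df["count"].astype("int32")   # smaller integer type if the value range allows it

print(df.memory_usage(deep=True))           # usually much smaller after conversion
```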
80% of analysis time is data cleaning. Here's the playbook.

Nobody posts about this part. It's not glamorous. But it's where the real work happens.

This free notebook covers:
→ Identifying missing values (isnull, info, patterns)
→ Visualizing missingness — is it random or systematic?
→ Imputation strategies: mean, median, mode, forward fill
→ When to drop vs when to impute (decision framework)
→ Finding duplicates (exact and fuzzy)
→ Deduplication: keep first, keep last, custom logic
→ Validating your cleaned dataset

Real messy data. Not textbook-clean CSVs. The kind of data you'll actually encounter at work.

Free: https://lnkd.in/gBG_CBqH

Day 2/7. Yesterday was SQL. Tomorrow: Advanced Pandas.

#DataCleaning #Python #Pandas #DataAnalyst #DataScience #DataQuality #FreeResources #DataAnalytics