🚀 Day 3 – #Daily_DataScience_Code
Taking the next step in our data science journey 👩‍💻 Today, we move beyond CSV files and explore how to read Excel files with multiple sheets 📊

💻 What we did today:
- Loaded an Excel file directly from the web 🌐
- Read all sheets at once using pandas
- Retrieved the available sheet names
- Accessed a specific sheet by its name (not index)
- Displayed the first rows using head()

🎯 Key insight: When working with Excel files, using sheet names makes your code more robust and readable, especially when dealing with multiple datasets.

Let's keep building step by step 🚀
#DataScience #MachineLearning #Python #AI #DataHandling #LearnByDoing #DataScienceWithDrGehad #DailyDataScienceCode
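A minimal sketch of the workflow described above. The post loaded its workbook from a URL; since that file isn't given, this sketch first writes a small two-sheet workbook locally (the file name and sheet names are made up) and then reads it back the same way:

```python
import pandas as pd

# Create a small two-sheet workbook so the example is self-contained
# (the original post loaded its file from a URL instead).
path = "demo.xlsx"
with pd.ExcelWriter(path) as writer:
    pd.DataFrame({"a": [1, 2]}).to_excel(writer, sheet_name="sales", index=False)
    pd.DataFrame({"b": [3, 4]}).to_excel(writer, sheet_name="costs", index=False)

# sheet_name=None reads every sheet at once into a dict keyed by sheet name
sheets = pd.read_excel(path, sheet_name=None)
print(list(sheets))            # the available sheet names
print(sheets["sales"].head())  # access a sheet by name, not index
```

Keying by name rather than position is what keeps the code working if someone reorders the sheets in the workbook.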
Gehad AlKady’s Post
More Relevant Posts
🚀 Recently I’ve been diving deeper into the world of Data Science & Machine Learning! I’ve explored some powerful Python libraries that are essential for data analysis and visualization:
🔹 NumPy – for numerical computing
🔹 Pandas – for data manipulation & analysis
🔹 Matplotlib – for data visualization
🔹 Seaborn – for advanced and attractive visualizations

Step by step, I’m building a strong foundation in ML and continuously improving my problem-solving skills.
📌 Check out my learning progress and resources here: https://lnkd.in/gUHRnfwP
#MachineLearning #DataScience #Python #NumPy #Pandas #Matplotlib #Seaborn #LearningJourney #CSE
Pandas vs NumPy — most beginners use Pandas for everything. But that's a mistake.

Here's the truth:
→ Pandas = tabular data, cleaning, filtering, groupby operations
→ NumPy = numerical arrays, matrix math, high-speed computations
→ Pandas is actually built ON TOP of NumPy

Knowing when to use which saves you hours of slow, inefficient code.
If you're doing data wrangling and EDA → use Pandas.
If you're doing math-heavy operations or feeding data into ML models → use NumPy.
The best data scientists use both together, fluently.

Which one did you learn first? Drop it in the comments 👇
#DataScience #Python #Pandas #NumPy #DataAnalytics #MachineLearning #PythonProgramming #DataEngineering
Skillcure Academy Akhilendra Chouhan Radhika Yadav Sanjana Singh
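The division of labor above can be shown in a few lines. This is an illustrative sketch with made-up data: Pandas for a labeled groupby, NumPy for matrix math, and a peek at the ndarray sitting underneath every Pandas column:

```python
import numpy as np
import pandas as pd

# Pandas: tabular data, labels, groupby-style wrangling
df = pd.DataFrame({"team": ["A", "A", "B"], "score": [10, 20, 30]})
means = df.groupby("team")["score"].mean()
print(means)  # A -> 15.0, B -> 30.0

# NumPy: raw numerical arrays and fast matrix math
x = np.array([[1.0, 2.0], [3.0, 4.0]])
print(x @ x)  # matrix product

# Pandas is built on top of NumPy: each column is backed by an ndarray
print(type(df["score"].to_numpy()))  # <class 'numpy.ndarray'>
```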
Data Science tech stack 2020:
- pandas
- sklearn
- matplotlib

Data Science tech stack 2026:
- pandas (legacy support)
- polars (the cool kid)
- sklearn
- xgboost
- lightgbm
- shap
- langchain
- llamaindex
- pydantic-ai
- weave
- mlflow
- dvc
- optuna
- great expectations
- prefect
- fastapi
- streamlit
- gradio

You don't need all of them. You need the 3-4 that solve YOUR problem.
Tag someone still trying to learn every tool.
Overwhelmed? Our roadmaps tell you which 3-4 tools per role, in the order to learn them: https://lnkd.in/ga9TFJh5
#DataScience #Python #TechStack #MachineLearning #DataEngineering #MLOps #DataHumor #Memes
Day 82 – Relational Plots & Time Series Analysis 🚀
Continuing my journey into data visualization, today I focused on understanding relationships in data and extracting insights from time-based patterns using Python.

Here’s what I explored:
📊 Scatter plot with marginal histograms – visualizing relationships alongside distributions gave much richer context than a standalone scatter plot.
📈 Line plot with Seaborn – improved how I represent trends with cleaner, more intuitive visualizations.
⏳ Time series plot with Seaborn & Pandas – worked with time-indexed data to uncover patterns and trends over time, a key skill in real-world analytics.
📉 Time series with rolling average – smoothing noisy data using rolling averages helped reveal the underlying trend more clearly.

💡 Key takeaway: Effective visualization isn’t just about charts – it’s about telling a clear story with data.
#DataScience #Python #Seaborn #Pandas #DataVisualization #TimeSeries #Analytics
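The rolling-average step can be sketched as follows. The series here is synthetic (a made-up linear trend plus noise), and the seaborn calls are shown only as comments since the interesting part is the smoothing itself:

```python
import numpy as np
import pandas as pd

# Hypothetical noisy daily series: a linear trend plus Gaussian noise
rng = np.random.default_rng(0)
idx = pd.date_range("2024-01-01", periods=60, freq="D")
ts = pd.Series(np.linspace(0, 10, 60) + rng.normal(0, 1, 60), index=idx)

# A 7-day rolling mean smooths the noise and reveals the underlying trend
smooth = ts.rolling(window=7).mean()
print(smooth.dropna().head())

# With seaborn, the raw and smoothed series would typically be overlaid:
#   sns.lineplot(x=ts.index, y=ts, label="raw")
#   sns.lineplot(x=smooth.index, y=smooth, label="7-day mean")
```

Note the first `window - 1` points of the rolling mean are NaN, since a full window isn't available yet.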
📊 Exploring Data with the Iris Dataset
Recently, I worked on a simple yet insightful data visualization task using the famous Iris dataset. This exercise helped me strengthen my understanding of data analysis fundamentals.

🔹 Loaded and explored the dataset using pandas
🔹 Analyzed its structure with shape, columns, and summary statistics
🔹 Created visualizations using matplotlib & seaborn:
✔️ Scatter plot to study relationships between features
✔️ Histogram to understand distributions
✔️ Box plot to identify outliers

This task enhanced my skills in data exploration and visualization, which are essential for any data science workflow.
#DataScience #Python #DataVisualization #Pandas #Seaborn #Matplotlib #MachineLearning #LearningJourney
DevelopersHub Corporation©
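A sketch of the same exploration, assuming the dataset is loaded via scikit-learn (the original may have used a CSV instead). Plots are written to files so the script runs headless; the output file names are arbitrary:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris

# Load Iris as a pandas DataFrame: 150 rows, 4 features + target
iris = load_iris(as_frame=True).frame
print(iris.shape)       # structure
print(iris.describe())  # summary statistics

# Scatter plot: relationship between two features
iris.plot.scatter(x="sepal length (cm)", y="petal length (cm)")
plt.savefig("scatter.png")

# Histogram: distribution of one feature
iris["sepal width (cm)"].plot.hist(bins=20)
plt.savefig("hist.png")

# Box plot: spot outliers per feature
iris.drop(columns="target").plot.box(rot=45)
plt.savefig("box.png")
```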
🚀 Day 6: Getting Started with NumPy
Continuing my journey to become an AI Developer, today I explored one of the most important libraries for data science and machine learning 👇

📘 Day 6: NumPy Basics
Here’s what I covered today:

🔢 NumPy Arrays
✅ Created 1D arrays from Python lists
✅ Understood multidimensional (2D) arrays and their structure

📐 Array Operations
✅ Learned array indexing and slicing techniques
✅ Used .shape to understand dimensions

⚙️ Array Manipulation
✅ Reshaped arrays using .reshape()
✅ Generated sequences using np.arange()

🧪 Built-in Functions
✅ Used np.ones() and np.zeros()
✅ Explored random functions like np.random.rand() and np.random.randn()

💡 Key learning: NumPy makes data handling faster and more efficient, and it forms the foundation for machine learning and deep learning.
🎯 Next step: Practice more problems on NumPy and start exploring data manipulation in real-world scenarios.
Consistency is the key 🚀
#Day6 #Python #NumPy #AIDeveloper #DataScience #CodingJourney #LearningInPublic
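The checklist above fits in one short script. The array values are arbitrary, chosen only to make each operation visible:

```python
import numpy as np

# 1D array from a Python list, with indexing and slicing
a = np.array([1, 2, 3, 4, 5, 6])
print(a[0])      # indexing -> 1
print(a[1:4])    # slicing  -> [2 3 4]

# Reshape into a 2D array and inspect its dimensions with .shape
m = a.reshape(2, 3)
print(m.shape)   # (2, 3)

# Sequences and constant arrays
print(np.arange(0, 10, 2))  # [0 2 4 6 8]
print(np.ones((2, 2)))      # 2x2 array of ones
print(np.zeros(3))          # 1D array of zeros

# Random numbers: uniform on [0, 1) and standard normal
print(np.random.rand(2))
print(np.random.randn(2))
```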
Starting to understand why Pandas is the first tool every data scientist learns.

● I built a simple Student Marks Analyzer — nothing fancy, but it clicked something for me. With just a few lines I could:
→ Build a table from scratch
→ Explore rows, columns, and specific values
→ Get the average, highest, and lowest marks instantly

● Average: 84.0 | Highest: 95 | Lowest: 70

The interesting part? I didn't write a single formula. No Excel. No manual counting. Just Python doing the heavy lifting in milliseconds.
This is exactly what data analysis feels like at the start — small project, but you can already see the power behind it.
Still a lot to learn. But this one felt good. 🐼

● Code is on my GitHub — link in the first comment.
#Python #Pandas #DataScience #MachineLearning #AI #100DaysOfCode #PakistanTech
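A sketch of what such an analyzer might look like. The student names and marks are hypothetical, picked so the summary matches the numbers quoted in the post:

```python
import pandas as pd

# Hypothetical marks chosen to reproduce the post's summary figures
df = pd.DataFrame({
    "student": ["Ali", "Sara", "Omar", "Hina", "Zain"],
    "marks":   [85,    92,     70,     95,     78],
})

print(df)  # the whole table, rows and columns
print("Average:", df["marks"].mean())  # 84.0
print("Highest:", df["marks"].max())   # 95
print("Lowest:",  df["marks"].min())   # 70
```

No formulas, no manual counting: `mean`, `max`, and `min` each do in one call what a spreadsheet needs a formula per cell for.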
🚀 Day 55 of My 90-Day Data Science Challenge
Today I worked on optimizers in machine learning (gradient descent).

📊 Business question: How can we efficiently minimize the loss function to improve model performance? Optimizers update model parameters to reduce the error step by step.

What I covered in Python:
• Learned gradient descent
• Understood the learning rate
• Explored batch gradient descent
• Learned stochastic gradient descent (SGD)
• Compared optimization techniques

📈 Key understanding: Optimizers control how quickly and effectively a model learns.
💡 Insight: A proper learning rate is crucial — too high may overshoot the minimum, too low slows learning.
🎯 Takeaway: Efficient optimization leads to faster and better model training.

Day 55 complete ✅ Optimizing model learning 🚀
#DataScience #MachineLearning #DeepLearning #GradientDescent #Optimization #Python #LearningInPublic #90DaysChallenge
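The learning-rate insight can be seen on a toy loss. This is a minimal sketch, not any particular ML library's optimizer: we minimize f(w) = (w - 3)², whose gradient is 2(w - 3) and whose minimum is at w = 3:

```python
# Plain gradient descent on f(w) = (w - 3)^2, minimum at w = 3.
def gradient_descent(lr, steps=100, w=0.0):
    for _ in range(steps):
        grad = 2 * (w - 3)  # derivative of (w - 3)^2
        w -= lr * grad      # step against the gradient
    return w

print(gradient_descent(lr=0.1))    # reasonable rate: converges close to 3
print(gradient_descent(lr=0.001))  # too small: still far from 3 after 100 steps
# A rate above 1.0 on this function would overshoot and diverge.
```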
📅 Day 3 – AI/ML Journey (Pandas Basics)
Today I started working with Pandas, one of the most important Python libraries for data analysis.

🔹 What I learned:
• Reading datasets using read_csv() and read_excel()
• Understanding the difference between CSV and Excel formats
• Viewing data using .head()
• Handling real-world messy data (missing values, wrong headers)
• Debugging common errors while loading datasets

⚠️ Biggest lesson today: Data is never clean in real projects — most of the work is in understanding and preparing it.
Still learning and improving step by step 🚀
#Day3 #AI #MachineLearning #Pandas #Python #DataScience #LearningInPublic #DeveloperJourney
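A sketch of the "messy data" point, with a simulated CSV (the banner line, column names, and junk tokens are all made up) showing how `read_csv` can skip a wrong header and normalize missing-value markers:

```python
import io
import pandas as pd

# Simulated messy CSV: a junk banner row above the real header,
# plus missing values encoded as "NA" and "?" (data is made up).
raw = """exported by tool v1.2
name,age,score
Ali,25,88
Sara,NA,91
Omar,30,?
"""

# skiprows drops the bogus first line so the real header is found;
# na_values maps the junk tokens to proper NaN values
df = pd.read_csv(io.StringIO(raw), skiprows=1, na_values=["NA", "?"])
print(df.head())
print(df.isna().sum())  # count of missing values per column
```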
Day 19 of my Data Science journey, and I finally stopped Googling the same sklearn functions every single day.

Here's the truth nobody tells you when you start: you don't need 10 different libraries to build a complete ML pipeline. You need ONE.

scikit-learn does it ALL:
-> Preprocessing your messy data
-> Splitting train/test sets
-> Training 20+ algorithms (classification, regression, clustering)
-> Evaluating your model with the right metrics
-> Tuning hyperparameters without data leakage
-> Packaging the whole thing into one Pipeline object

And the best part? Every component follows the same small method pattern — transformers use .fit() and .transform(), estimators use .fit() and .predict(). Learn that, and everything else is just syntax.

I built this straight from the official scikit-learn docs, so every function, every method, every example is production-accurate.
Save it 👇
#100DaysOfCode #DataScience #MachineLearning #ScikitLearn #Python #MLEngineer #DataScienceJourney #LearningInPublic #Day19
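A compact illustration of that end-to-end claim, using the built-in Iris data (the dataset and model choice here are just for demonstration): split, preprocess, train, and evaluate with a single `Pipeline` object, where the scaler is fit on training data only, which is what prevents leakage:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Load data and split train/test sets
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# One Pipeline: preprocessing + model, chained by the fit/transform/predict pattern
pipe = Pipeline([
    ("scale", StandardScaler()),                  # transformer: fit / transform
    ("clf", LogisticRegression(max_iter=1000)),   # estimator:   fit / predict
])

pipe.fit(X_train, y_train)        # scaler statistics come from training data only
acc = pipe.score(X_test, y_test)  # accuracy on held-out data
print(acc)
```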