Data Science Practical – Measures of Central Tendency

In this practical, I explored key statistical concepts including the mean, median, and mode using Python and Jupyter Notebook. I applied NumPy arrays to perform the calculations efficiently and visualized the results to better understand the data distribution.

This hands-on exercise helped me:
• Reinforce statistical theory with practical coding
• Improve data manipulation and visualization skills
• Gain experience in presenting data insights clearly

Guided by: Ashish Sawant

Check out the video walkthrough for a step-by-step demonstration of the notebook!

#DataScience #Python #JupyterNotebook #Statistics #LearningByDoing #CollegeProject #HandsOn
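The notebook itself isn't shown here, but the three measures can be sketched in a few lines. The sample data is illustrative; note that NumPy has no built-in mode, so the standard library's `statistics.mode` fills that gap:

```python
import numpy as np
from statistics import mode

data = np.array([2, 3, 3, 5, 7, 9, 3, 5])

mean = np.mean(data)               # arithmetic average: sum / count
median = np.median(data)           # middle value of the sorted data
most_common = mode(data.tolist())  # most frequent value (NumPy has no mode)

print(mean, median, most_common)   # 4.625 4.0 3
```

With an even number of elements, `np.median` averages the two middle values, which is why the median here is 4.0 rather than an observed value.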
🎓 Data Science and Statistics Lab | Creation of Arrays in Python (NumPy)

Sharing my screen recording from today's lab session! 💻 In this practical, I learned how to create and manipulate arrays using the NumPy library — one of the core tools for scientific computing in Python.

🔍 Key topics covered:
• Creating 1D, 2D, and 3D arrays
• Using functions like array(), arange(), zeros(), ones(), and linspace()
• Understanding array dimensions and shapes
• Performing basic operations on arrays

Arrays form the foundation for data manipulation and numerical analysis in Data Science. Excited to keep learning and building on these concepts! 🚀

GitHub Link: https://lnkd.in/eM9vBrBf
Guidance by: Ashish Sawant

#DataScience #Statistics #NumPy #Python #Array #DataScienceLab #MachineLearning #LearningByDoing
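The creation functions listed above can be sketched in one short snippet (the shapes and values are just examples, not the lab's actual exercises):

```python
import numpy as np

a = np.array([1, 2, 3])    # 1D array from a Python list
r = np.arange(0, 10, 2)    # evenly stepped values: [0 2 4 6 8]
z = np.zeros((2, 3))       # 2x3 array filled with zeros
o = np.ones(3)             # 1D array of three ones
l = np.linspace(0, 1, 5)   # 5 evenly spaced points from 0 to 1 inclusive

print(a.ndim, z.shape)     # dimensions and shape attributes
print(a + 10)              # basic elementwise operation: [11 12 13]
```

The key distinction worth remembering: `arange` takes a step size, while `linspace` takes a count of points and includes the endpoint.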
Entering week four of the Digital Skola Data Science Bootcamp with more advanced Python concepts. This week's focus is on looping techniques (while, for, and nested loops), conditional statements and nested conditions, functional programming and pure functions, creating custom functions with proper scoping, string manipulation operations, and NumPy for numerical computing. NumPy has been the highlight: learning to perform efficient mathematical operations on multidimensional arrays through reshape, flatten, transpose, advanced indexing, and broadcasting. These are essential tools for effective data preparation and analysis, and understanding array manipulation fundamentally changes how I approach data processing tasks. Detailed progress can be found in the attached slides. #DigitalSkola #LearningProgressReview #DataScience #Python #NumPy #DataAnalytics #BootcampJourney
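The array operations named in the post (reshape, flatten, transpose, advanced indexing, broadcasting) can be sketched with a toy 2x3 array; the data here is illustrative:

```python
import numpy as np

m = np.arange(6).reshape(2, 3)   # reshape 1D -> 2x3: [[0 1 2], [3 4 5]]
flat = m.flatten()               # back to 1D: [0 1 2 3 4 5]
t = m.T                          # transpose: shape becomes (3, 2)

# Broadcasting: the 1D row is applied to every row of m automatically
row = np.array([10, 20, 30])
print(m + row)                   # [[10 21 32], [13 24 35]]

# Advanced (fancy) indexing: index arrays pick arbitrary elements
print(m[[0, 1], [2, 0]])         # elements (0,2) and (1,0) -> [2 3]
```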
#Week3 | Mastering NumPy for Data Science

This week, I dove deep into the world of NumPy, the fundamental package for scientific computing in Python. It's amazing how powerful and efficient it is for numerical operations!

This week was all about:
- Practicing creating and manipulating multi-dimensional arrays.
- Exploring various array creation methods like `np.zeros`, `np.ones`, `np.linspace`, `np.arange`, etc.
- Mastering indexing and slicing techniques to access and modify array elements.
- Applying boolean indexing and broadcasting to perform complex operations concisely.

Tech Stack / Tools Used: Python, NumPy, Jupyter Notebook

Key Insights / Learnings: Broadcasting is a game-changer! It allows for writing vectorized and efficient code, avoiding explicit loops. Understanding array attributes and data types is crucial for memory optimization.

This Week's Plan: Next up, I'll be diving into Matplotlib to visualize all the data I'm now able to manipulate with NumPy.

Project / Repo Link: https://lnkd.in/gP4esKV9

#AIJourney #MachineLearning #Python #DataScience #NumPy #LearningInPublic #12WeeksAIReset #ProgressPost
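Boolean indexing and broadcasting, the two highlights above, fit in a few lines; the temperature data is made up for illustration:

```python
import numpy as np

temps = np.array([18.5, 25.1, 30.2, 15.0, 28.7])

# Boolean indexing: a comparison produces a mask of True/False,
# and indexing with that mask keeps only the matching elements
hot = temps[temps > 25]          # [25.1 30.2 28.7]

# Broadcasting: the scalar operations apply to every element at once,
# replacing an explicit loop over the array
fahrenheit = temps * 9 / 5 + 32

print(hot)
print(fahrenheit)
```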
🚀 NumPy Project – From Basics to Real-World Insights!

Excited to share my hands-on project built entirely with NumPy, where I explored how powerful numerical computing can simplify complex data tasks.

🔍 What I covered:
• Understanding NumPy arrays and why they outperform Python lists
• Array creation, slicing, indexing & reshaping
• Mathematical, logical, and statistical operations
• Performance comparison: Python lists vs NumPy
• Applying NumPy to simple real-world data analysis scenarios

This project strengthened my foundation in scientific computing and showed how NumPy accelerates data workflows. A small step toward mastering data analysis and numerical computing in Python!

#NumPy #Python #DataAnalysis #CodingJourney #LearningInPublic #TechSkills #ProjectShowcase
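A lists-vs-NumPy comparison like the one mentioned above can be sketched with `timeit`; the element count and repeat count are arbitrary choices, and absolute timings will vary by machine:

```python
import timeit

import numpy as np

n = 100_000
py_list = list(range(n))
np_arr = np.arange(n)

# Square every element: a Python-level loop vs one vectorized expression
t_list = timeit.timeit(lambda: [x * x for x in py_list], number=20)
t_numpy = timeit.timeit(lambda: np_arr * np_arr, number=20)

print(f"list comprehension: {t_list:.4f}s, NumPy: {t_numpy:.4f}s")
```

The speedup comes from NumPy executing the loop in compiled C over a contiguous buffer, instead of interpreting one Python bytecode iteration per element.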
🚀 Data Science Journey — Session 3 (08/11/2025)

Today's session was all about exploring the core data structures in Python, which play a vital role in data storage, manipulation, and analysis.

We covered:
📘 List – ordered and mutable collection
📗 Tuple – ordered but immutable
📙 Set – unordered collection of unique elements
📒 Dictionary – key-value pair data
💡 Array, Queue, Deque – sequential data structures for efficient operations
📊 DataFrame & Series – core components of the Pandas library for structured data handling

Every concept helped me understand how efficiently data can be organized and processed, an essential step toward mastering Data Science.

#DataScience #Python #LearningJourney #DataStructures #Pandas #Coding #CareerGrowth
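The structures listed above can be shown side by side in one snippet (the values and column names are just examples):

```python
from collections import deque

import pandas as pd

nums = [3, 1, 2]              # list: ordered, mutable
nums.append(4)
point = (10, 20)              # tuple: ordered, immutable
unique = {1, 2, 2, 3}         # set: unordered, duplicates dropped -> {1, 2, 3}
ages = {"Ana": 30, "Bo": 25}  # dictionary: key-value pairs

q = deque([1, 2, 3])          # deque: fast appends/pops at both ends
q.appendleft(0)               # -> deque([0, 1, 2, 3])

s = pd.Series([10, 20, 30], name="score")                    # 1D labeled data
df = pd.DataFrame({"name": ["Ana", "Bo"], "age": [30, 25]})  # 2D table
print(df)
```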
🎯 Day 54 of My Data Analytics Journey

📘 Today, I explored one of the core Python libraries for data handling: NumPy. I learned about some powerful concepts that make data manipulation easy and efficient:

🔹 Reshaping — changing the structure of arrays to different dimensions
🔹 Resizing — adjusting array size dynamically
🔹 Stacking — combining multiple arrays together
🔹 Splitting — breaking arrays into smaller parts
🔹 Working with 1D, 2D, and 3D arrays to understand how data transforms across dimensions

GitHub: 🔗 https://lnkd.in/ggHuzHSh

💡 Each topic helped me understand how NumPy forms the foundation for advanced data analysis and machine learning tasks.

#DataAnalytics #NumPy #Python #LearningJourney #Day54 #DataScience #MachineLearning #ContinuousLearning #RamyaAnalyticsJourney
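The four operations above, sketched with toy arrays (the shapes are arbitrary examples):

```python
import numpy as np

a = np.arange(8)

b = a.reshape(2, 4)       # reshaping: same 8 values, new 2x4 structure
c = np.resize(a, (3, 3))  # resizing: repeats the data to fill 9 slots

x = np.array([1, 2])
y = np.array([3, 4])
v = np.vstack([x, y])     # stacking as rows -> [[1 2], [3 4]]
h = np.hstack([x, y])     # stacking side by side -> [1 2 3 4]

parts = np.split(a, 4)    # splitting into 4 equal pieces of length 2
print(b, v, parts, sep="\n")
```

One subtlety: `reshape` requires the element count to match exactly, while `np.resize` will repeat (or truncate) the data to fit the requested shape.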
Hey everyone 👋

This week I studied NumPy, one of the most important Python libraries for working with numbers and data. At first, arrays felt a bit confusing 😅 — but once I understood how they work, everything started clicking!

Here's what I explored this week 👇
• Creating arrays with simple functions
• Checking array attributes (shape, dimensions, data type)
• Indexing and slicing to access specific parts
• Reshaping arrays into new forms
• Doing math operations easily without loops

Big takeaway: NumPy is like the engine that powers data analysis in Python — it makes everything faster and more efficient!

My Quick Notes:
np.array() → create a new NumPy array
np.arange(6) → generate numbers from 0 to 5
arr.shape → the array's dimensions (e.g. rows & columns)
arr.ndim → how many dimensions the array has
arr.dtype → the element data type (e.g. int, float)
arr[0] → access the first element
arr[1:] → slice from index 1 to the end
arr.reshape(2, 3) → change the array's shape
arr * 2 → multiply every element by 2

Next week, I'm jumping into Pandas to work with real datasets — can't wait!

#Python #NumPy #DataScience #LearningJourney #SelfTaught #100DaysOfCode
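The quick notes above, run as one small script (output values shown only where they are exact; `dtype` varies by platform):

```python
import numpy as np

arr = np.arange(6)                     # [0 1 2 3 4 5]
print(arr.shape, arr.ndim, arr.dtype)  # (6,) 1 <platform int dtype>

print(arr[0], arr[1:])                 # first element, then index 1 onward
m = arr.reshape(2, 3)                  # 2 rows x 3 columns
print(m * 2)                           # elementwise math, no loop required
```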
📊 Data Science Lab Experiment – Data Acquisition with Pandas

In this practical, I explored how to acquire and manage datasets using the Pandas library in Python. I learned to read, load, and preprocess data from multiple sources, a key step before any analysis or model building. It was a great learning experience that deepened my understanding of data handling in real-world scenarios. 💡

🔗 GitHub: https://lnkd.in/dFff8cPb
👨‍🏫 Guided by: Ashish Sawant

#DataScience #Python #Pandas #MachineLearning #StudentProject #DataAcquisition
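A minimal sketch of the read-then-inspect workflow described above. To keep it self-contained, the CSV source is simulated with an in-memory buffer; a real acquisition would pass a filename or URL to `pd.read_csv` instead:

```python
import io

import pandas as pd

# Stand-in for a file on disk; real code would use pd.read_csv("data.csv")
csv_text = "name,score\nAna,90\nBo,85\n"
df = pd.read_csv(io.StringIO(csv_text))

print(df.head())   # quick sanity check right after loading
print(df.shape)    # (2, 2): rows x columns
print(df.dtypes)   # column types inferred during parsing
```

The same pattern extends to other sources: `pd.read_excel`, `pd.read_json`, and `pd.read_sql` all return a DataFrame ready for the same inspection steps.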
📙 Experiment No. 2: Measures of Central Tendency – Mean, Median, and Mode

In this experiment, I explored the core statistical concepts of mean, median, and mode to understand how they represent the central tendency of a dataset. Through a practical implementation in Python, I learned how these measures provide insights into the distribution and balance of data — essential in data analysis and decision-making.

💡 Key Learnings:
💠 Gained hands-on experience in calculating mean, median, and mode programmatically.
💠 Understood the impact of skewed data on each measure of central tendency.
💠 Learned to visualize and interpret datasets using statistical methods.
💠 Strengthened the foundation for advanced data science and analytics applications.

👨‍🏫 Guided by: Sir Ashish Sawant
🔗 Check out the repository here: https://lnkd.in/eqkNZ-BD

#DataScience #Statistics #MachineLearning #Python #CentralTendency #DataAnalysis #MeanMedianMode #LearningByDoing #GitHub #StudentProject #GuidedLearning
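The skew effect mentioned in the key learnings is easy to demonstrate with a made-up right-skewed sample: one large outlier drags the mean upward while the median barely moves.

```python
import numpy as np

# Illustrative salaries (in thousands); the 200 is a deliberate outlier
salaries = np.array([30, 32, 35, 36, 38, 200])

print(np.mean(salaries))    # ~61.83, pulled up by the outlier
print(np.median(salaries))  # 35.5, the middle of the sorted values
```

This is why median income is usually reported instead of mean income: the median is robust to a few extreme values, while the mean is not.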
🔍 Real-world data is messy — and NumPy makes cleaning it easy!

Example: replace missing values with the mean of the non-missing values

import numpy as np

data = np.array([10, 20, np.nan, 40])
data = np.where(np.isnan(data), np.nanmean(data), data)
print(data)

Output → [10. 20. 23.33333333 40.]

💡 NumPy isn't just math — it's a data-cleaning superhero.

#NumPy #Python #DataCleaning #DataScience #MachineLearning #CodingBlockHisar #Hisar
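The snippet above fills a 1D array with its overall mean. For tabular data the same idea applies per column: `np.nanmean` with `axis=0` gives one mean per column, and `np.where` broadcasts it across the rows. A small sketch with made-up values:

```python
import numpy as np

data = np.array([[1.0,    10.0],
                 [np.nan, 20.0],
                 [3.0,    np.nan]])

col_means = np.nanmean(data, axis=0)                # per-column mean, NaNs ignored
filled = np.where(np.isnan(data), col_means, data)  # broadcast across rows

print(filled)  # [[ 1. 10.]  [ 2. 20.]  [ 3. 15.]]
```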