🐍 Day 80 — Sampling and Population
Day 80 of #python365ai 🧪
Population → the entire dataset
Sample → a subset of that data
📌 Why this matters: we usually analyse samples to infer properties of a population.
📘 Practice task: take a small sample from a dataset and compute its mean.
#python365ai #Sampling #Statistics #Python
Sampling vs Population in Python Statistics
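The practice task above can be sketched in a few lines. The population here is synthetic (random exam-style scores), purely for illustration:

```python
import random

# Hypothetical population: 100 exam scores (made-up data for illustration)
random.seed(42)
population = [random.randint(40, 100) for _ in range(100)]

# Draw a simple random sample of 10 scores without replacement
sample = random.sample(population, k=10)

population_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)

print(f"Population mean: {population_mean:.2f}")
print(f"Sample mean:     {sample_mean:.2f}")
```

The sample mean will differ from the population mean from draw to draw; that gap is exactly what inferential statistics quantifies.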
More Relevant Posts
20 ML algorithms and their real-world use cases. One cheat sheet I wish I had when I started. I spent months confusing random forests with decision trees and had no clue when to use XGBoost vs LightGBM. So I made this for myself. Save this and share it with someone who's into data analytics. #machinelearning #datascience #algorithms #python #dataanalyst
Excited to share my latest project: LinearRegression-ML
This is a beginner-friendly machine learning project focused on understanding and implementing linear regression from scratch. It includes practical notebooks like profit analysis and medical data predictions, along with clear explanations of loss and cost functions.
What I learned:
=> Fundamentals of linear regression
=> Cost & loss function implementation
=> Real-world dataset analysis using Python
https://lnkd.in/guCQQdNe
#MachineLearning #Python #JupyterNotebook #DataScience
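The post doesn't show code, but the core idea it describes — fitting f(x) = wx + b by minimising a mean-squared-error cost with gradient descent — can be sketched roughly like this (synthetic data, not the repo's actual notebooks):

```python
import numpy as np

# Synthetic data: y ≈ 3x + 2 plus noise (illustrative, not the repo's dataset)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(0, 1, size=100)

def cost(w, b, x, y):
    """Half mean-squared-error cost J(w, b)."""
    return np.mean((w * x + b - y) ** 2) / 2

# Gradient descent on J(w, b)
w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    err = w * x + b - y          # residuals for the current parameters
    w -= lr * np.mean(err * x)   # dJ/dw
    b -= lr * np.mean(err)       # dJ/db

print(f"w ≈ {w:.2f}, b ≈ {b:.2f}")  # should land near 3 and 2
```

The loss here is the per-example squared error; the cost is its average over the dataset — the distinction the repo's notebooks reportedly explain.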
𝐔𝐧𝐝𝐞𝐫𝐬𝐭𝐚𝐧𝐝𝐢𝐧𝐠 𝐜𝐨𝐫𝐫𝐞𝐥𝐚𝐭𝐢𝐨𝐧𝐬 𝐦𝐚𝐝𝐞 𝐝𝐚𝐭𝐚 𝐚𝐧𝐚𝐥𝐲𝐬𝐢𝐬 𝐦𝐨𝐫𝐞 𝐢𝐧𝐭𝐞𝐫𝐞𝐬𝐭𝐢𝐧𝐠 𝐟𝐨𝐫 𝐦𝐞
While exploring datasets in Python recently, I spent some time understanding how correlation works between variables. Using pandas, it's surprisingly easy to calculate a correlation matrix and see how different columns relate to each other. Sometimes two variables move together strongly, and sometimes there's almost no relationship at all.
What I found interesting is that correlations can quickly highlight patterns that might not be obvious just by looking at raw numbers. Still learning how to interpret these relationships properly, but it's definitely making the analysis process more insightful.
#Python #Pandas #DataAnalytics
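The pandas workflow described above looks like this in practice — a hypothetical dataset with one strongly related pair of columns and one unrelated column:

```python
import numpy as np
import pandas as pd

# Synthetic dataset (illustrative): hours studied drives exam score; shoe size is unrelated
rng = np.random.default_rng(1)
hours = rng.uniform(0, 10, 50)
df = pd.DataFrame({
    "hours_studied": hours,
    "exam_score": 10 * hours + rng.normal(0, 5, 50),  # strong linear relationship
    "shoe_size": rng.uniform(36, 46, 50),             # no relationship
})

# Pearson correlation matrix: values near ±1 mean a strong linear relationship,
# values near 0 mean little or none
corr = df.corr()
print(corr.round(2))
```

A caveat worth keeping in mind while interpreting these: `corr()` measures linear association only, and correlation does not imply causation.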
Today, I focused on working with NumPy arrays. Building a solid foundation for data manipulation and analysis. Here’s what I practiced: 🔹 Created a 1D array with values from 1 to 15 🔹 Built a 2D array (3×4) filled with ones 🔹 Generated a 3×3 identity matrix 🔹 Explored key array properties like shape, type, and dimensions 🔹 Converted a regular Python list into a NumPy array This session helped me better understand how data is structured and handled in numerical computing. Getting comfortable with arrays is definitely a crucial step toward more advanced data analysis and machine learning tasks. Looking forward to building on this momentum 💡 #AI #MachineLearning #Python #NumPy #DataAnalysis #M4ACE
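The exercises listed above can be reproduced in a few lines:

```python
import numpy as np

# 1D array with values from 1 to 15
a = np.arange(1, 16)

# 2D array (3×4) filled with ones
ones = np.ones((3, 4))

# 3×3 identity matrix
eye = np.eye(3)

# Key array properties: shape, dtype, and number of dimensions
print(a.shape, a.dtype, a.ndim)   # shape (15,), 1 dimension
print(ones.shape, ones.ndim)      # shape (3, 4), 2 dimensions

# Converting a regular Python list into a NumPy array
arr = np.array([10, 20, 30])
```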
Day 72. Spent time going deeper into XGBoost today. Covered classification and worked through the math:
- gradients & Hessians
- leaf weights
- similarity score & gain
Some questions I tried to answer while learning:
- Why do we need a Taylor expansion here?
- Why can't we directly differentiate the objective?
- What makes decision trees non-smooth / non-differentiable?
The key realization: since trees produce piecewise constant outputs, the loss surface isn't smooth, which is why the second-order approximation becomes necessary. Still revising, but things are starting to connect.
Notes: https://lnkd.in/gCqHUeK9
#MachineLearning #XGBoost #LearningInPublic #Python #DataScience
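For reference, the quantities mentioned — gradients, Hessians, leaf weights, similarity score and gain — come from the standard XGBoost derivation (textbook notation, not necessarily the linked notes' exact symbols). With first- and second-order derivatives $g_i$ and $h_i$ of the loss, the objective at round $t$ is approximated by a second-order Taylor expansion:

```latex
\mathcal{L}^{(t)} \approx \sum_{i=1}^{n} \Big[ g_i\, f_t(x_i) + \tfrac{1}{2} h_i\, f_t(x_i)^2 \Big]
  + \gamma T + \tfrac{1}{2}\lambda \sum_{j=1}^{T} w_j^2
```

Since $f_t$ is piecewise constant (value $w_j$ on leaf $j$), summing over the examples in each leaf, with $G_j = \sum_{i \in I_j} g_i$ and $H_j = \sum_{i \in I_j} h_i$, gives a quadratic in $w_j$ that can be minimized directly:

```latex
w_j^{\ast} = -\frac{G_j}{H_j + \lambda}, \qquad
\mathrm{Sim}_j = \frac{G_j^2}{H_j + \lambda}, \qquad
\mathrm{Gain} = \frac{1}{2}\left[ \frac{G_L^2}{H_L + \lambda} + \frac{G_R^2}{H_R + \lambda}
  - \frac{(G_L + G_R)^2}{H_L + H_R + \lambda} \right] - \gamma
```

This is exactly why the Taylor expansion is needed: the objective cannot be differentiated with respect to the tree structure itself, but once the structure is fixed, the per-leaf quadratic has a closed-form minimum.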
Stop scrolling if you’ve ever wondered how people actually predict the future with data. I’ve been learning ARIMA forecasting recently, and I mapped out a simple roadmap that made everything click for me. It starts with getting comfortable in Python - Pandas for wrangling, Matplotlib for visualising. Then you move into the core ideas: stationarity, ACF, PACF, and how they shape the model. After that, it’s about building the ARIMA model, validating it properly, and using it to make real‑world predictions. What I enjoy most is how it turns raw, messy data into insights you can genuinely act on. Still learning, but enjoying the process 🚀 #DataScience #TimeSeries #ARIMA #Python #LearningJourney
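As a rough illustration of the stationarity/ACF step in that roadmap (pure NumPy, synthetic random-walk data rather than a real series):

```python
import numpy as np

rng = np.random.default_rng(7)
# A random walk is non-stationary; its first difference is white noise (stationary)
series = np.cumsum(rng.normal(0, 1, 500))
diffed = np.diff(series)

def acf(x, lag):
    """Sample autocorrelation of x at a given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# The raw walk stays highly autocorrelated at lag 1;
# the differenced series does not — differencing is the "I" in ARIMA
print(acf(series, 1))   # close to 1
print(acf(diffed, 1))   # close to 0
```

In a real workflow you would inspect ACF/PACF plots (e.g. via statsmodels) to pick the AR and MA orders, but the differencing intuition is the same.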
📊 Not everything in data science is a finished project; most of it is exploration. This is a small snapshot from my Jupyter Notebook while working through a project. At this stage, it's not about perfect results. It's about:
• Understanding the data
• Trying different approaches
• Visualizing patterns
• Making sense of what's happening underneath
What looks like simple code on the screen is actually a process of trial, error, and discovery.
💡 Key takeaway: before insights come confusion. Before clarity comes experimentation. Every notebook is just a record of how thinking evolves through data.
#DataScience #Python #JupyterNotebook #DataAnalytics #LearningInPublic
In recent years, foundation models have been extensively applied to time series forecasting, with models like TimeGPT and TimesFM gaining significant attention. Kairos is a flexible and efficient foundation model designed to handle the dynamic and heterogeneous nature of real-world data. The model was trained on the PreSTS corpus, comprising 300 billion time points from various domains. Kairos achieves excellent forecasting performance on the GIFT-Eval benchmark while having significantly fewer parameters than comparable models. Check the link for more information and follow me for regular data science content!
𝗞𝗮𝗶𝗿𝗼𝘀 𝗼𝗳𝗳𝗶𝗰𝗶𝗮𝗹 𝘄𝗲𝗯𝘀𝗶𝘁𝗲: https://lnkd.in/dtxjtQvK
𝗟𝗲𝗮𝗿𝗻 𝗠𝗟 𝗮𝗻𝗱 𝗙𝗼𝗿𝗲𝗰𝗮𝘀𝘁𝗶𝗻𝗴: https://lnkd.in/dyByK4F
#datascience #python #deeplearning #forecasting
Day 2 of learning Machine Learning. Today I worked on a simple linear regression model using Python in Jupyter Notebook. The idea was straightforward:
- Input (x): house size
- Output (y): price
Model used: f(x) = wx + b
I understood how:
- Training data is structured (x_train, y_train)
- Parameters (w, b) define the relationship
- The model uses this to make predictions on new inputs
Also got hands-on with NumPy and basic plotting using Matplotlib. Still very early, but it's becoming clearer how data is converted into predictions.
#MachineLearning #AI #Python #LearningInPublic
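The setup above fits in a few lines. The numbers here are made up for illustration (the post's actual notebook values aren't shown), and plotting is omitted:

```python
import numpy as np

# Training data (hypothetical): house size in 1000 sqft → price in $1000s
x_train = np.array([1.0, 2.0])
y_train = np.array([300.0, 500.0])

# Parameters (w, b) define the relationship; with two points they can be
# solved exactly instead of learned
w = (y_train[1] - y_train[0]) / (x_train[1] - x_train[0])  # slope: 200.0
b = y_train[0] - w * x_train[0]                            # intercept: 100.0

def predict(x):
    """The model f(x) = w*x + b."""
    return w * x + b

# Prediction on a new input: a 1200 sqft house
print(predict(1.2))  # ≈ 340.0
```

With more than two training points the line no longer passes through all of them, which is where a cost function and gradient descent come in.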