📶 Experiment 9: K-Nearest Neighbors (KNN) Algorithm using Python 📊

In this lab, I explored the K-Nearest Neighbors (KNN) algorithm — a simple yet powerful instance-based learning technique used for both classification and regression tasks.

🔍 Key learning outcomes:
• Understanding the concept of distance-based classification
• Implementing KNN using scikit-learn
• Choosing the optimal value of K for better accuracy
• Evaluating model performance using various metrics
• Visualizing decision boundaries and classification outcomes

This experiment deepened my understanding of how KNN leverages similarity between data points to make accurate predictions, emphasizing the importance of feature scaling and data normalization.

📁 Explore the repository here: 👉 https://lnkd.in/epWys7e7

#DataScience #MachineLearning #Python #KNN #ScikitLearn #Classification #DataAnalysis #PredictiveModeling #Statistics #LearningJourney #JupyterNotebook

Ashish Sawant Sir
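The workflow above can be sketched with scikit-learn. This is a minimal illustration, assuming the Iris dataset and a simple accuracy-based search over K; the post does not name its dataset or metric:

```python
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Feature scaling matters for distance-based models like KNN
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

# Try several values of K and keep the most accurate one
best_k, best_acc = None, 0.0
for k in range(1, 11):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train_s, y_train)
    acc = accuracy_score(y_test, knn.predict(X_test_s))
    if acc > best_acc:
        best_k, best_acc = k, acc

print(f"best K = {best_k}, test accuracy = {best_acc:.3f}")
```

In practice the K search would use cross-validation rather than the test set, but the structure (scale, fit, score per K) is the same.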
🧠 Day 78 — Scikit-learn Base Jupyter Notebook

Today I learned how to build a simple Machine Learning model using Scikit-learn in Jupyter Notebook. From loading data to saving the trained model — this covered the full ML workflow.

I used the “tips” dataset, prepared the data, trained a Linear Regression model, made predictions, and evaluated it using MAE and R² Score. Finally, I saved the model using pickle for future use.

This practice helped me understand the complete process of creating, testing, and saving an ML model in Python. ✨

#Day78 #MachineLearning #ScikitLearn #Python #DataScienceJourney #LearningEveryday
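A hedged sketch of that workflow. The real post uses the “tips” dataset (which ships with seaborn); here a tiny hand-made stand-in with invented values keeps the example self-contained:

```python
import pickle

import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

# Tiny hand-made stand-in for the "tips" dataset (illustrative values only)
df = pd.DataFrame({
    "total_bill": [16.99, 10.34, 21.01, 23.68, 24.59, 25.29, 8.77, 26.88],
    "tip":        [1.01, 1.66, 3.50, 3.31, 3.61, 4.71, 2.00, 3.12],
})
X, y = df[["total_bill"]], df["tip"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Train, predict, evaluate
model = LinearRegression().fit(X_train, y_train)
pred = model.predict(X_test)
mae, r2 = mean_absolute_error(y_test, pred), r2_score(y_test, pred)
print(f"MAE: {mae:.3f}, R²: {r2:.3f}")

# Save the trained model with pickle, then reload it for reuse
with open("tips_model.pkl", "wb") as f:
    pickle.dump(model, f)
with open("tips_model.pkl", "rb") as f:
    restored = pickle.load(f)
```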
Iris Flower Classification using Machine Learning

Excited to share my latest hands-on project where I trained and tested a Random Forest Classifier on the Iris dataset using Python and scikit-learn!

🔹 The first notebook focuses on quick model training and testing
🔹 The second notebook calculates and verifies accuracy

This project highlights the end-to-end ML workflow — from data preprocessing to model evaluation.

💻 View the complete code and notebooks on my GitHub Repository here: https://lnkd.in/gtyUV7-Z

#MachineLearning #Python #DataScience #ArtificialIntelligence #MLProjects #IrisDataset #ScikitLearn #RandomForest #OpenSource #GitHubProjects
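A minimal version of such a Random Forest run on Iris might look like this; the hyperparameters here are illustrative defaults, not taken from the notebooks:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

# Fit the forest and verify accuracy on the held-out split
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"test accuracy: {acc:.3f}")
```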
🚀 Experiment 4: Handling Missing Values in Data using Pandas 🐼

Missing data — one of the most common (and tricky!) challenges in any dataset. In this lab, I learned how to detect, understand, and treat missing values effectively using Python’s Pandas library.

🔍 What I explored:
• Identifying missing data using functions like isnull() & notnull()
• Cleaning data through imputation, removal, and replacement techniques
• Understanding the impact of missing data on model performance

This hands-on exercise helped me grasp how data preprocessing lays the foundation for reliable analysis and better machine learning outcomes.

📁 Explore the repository here: 👉 https://lnkd.in/epWys7e7

#DataScience #Python #Pandas #DataCleaning #MachineLearning #Statistics #JupyterNotebook

Ashish Sawant sir
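The detection and treatment steps above can be sketched on a small made-up DataFrame (the columns and values are invented for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age":    [25, np.nan, 31, 29, np.nan],
    "salary": [50000, 62000, np.nan, 58000, 61000],
    "city":   ["Pune", "Mumbai", None, "Delhi", "Pune"],
})

# Detect: count missing values per column
print(df.isnull().sum())

# Removal: drop every row that has any missing value
dropped = df.dropna()

# Imputation: fill numeric columns with their mean, categorical with the mode
filled = df.copy()
filled["age"] = filled["age"].fillna(filled["age"].mean())
filled["salary"] = filled["salary"].fillna(filled["salary"].mean())
filled["city"] = filled["city"].fillna(filled["city"].mode()[0])

print(filled)
```

Which strategy is right depends on why the data is missing; mean imputation, for example, can distort variance if many values are absent.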
A mini project on Supervised Learning: I applied it by predicting house prices using the California Housing dataset from Kaggle.

Tools: Python, Pandas, Scikit-learn, Matplotlib

Steps:
- Cleaned and visualized the dataset
- Trained a Linear Regression model
- Evaluated using mean squared error and R² score

Achieved an RMSE of 69,297.72 and visualized predictions vs actual prices.

GitHub: https://lnkd.in/d8CkpV_b

#MachineLearning #DataScience #Python #LearningJourney #AI
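A rough sketch of that pipeline, using synthetic data in place of the Kaggle dataset. The feature names, coefficients, and noise level are invented, so the RMSE will not match the 69,297.72 reported above:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: price ≈ 50k·rooms + 2.5·income + noise (illustrative)
rng = np.random.default_rng(0)
n = 500
rooms = rng.uniform(2, 8, n)
income = rng.uniform(20_000, 120_000, n)
price = 50_000 * rooms + 2.5 * income + rng.normal(0, 30_000, n)
X = np.column_stack([rooms, income])

X_train, X_test, y_train, y_test = train_test_split(
    X, price, test_size=0.2, random_state=0
)
model = LinearRegression().fit(X_train, y_train)
pred = model.predict(X_test)

# RMSE is the square root of mean squared error
rmse = np.sqrt(mean_squared_error(y_test, pred))
r2 = r2_score(y_test, pred)
print(f"RMSE: {rmse:,.2f}, R²: {r2:.3f}")
```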
⚙️ Experiment 10: Support Vector Machine (SVM) using Python 🤖

In this lab, I explored the Support Vector Machine (SVM) algorithm — one of the most robust and widely used supervised learning models for classification and regression tasks.

🔍 Key learning outcomes:
• Understanding the concept of hyperplanes and margins in classification
• Implementing SVM using scikit-learn
• Exploring linear and non-linear (kernel-based) decision boundaries
• Performing hyperparameter tuning for improved accuracy
• Visualizing classification boundaries and model performance

This experiment helped me understand how SVM achieves high accuracy and generalization by optimizing the decision boundary, making it ideal for complex real-world datasets.

📁 Explore the repository here: 👉 https://lnkd.in/epWys7e7

#DataScience #MachineLearning #Python #SVM #Classification #KernelMethods #PredictiveModeling #DataAnalysis #LearningJourney #JupyterNotebook

Ashish Sawant sir
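A compact sketch of scaled SVM training with the kernel and C tuned by cross-validated grid search. The Iris dataset and the parameter grid are assumptions for illustration; the lab's actual data and grid are not stated:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Scale features, then search over linear and RBF kernels and the C penalty
pipe = make_pipeline(StandardScaler(), SVC())
param_grid = {"svc__C": [0.1, 1, 10], "svc__kernel": ["linear", "rbf"]}
grid = GridSearchCV(pipe, param_grid, cv=5)
grid.fit(X_train, y_train)

test_acc = grid.score(X_test, y_test)
print("best params:", grid.best_params_)
print(f"test accuracy: {test_acc:.3f}")
```

Putting the scaler inside the pipeline keeps the grid search honest: each cross-validation fold is scaled using only its own training portion.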
Day 11 – PYTHON VARIABLES 🧠🐍 (My TechRise Cohort 2 journal)

Today in my TechRise Cohort 2 journey, I learned about Python variables — the building blocks of every program! Variables are like containers that hold data, and I explored different data types such as integers, floats, strings, booleans, and even complex numbers. I also practiced data type conversion in Python using simple code examples.

Here’s a quick snippet from my learning:

a = 10
k = float(a)
p = complex(a)
print(k)   # 10.0
print(p)   # (10+0j)

Every new lesson makes Python more exciting and practical for real-world AI and Machine Learning applications. 🚀

#TechRiseCohort2 #Python #AI #MachineLearning #CodingJourney #DigitalSkills
Today, I explored one of the most fundamental algorithms in Machine Learning — Linear Regression. I created a Jupyter Notebook where I implemented Linear Regression from scratch and also using Scikit-learn.

Here’s what I covered:
✅ Understanding the concept of Line of Best Fit
✅ Exploring the relationship between independent and dependent variables
✅ Visualizing data using Matplotlib
✅ Training and testing the model using Scikit-learn

This hands-on project really helped me understand how regression models make predictions based on data.

GitHub: https://lnkd.in/dTRMczDs

📘 Tools used: Python, NumPy, Pandas, Matplotlib, Scikit-learn

#MachineLearning #LinearRegression #DataScience #Python #JupyterNotebook #LearningJourney
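The from-scratch versus scikit-learn comparison can be sketched on synthetic data. The true slope of 3 and intercept of 5 are invented here; both approaches should recover nearly identical coefficients:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data: y = 3x + 5 plus noise (values chosen for illustration)
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 50)
y = 3.0 * x + 5.0 + rng.normal(0, 1, 50)

# From scratch: closed-form least-squares line of best fit
x_mean, y_mean = x.mean(), y.mean()
slope = ((x - x_mean) * (y - y_mean)).sum() / ((x - x_mean) ** 2).sum()
intercept = y_mean - slope * x_mean

# Same fit with scikit-learn (expects a 2D feature array)
model = LinearRegression().fit(x.reshape(-1, 1), y)

print(f"scratch: slope={slope:.3f}, intercept={intercept:.3f}")
print(f"sklearn: slope={model.coef_[0]:.3f}, intercept={model.intercept_:.3f}")
```

Seeing the two agree to many decimal places is a nice check that the hand-derived formula matches what the library does internally.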
📘 Learning NumPy and Vectorization amazed me

You know how in pure Python, if you want to square each number in a list, you have to loop through every element manually? That works — but it’s slow and repetitive. With NumPy, you don’t loop over elements one by one. You apply the operation to the entire array at once.

✅ Fewer lines of code
✅ Faster execution, especially with large datasets
✅ More efficient and readable

This simple concept really shows why NumPy is a foundation for data science and machine learning — performance matters when you're working with thousands or millions of values.

Excited to keep learning 📈

#NumPy #Python #DataScience #Vectorization #MachineLearning #Day11

Moses O. Adewuyi. #15dayswritingconsistencywithmoses
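A small sketch of the loop-versus-vectorized comparison described above; exact timings vary by machine, but the vectorized form is typically much faster at this size:

```python
import time

import numpy as np

nums = list(range(1_000_000))

# Pure Python: square every element with an explicit loop
t0 = time.perf_counter()
squared_loop = [n * n for n in nums]
loop_time = time.perf_counter() - t0

# NumPy: one vectorized operation over the whole array
arr = np.array(nums)
t0 = time.perf_counter()
squared_vec = arr ** 2
vec_time = time.perf_counter() - t0

print(f"loop: {loop_time:.4f}s, vectorized: {vec_time:.4f}s")
```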
🍁 Experiment 7: Simple Linear Regression using Python 🤖

In this lab, I explored the fundamentals of Simple Linear Regression, one of the most widely used techniques in predictive modeling.

🔍 Key learning outcomes:
• Understanding the relationship between independent and dependent variables
• Implementing linear regression using scikit-learn
• Evaluating model performance using metrics like MSE and R²

This experiment enhanced my understanding of how regression helps in predicting continuous outcomes and serves as a foundation for advanced machine learning algorithms.

📁 Explore the repository here: https://lnkd.in/epWys7e7

#DataScience #MachineLearning #Python #ScikitLearn #Statistics #DataAnalysis #PredictiveModeling #LinearRegression #LearningJourney #JupyterNotebook

Ashish Sawant
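One possible minimal version, focused on the MSE and R² evaluation named above. The single-feature data and its true relationship are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Synthetic one-feature data: y = 1.8x + 32 plus noise (illustrative only)
rng = np.random.default_rng(7)
X = rng.uniform(0, 100, (200, 1))
y = 1.8 * X[:, 0] + 32 + rng.normal(0, 5, 200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=7
)
model = LinearRegression().fit(X_train, y_train)
pred = model.predict(X_test)

# Evaluate on held-out data with the two metrics from the post
mse = mean_squared_error(y_test, pred)
r2 = r2_score(y_test, pred)
print(f"MSE: {mse:.2f}, R²: {r2:.3f}")
```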
This week, I built a Python program using NumPy to handle a classic matrix problem:

✅ Create a 2D NumPy array of size 5x4
✅ Generate its transpose
✅ Calculate column-wise mean and row-wise standard deviation
✅ Compute the dot product of the original matrix with its transpose

While it might look simple, this small project taught me how NumPy handles mathematical operations efficiently — and how much power a few lines of code can have when optimized correctly.

Here’s what I enjoyed most:
- Understanding how matrix transposition works at the data structure level
- Seeing mean and standard deviation come to life across axes
- Watching the dot product reveal matrix relationships in a clean, vectorized way

If you’re also learning data manipulation or Python fundamentals, I’d love to connect and discuss ideas! 💬

#Python #NumPy #DataScience #Programming #AI #LearningJourney #TechStudent
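The four steps can be sketched directly; random integers stand in for whatever values the original program used:

```python
import numpy as np

# 5x4 matrix of illustrative random integers
rng = np.random.default_rng(0)
A = rng.integers(1, 10, size=(5, 4))

At = A.T                     # transpose, shape (4, 5)
col_means = A.mean(axis=0)   # column-wise mean: one value per column
row_stds = A.std(axis=1)     # row-wise standard deviation: one value per row
product = A @ At             # dot product with the transpose, shape (5, 5)

print("transpose shape:", At.shape)
print("column means:", col_means)
print("row stds:", row_stds)
print("A @ A.T shape:", product.shape)
```

Note that `A @ A.T` is always a symmetric square matrix, which is a handy sanity check on the result.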