I'm committing to building popular ML algorithms from scratch daily without using anything but Python built-ins and NumPy. No sklearn. No shortcuts. Just pure code and first principles. Day 3: Logistic Regression ✅ Logistic Regression intuition is simple: imagine you're trying to decide whether an email is spam or not. You can't just draw a straight line and predict a number; you need a probability between 0 and 1. That's where the Sigmoid function comes in. It takes any number and squashes it into a value between 0 and 1. Feed it the output of a linear model, and suddenly you have a probability. Cross a threshold, say 0.5, and you have a class label. Same gradient descent as Linear Regression. Just a Sigmoid on top. This is fully open if you want to collaborate, add an algorithm, or drop a suggestion in the comments or issues tab. Feel free to do so. 🤝 👉 GitHub: https://lnkd.in/duTd7jie #MachineLearning #Python #NumPy #DataScience #OpenSource #LearnML #100DaysOfCode #LogisticRegression #Classification
Building Logistic Regression from Scratch with Python and NumPy
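The sigmoid-on-top-of-a-linear-model idea from the post can be sketched in a few lines of NumPy. This is a minimal illustration under the post's stated constraints (NumPy only), not the repo's actual code; the function names and hyperparameters here are my own:

```python
import numpy as np

def sigmoid(z):
    # Squash any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=1000):
    # Gradient descent on the log-loss; weights and bias start at zero
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(epochs):
        p = sigmoid(X @ w + b)        # predicted probabilities
        w -= lr * (X.T @ (p - y)) / n # gradient of log-loss w.r.t. w
        b -= lr * np.mean(p - y)      # gradient w.r.t. b
    return w, b

def predict(X, w, b, threshold=0.5):
    # Probability above the threshold => class 1
    return (sigmoid(X @ w + b) >= threshold).astype(int)
```

Note that the gradient of the log-loss with a sigmoid works out to the same `(prediction - target)` form as Linear Regression's MSE gradient, which is exactly the "same gradient descent, just a Sigmoid on top" point.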
I'm committing to building popular ML algorithms from scratch daily without using anything but Python built-ins and NumPy. No sklearn. No shortcuts. Just pure code and first principles. Day 2: Linear Regression ✅ Linear Regression intuition is simple: imagine you're trying to draw the best possible straight line through a scatter of points on a graph. That line represents the relationship between your input and output. But how do we find the "best" line? That's where Gradient Descent comes in. We start with a random line, measure how wrong it is using the Mean Squared Error, then slowly nudge the line in the direction that reduces the error, repeating this thousands of times until we converge. This is fully open if you want to collaborate, add an algorithm, or drop a suggestion in the comments or issues tab. Feel free to do so. 🤝 👉 GitHub: https://lnkd.in/duTd7jie #MachineLearning #Python #NumPy #DataScience #OpenSource #LearnML #100DaysOfCode #LinearRegression #GradientDescent
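The "start with a line, measure MSE, nudge downhill" loop can be sketched like this in NumPy. Again a hedged sketch, not the repository's implementation; starting from zeros rather than a random line for simplicity:

```python
import numpy as np

def fit_linear(X, y, lr=0.01, epochs=5000):
    # Start with a flat line (w=0, b=0) and nudge it downhill on MSE
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(epochs):
        error = (X @ w + b) - y
        # Gradients of MSE = mean((y_hat - y)^2) w.r.t. w and b
        w -= lr * (2 / n) * (X.T @ error)
        b -= lr * 2 * np.mean(error)
    return w, b
```

Each iteration moves `w` and `b` a small step opposite the gradient; repeated thousands of times, the line converges to the least-squares fit.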
I'm committing to building popular ML algorithms from scratch daily without using anything but Python built-ins and NumPy. No sklearn. No shortcuts. Just pure code and first principles. Day 5: Support Vector Machine (SVM) ✅ SVM intuition is simple: imagine you have two groups of points on a 2D plane and you want to draw a line that separates them. But not just any line, the one with the biggest gap between the two groups. That gap is called the margin. And the data points sitting right on the edge of that margin are the Support Vectors, the only points that actually define where the line goes. Remove any other point, and the line stays the same. SVM finds the maximum margin boundary using the Hinge Loss and gradient descent, penalizing points that end up on the wrong side. This is fully open if you want to collaborate, add an algorithm, or drop a suggestion in the comments or issues tab. Feel free to do so. 🤝 👉 GitHub: https://lnkd.in/duTd7jie #MachineLearning #Python #NumPy #DataScience #OpenSource #LearnML #100DaysOfCode #SVM #SupportVectorMachine #Classification
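The hinge-loss-plus-gradient-descent formulation described above can be sketched as a linear soft-margin SVM in NumPy. A minimal illustration with made-up hyperparameter names, not the repo's code; `lam` is the regularization strength that trades margin width against violations:

```python
import numpy as np

def fit_svm(X, y, lr=0.01, lam=0.01, epochs=1000):
    # y must be in {-1, +1}
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1  # points inside the margin or misclassified
        # Subgradient of  lam*||w||^2 + mean(hinge loss)
        grad_w = 2 * lam * w - (X[mask].T @ y[mask]) / len(y)
        grad_b = -np.mean(y[mask]) if mask.any() else 0.0
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

Note how only points with `margins < 1` contribute to the update, which is the support-vector idea in code: points comfortably outside the margin have zero hinge loss and no influence on the boundary.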
🎉 Setting the ball rolling in Python & Machine Learning! Kicking off my journey by building a Student Performance Prediction App using the UCI dataset 📚 - built with Python & Streamlit for an interactive experience. One thing I learned? You can't rely on just one model. So I trained and compared multiple models: 🔹 Linear Regression 🔹 SVM 🔹 Random Forest 🔹 Gradient Boosting 🔹 XGBoost 🔹 LightGBM Now the big question — how did I evaluate them? 🤔 Here comes 📊 R² Score and 📉 Mean Squared Error (MSE). And the best performer was… Gradient Boosting 🏆 But wait… should users just accept predictions without knowing why? 👀 They do deserve transparency. That's where the explainability heroes step in: 🦸♂️ SHAP – to understand overall feature impact 🦸♀️ LIME – to explain individual predictions 🔗 GitHub Repository: https://lnkd.in/gQSRS9iG Even though it’s a simple application, it helped me understand model training, evaluation, ensemble learning, and most importantly — making ML explainable. Long way to go. Just getting started 🔥 #MachineLearning #Python #ExplainableAI #SHAP #LIME #DataScience #LearningJourney
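The two evaluation metrics the post leans on, R² and MSE, are simple enough to write by hand. A NumPy sketch of what the library versions compute (not the project's code; the function names are mine):

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean Squared Error: average squared gap between truth and prediction
    return np.mean((y_true - y_pred) ** 2)

def r2_score(y_true, y_pred):
    # R² = 1 - (residual sum of squares / total sum of squares)
    # 1.0 means a perfect fit; 0.0 means no better than predicting the mean
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1 - ss_res / ss_tot
```

Comparing models then amounts to computing both metrics on a held-out test set and preferring higher R² and lower MSE.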
Python Tip — And the One Step Before ML The Pandas function I use most that beginners overlook: .value_counts(normalize=True) Instead of raw counts, you get proportions instantly. No extra division. No extra column. But here's why it really matters for ML work: Before you train any model, you need to understand your class distribution. If 95% of your data is label A and 5% is label B, your model will look 95% "accurate" while completely ignoring the thing you actually care about. .value_counts(normalize=True) is usually one of the first things I run on any new dataset. It's a 2-second check that can save you from building a model on a broken foundation. EDA (exploratory data analysis) isn't glamorous. But skipping it is how AI projects fail quietly. #Python #Pandas #MachineLearning #DataScience #EDA
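The 95/5 imbalance scenario from the tip looks like this in practice (the label column here is made up for illustration):

```python
import pandas as pd

# Hypothetical label column for a binary classification task
labels = pd.Series(["A"] * 95 + ["B"] * 5)

# Raw counts: A = 95, B = 5
print(labels.value_counts())

# Proportions: A = 0.95, B = 0.05 — the imbalance is obvious at a glance
print(labels.value_counts(normalize=True))
```

Seeing `0.95 / 0.05` immediately tells you that a model predicting "A" every time would score 95% accuracy while being useless, which is exactly the broken foundation the post warns about.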
I'm committing to building popular ML algorithms from scratch daily without using anything but Python built-ins and NumPy. No sklearn. No shortcuts. Just pure code and first principles. Day 4: Naive Bayes ✅ Naive Bayes intuition is simple: imagine you receive an email with the words "free", "win", and "prize". What's the probability it's spam? That's exactly what Naive Bayes does. It uses Bayes' Theorem to calculate the probability of each class given the input features, and picks the most likely one. The "Naive" part? It assumes all features are independent of each other. That's rarely true in real life, but surprisingly, it still works really well. This is fully open if you want to collaborate, add an algorithm, or drop a suggestion in the comments or issues tab. Feel free to do so. 🤝 👉 GitHub: https://lnkd.in/duTd7jie #MachineLearning #Python #NumPy #DataScience #OpenSource #LearnML #100DaysOfCode #NaiveBayes #Classification
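The spam example above maps naturally onto a Bernoulli Naive Bayes over word-presence features. A hedged NumPy sketch (not the repo's code; Laplace smoothing via `alpha` is a standard addition I've assumed to avoid zero probabilities):

```python
import numpy as np

def fit_nb(X, y, alpha=1.0):
    # X: binary word-presence matrix, y: class labels (0 = ham, 1 = spam)
    classes = np.unique(y)
    priors = {c: np.mean(y == c) for c in classes}
    # P(word present | class), with Laplace smoothing
    likelihoods = {c: (X[y == c].sum(axis=0) + alpha) / ((y == c).sum() + 2 * alpha)
                   for c in classes}
    return classes, priors, likelihoods

def predict_nb(X, classes, priors, likelihoods):
    preds = []
    for x in X:
        scores = {}
        for c in classes:
            p = likelihoods[c]
            # Work in log-space: log P(c) + sum of per-word Bernoulli log-likelihoods
            scores[c] = np.log(priors[c]) + np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))
        preds.append(max(scores, key=scores.get))  # pick the most likely class
    return np.array(preds)
```

The "naive" independence assumption is visible in the `np.sum(...)` line: per-word log-probabilities are simply added, as if each word appeared independently of the others.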
🚀 Exploring Machine Learning with Linear Regression Today I practiced a simple Machine Learning model using Python and Scikit-learn. I implemented Linear Regression to predict prices based on area values. Using Pandas for data handling and Scikit-learn’s LinearRegression, I trained a model with historical data and predicted the price for a new area value (10,000 sq.ft). This small exercise helped me understand: • Data loading using Pandas • Feature selection (dropping target column) • Training a Linear Regression model • Making predictions on new data Step by step, improving my understanding of Machine Learning fundamentals and predictive modeling. #MachineLearning #Python #LinearRegression #DataScience #ScikitLearn #DataAnalytics #LearningJourney
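The steps the post lists (load with Pandas, drop the target column, fit, predict on a new area) can be sketched as below. The data is made up to stand in for the post's CSV; the actual dataset and numbers are not given in the post:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical data standing in for the CSV loaded with Pandas
df = pd.DataFrame({"area": [2600, 3000, 3200, 3600, 4000],
                   "price": [550000, 565000, 610000, 680000, 725000]})

# Feature selection: everything except the target column
X = df.drop(columns=["price"])
y = df["price"]

# Train the model and predict the price for a 10,000 sq.ft area
model = LinearRegression().fit(X, y)
prediction = model.predict(pd.DataFrame({"area": [10000]}))
```

Since 10,000 sq.ft is far outside the training range, the prediction is an extrapolation along the fitted line, which is worth keeping in mind when interpreting it.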
Transforming Categorical Data for Machine Learning 🔄📊 Continuing my Machine Learning learning journey, today I explored One-Hot Encoding, an essential technique used to convert categorical data into numerical format so that machine learning algorithms can process it effectively. Today I implemented One-Hot Encoding using Python and explored how each category is converted into separate binary columns (0s and 1s). For example: Gender_Male → 1 or 0 Gender_Female → 1 or 0 I also explored the Dummy Variable Trap and how using drop='first' helps avoid multicollinearity by removing redundant columns while still preserving the necessary information. Tools used in this exercise: • Python • Pandas • NumPy • Scikit-Learn (OneHotEncoder) • Jupyter Notebook 🖇️GitHub Repository: https://lnkd.in/gXa9zEBs #MachineLearning #DataScience #Python #DataPreprocessing #OneHotEncoding #Pandas #ScikitLearn #LearningJourney
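The Gender example and the dummy variable trap can be shown compactly with pandas' `get_dummies`, whose `drop_first=True` plays the same role as Scikit-Learn's `drop='first'` (the post used `OneHotEncoder`; this is an equivalent sketch, not its code):

```python
import pandas as pd

df = pd.DataFrame({"Gender": ["Male", "Female", "Female", "Male"]})

# Full one-hot: one binary column per category (Gender_Female, Gender_Male)
full = pd.get_dummies(df, columns=["Gender"])

# drop_first=True removes one redundant column to avoid the dummy variable trap:
# Gender_Male alone still encodes both categories (0 means Female)
reduced = pd.get_dummies(df, columns=["Gender"], drop_first=True)
```

Dropping one column loses no information because the remaining columns determine it exactly, and removing that linear dependence is what avoids multicollinearity in models like linear regression.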
𝗖𝗿𝗲𝗮𝘁𝗲 𝗘𝘅𝗽𝗹𝗮𝗶𝗻𝗮𝗯𝗹𝗲 𝗠𝗮𝗰𝗵𝗶𝗻𝗲 𝗟𝗲𝗮𝗿𝗻𝗶𝗻𝗴 𝗠𝗼𝗱𝗲𝗹𝘀! Machine learning models usually have low explainability, which makes it difficult to understand their predictions. This is a serious obstacle in many cases, including industries where black box models are unacceptable. Shapash is a Python library that lets you understand machine learning models by providing an interactive web dashboard. Furthermore, Shapash can also generate reports, making it a particularly useful tool for data scientists and analysts! Visit the link below for more information, and make sure to follow me for regular data science content. 𝗦𝗵𝗮𝗽𝗮𝘀𝗵 𝗹𝗶𝗯𝗿𝗮𝗿𝘆 𝘄𝗲𝗯𝘀𝗶𝘁𝗲: https://lnkd.in/dDiid5Vj 𝗟𝗲𝗮𝗿𝗻 𝗠𝗟 𝗮𝗻𝗱 𝗙𝗼𝗿𝗲𝗰𝗮𝘀𝘁𝗶𝗻𝗴: https://lnkd.in/dyByK4F #datascience #python #deeplearning #machinelearning
Python is still the king of data + AI — but it’s not just about pandas anymore. These tools changed how I build in 2026: 🔸 Polars – 10× faster than pandas, lower RAM. Built on Rust. A no-brainer for big data. 🔸 FastAPI + Pydantic – Blazing fast APIs, auto-validating schemas, and async support. 🔸 Rich + Typer – Want beautiful CLIs? You need these. 🔸 Ruff + Black + Pyrefly – Lint, format, and type-check at warp speed. ⚡ Bonus: Tools like uv and RightTyper make env and typing management effortless. 👉 Python’s ecosystem isn’t just powerful — it’s lightning fast now. 💬 What’s one Python tool you discovered recently that you now can’t live without? #Python #DataScience #DevTools #FastAPI #OpenSource #Productivity
🚀 Built a Machine Learning model that predicts house prices. Most people stay stuck in tutorials. I decided to apply it. Used Linear Regression to train on real housing data, evaluated performance, and saved the model for reuse. 📊 Results: • R² Score: 0.58 • MSE: 0.56 Not perfect, but real learning happens here: building, testing, improving. Pushed the complete project to GitHub 💻 #BuildInPublic #MachineLearning #AIJourney #Python #DataScience #Consistency #KeepLearning