I’m excited to share my first-ever deployed machine learning project: a Heart Disease Prediction App. It’s been a rewarding challenge to move beyond notebooks and build a functional tool that is fun to use.

The Tech Stack:
• Model: K-Nearest Neighbors (KNN) built with Scikit-Learn.
• Interface: Streamlit for the front-end and deployment.
• Data: End-to-end pipeline including cleaning and feature scaling.

Try the app here: https://lnkd.in/gW8qFtmN

#MachineLearning #DataScience #Python #Streamlit #FirstProject
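The core of a KNN app with feature scaling can be sketched in a few lines. This is a minimal, self-contained sketch on synthetic stand-in data, since the post's actual heart-disease dataset isn't shown; the key idea it illustrates is putting the scaler and the classifier in one Pipeline so scaling is always applied consistently.

```python
# Sketch: KNN with feature scaling in a single Pipeline (synthetic data
# stands in for the real heart-disease dataset, which isn't in the post).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=8, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),            # KNN is distance-based, so scaling matters
    ("knn", KNeighborsClassifier(n_neighbors=5)),
])
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
```

A Streamlit front-end would then collect the user's inputs into a feature row and call `model.predict` on it.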
Recently, I worked on a small machine learning project on Fitness Class Attendance Prediction. The goal was to predict whether a member would attend a class, using a complete workflow from raw data to final model evaluation.

The project included:
• cleaning inconsistent data formats
• handling missing values
• encoding categorical variables
• preparing preprocessing pipelines
• training and comparing multiple models

Models I tested: KNN, Decision Tree, SVM, and Naive Bayes.

What I found interesting was that the “best” model depended on how performance was judged:
• Naive Bayes gave the best F1-score on the main split
• SVM gave the highest accuracy
• Decision Tree looked like the most stable option when the test size changed

A good reminder that model selection should not depend on one metric only.

GitHub repo: https://lnkd.in/d8_ADgY5

Projects like this keep showing me how important it is to combine clean data, correct preprocessing, and thoughtful evaluation to reach a solid conclusion.

#MachineLearning #DataAnalytics #Python #ScikitLearn #ClassificationModels #DataScienceProjects
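The multi-model, multi-metric comparison described above can be sketched as follows. Synthetic data stands in for the fitness-class dataset (not shown in the post), but the structure is the point: the same split scored with both accuracy and F1, so the two rankings can disagree exactly as described.

```python
# Sketch: compare several classifiers on more than one metric,
# using synthetic stand-in data for the attendance dataset.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

models = {
    "KNN": KNeighborsClassifier(),
    "Decision Tree": DecisionTreeClassifier(random_state=1),
    "SVM": SVC(),
    "Naive Bayes": GaussianNB(),
}
results = {}
for name, clf in models.items():
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    # judge each model on two metrics, not one
    results[name] = {"accuracy": accuracy_score(y_te, pred),
                     "f1": f1_score(y_te, pred)}
```

Printing `results` makes the accuracy-vs-F1 disagreement easy to spot at a glance.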
🚀 Turning Learning into Building!

After learning Supervised Machine Learning, I have built a real-world project: a Heart Disease Prediction System. 🔍 This project predicts the risk of heart disease based on user input and provides real-time results through an interactive web app.

🛠️ Tech Stack:
• Python
• Scikit-learn
• Pandas
• Streamlit

✨ What I learned:
• Data preprocessing & feature engineering
• Model building (KNN)
• Working with real-world datasets
• Deploying ML models using Streamlit

This project marks an important step in my journey as I move forward into more advanced Machine Learning concepts.

🚀 Live Demo: https://lnkd.in/dyVvWQrr
GitHub: https://lnkd.in/dRxQBYmZ

Feel free to explore and share your feedback!

#MachineLearning #Python #DataScience #Streamlit #LearningJourney #BuildInPublic
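One deployment detail worth calling out: a Streamlit app usually doesn't retrain the model on every page load; the model is trained once, serialized, and loaded inside the app script. A minimal sketch of that handoff, assuming plain pickle and a stand-in dataset (the post doesn't show how the real app does it):

```python
# Sketch: train once, serialize, and restore the model the way a
# Streamlit app script would load it (file name would be e.g. model.pkl).
import pickle

from sklearn.datasets import load_breast_cancer
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)

blob = pickle.dumps(clf)          # in practice: pickle.dump(clf, open("model.pkl", "wb"))
restored = pickle.loads(blob)     # in the app: pickle.load(open("model.pkl", "rb"))
same = (restored.predict(X[:5]) == clf.predict(X[:5])).all()
```

The restored model produces the same predictions as the original, which is what makes the train-once, serve-many pattern safe.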
Improved my model on the Titanic dataset 🚢 (Kaggle)

What I initially thought was a “beginner-friendly dataset” turned out to be a great lesson in how small improvements can make a big difference. After my first submission, I went back and focused on improving the basics:
• Better use of previously removed columns
• Feature engineering (extracting useful info from existing columns and mapping categorical values to numbers correctly)
• Refining the model instead of just switching algorithms: tuned the random forest with some custom helper functions

And the result? Noticeable improvement in performance 📈

Biggest takeaway: it’s not always about using complex models; understanding your data and refining it step by step matters more.

Have a great day guys, please stay safe and healthy, keep grinding, and just do your best. Bye 😊 Still learning, still experimenting, and still improving 🚀

#Kaggle #DataScience #MachineLearning #Python #DataAnalytics #FeatureEngineering #ModelImprovement #DataPreprocessing #LearningJourney #GrowthMindset #DataScienceProjects #BuildInPublic
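"Extracting useful info from existing columns" on Titanic usually means things like pulling the title out of the Name field and mapping categorical values to numbers. A small illustrative sketch on hand-made rows (the post doesn't show its exact features, so the columns here are just the well-known Titanic ones):

```python
# Sketch: two classic Titanic feature-engineering steps on tiny
# hand-made rows, not the real Kaggle data.
import pandas as pd

df = pd.DataFrame({
    "Name": ["Braund, Mr. Owen Harris",
             "Cumings, Mrs. John Bradley",
             "Heikkinen, Miss. Laina"],
    "Sex": ["male", "female", "female"],
})

# pull the title (text between the comma and the period) into its own column
df["Title"] = df["Name"].str.extract(r",\s*([^.]+)\.", expand=False)
# map the categorical Sex column to numbers the model can use
df["Sex"] = df["Sex"].map({"male": 0, "female": 1})
```

The Title column often carries signal (age and social status) that the raw Name column hides from the model.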
Inspired by Ron Kohavi’s and Evan Miller’s practical experimentation principles and tooling, I built a simple A/B Test Sample Size Calculator with Plotly Dash + statsmodels.

Repo: https://lnkd.in/gkNvnkin

Built quickly through vibe coding, but grounded in solid stats: a fixed-horizon two-sample z-test for proportions, alpha/power controls, effect-size views, and runtime estimation.

After years in experimentation, this automates a planning task I’ve repeated across many products and domains. An example of what experience plus fast execution with AI assistance can produce.

#ABTesting #Experimentation #ProductAnalytics #Growth #DataScience #OpenSource #Python #Plotly #Dash #Statsmodels #VibeCoding
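The repo's implementation isn't shown here, but the core calculation behind such a calculator is compact. This is a stdlib-only sketch of the standard per-group sample-size formula for a fixed-horizon two-sample z-test on proportions (the repo itself uses statsmodels, which packages the same idea):

```python
# Sketch: per-group sample size for a two-sided, two-sample z-test
# on proportions, using only the standard library.
from math import ceil, sqrt
from statistics import NormalDist

def sample_size(p1, p2, alpha=0.05, power=0.8):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_b = NormalDist().inv_cdf(power)           # power requirement
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

n = sample_size(0.10, 0.12)   # detect a 2pp lift from a 10% baseline
```

For a 10% baseline and a 2 percentage-point lift at alpha 0.05 and 80% power, this lands in the high-3000s per group, which is why the "runtime estimation" view in such tools matters for planning.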
Theory is great, but building is better. 🛠️ Let’s code some machine learning models!

Class 2 of my AI & ML series is officially live on YouTube. In this session, we tackle the bread and butter of data science: Python, Pandas, Scikit-Learn, and Google Colab.

🔗 Link to the video in the comments below!

By the end of the video, you won't just know the syntax; you will have built two real-world projects:
🌸 An Iris Flower Classifier using K-Nearest Neighbors (KNN)
📈 A Study Hours predictor using Linear Regression

If you want to transition into AI, or just want to understand how these models actually work under the hood, check out the full session! Don't forget to grab the free Colab notebook in the description so you can code along with me.

#MachineLearning #Python #DataScience #ArtificialIntelligence #GoogleColab #LearnToCode
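To give a flavor of the first project, here is a condensed Iris KNN classifier using scikit-learn's built-in copy of the dataset. This is a sketch of the standard approach, not the notebook from the video:

```python
# Sketch: Iris flower classification with K-Nearest Neighbors,
# the classic first scikit-learn project.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
accuracy = knn.score(X_test, y_test)   # fraction of test flowers classified correctly
```

The Study Hours project swaps `KNeighborsClassifier` for `LinearRegression` and a continuous target, but the fit/predict workflow is identical.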
🚀 Day 45 of My Learning Journey – NumPy Shape & Reshape

Today, I explored how to work with array dimensions using NumPy, focusing on shape and reshape.

🔹 Key Learnings:
✔️ shape: identifies the dimensions of an array. Example: (3, 2) → 3 rows and 2 columns.
✔️ Modifying shape: we can directly change the structure of an array, which is useful when reorganizing data.
✔️ reshape(): returns a new array with a different shape, does NOT modify the original array, and is very helpful in data preprocessing.

🔹 Hands-on Task Completed: converted a list of 9 elements into a 3×3 matrix using NumPy.

💡 Takeaway: understanding how to manipulate array dimensions is essential for data analysis, machine learning, and efficient problem-solving.

📌 Every small concept builds a stronger foundation!

#Day45 #Python #NumPy #LearningJourney #DataScience #Coding #StudentLife
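The hands-on task above fits in three lines:

```python
# Sketch: turn a flat list of 9 elements into a 3x3 matrix.
import numpy as np

a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])
m = a.reshape(3, 3)   # new array object with shape (3, 3), often a view of the same data

# the original array keeps its 1-D shape: reshape does not modify it in place
```

One nuance: `reshape` usually returns a view sharing memory with the original, so writing into `m` can change `a`; the shapes, however, stay independent.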
Most beginner datasets are either too simple… or too unrealistic. So I decided to create one.

I built a dataset around student life: study habits, sleep patterns, social media usage, and exam performance.
👉 https://lnkd.in/ejXkATfr

The idea was simple: make something beginner-friendly, but still useful for real analysis. You can:
• Predict exam scores
• Explore behavior patterns
• Build and test ML models

No complex setup. Just clean, usable data.

I’ve also added a small challenge: build something with it and share your results. Curious to see what you create.

#DataScience #MachineLearning #Kaggle #Python #DataProjects #LearningJourney #BeginnerFriendly
📊 Project Showcase: Student Performance Predictor

Developed a machine learning model to predict student academic performance using features like study time, absences, and parental support.

🔧 Implementation:
• KNN Algorithm
• Data preprocessing & scaling
• Model deployment using Flask
• Frontend integration with React

This project demonstrates an end-to-end ML workflow, from data to deployment.

🔗 GitHub Repository: https://lnkd.in/dkwmXV-n

#DataScience #MachineLearning #AI #Python #ProjectShowcase
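The Flask side of such a deployment is typically a single JSON endpoint the React frontend posts features to. A minimal sketch, with a stub function standing in for the trained KNN model so the example is self-contained (the route name, feature names, and labels here are illustrative, not from the repo):

```python
# Sketch: a /predict endpoint of the kind a React frontend would call.
# stub_model is a placeholder for the real trained KNN.
from flask import Flask, jsonify, request

app = Flask(__name__)

def stub_model(features):
    # stand-in rule; the real app would call model.predict on a feature row
    return "pass" if features.get("study_time", 0) >= 5 else "at_risk"

@app.route("/predict", methods=["POST"])
def predict():
    features = request.get_json()
    return jsonify({"prediction": stub_model(features)})

# exercised with Flask's built-in test client instead of a running server
client = app.test_client()
resp = client.post("/predict", json={"study_time": 7, "absences": 2})
result = resp.get_json()
```

Keeping the model behind one endpoint lets the React app stay a thin client that only knows the JSON contract.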
Day 19 of my Data Science journey, and I finally stopped Googling the same sklearn functions every single day.

Here's the truth nobody tells you when you start: you don't need 10 different libraries to build a complete ML pipeline. You need ONE.

scikit-learn does it ALL:
-> Preprocessing your messy data
-> Splitting train/test sets
-> Training 20+ algorithms (classification, regression, clustering)
-> Evaluating your model with the right metrics
-> Tuning hyperparameters without data leakage
-> Packaging the whole thing into one Pipeline object

And the best part? Almost everything follows the same small pattern of methods: .fit(), .transform(), and .predict(). Learn that pattern. Everything else is just syntax.

I built this straight from the official scikit-learn docs, so every function, every method, every example is production accurate.

Save it 👇

#100DaysOfCode #DataScience #MachineLearning #ScikitLearn #Python #MLEngineer #DataScienceJourney #LearningInPublic #Day19
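The bullets above, as one runnable sketch: preprocessing, model, and hyperparameter tuning bundled into a single Pipeline, so cross-validation re-fits the scaler inside each fold and the test split never leaks into preprocessing.

```python
# Sketch: preprocessing + model + tuning in one Pipeline object,
# the leakage-free pattern the post describes.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = Pipeline([("scale", StandardScaler()),
                 ("knn", KNeighborsClassifier())])
search = GridSearchCV(pipe, {"knn__n_neighbors": [3, 5, 7]}, cv=5)
search.fit(X_tr, y_tr)          # scaler is re-fit inside every CV fold
score = search.score(X_te, y_te)
```

The `step__parameter` naming (`knn__n_neighbors`) is how the grid search reaches inside the Pipeline; every scikit-learn estimator exposes the same `.fit`/`.transform`/`.predict` surface, which is what makes this composition work.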
Excited to share one of my portfolio projects: an AI-Powered Medical Symptom Checker built with Python, scikit-learn, and Gradio.

This app allows users to select symptoms and get:
• Top 3 likely disease predictions
• Confidence scores
• Basic precautions
• A safety disclaimer
• A clean, professional UI with dark mode

I built this project to strengthen my skills in machine learning, data preprocessing, model integration, and user-focused interface design.

GitHub: https://lnkd.in/gCZkUFuj

#Python #MachineLearning #AI #Gradio #ScikitLearn #HealthcareAI #GitHub #PortfolioProject
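The "top 3 predictions with confidence scores" piece usually comes straight from `predict_proba`. A sketch with a stand-in classifier and dataset (the real app's disease model and Gradio wrapper aren't shown in the post):

```python
# Sketch: top-3 class predictions with confidence scores via predict_proba,
# on a stand-in dataset rather than the app's disease data.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
names = load_iris().target_names
clf = GaussianNB().fit(X, y)

probs = clf.predict_proba(X[:1])[0]          # one probability per class
top3 = np.argsort(probs)[::-1][:3]           # indices of the 3 highest
report = [(names[i], round(float(probs[i]), 3)) for i in top3]
```

A Gradio interface would then render `report` as the ranked list the user sees, alongside the precautions and disclaimer text.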