Built a Machine Learning API using FastAPI

I developed a machine learning-based API that predicts salary from a user-supplied position level. The full project, including the trained model behind the API, is on GitHub.

GitHub: https://lnkd.in/gR_qsxwM

🔹 Implemented machine learning algorithms and integrated them with FastAPI
🔹 Enabled real-time prediction through the API based on user input
🔹 Designed RESTful endpoints for seamless interaction
🔹 Stored and retrieved prediction data dynamically

💡 This project demonstrates how ML models can be deployed and consumed through APIs in real-world applications.

Tech Stack: Python, FastAPI, scikit-learn

#MachineLearning #FastAPI #Python #DataScience #AI #BackendDevelopment #MLProjects
Machine Learning API with FastAPI and scikit-learn
Most people jump directly into Machine Learning models. I almost did the same. But then I realized something: without strong fundamentals, everything in ML becomes confusing.

So instead of rushing into algorithms, I'm currently focusing on:
• Data Structures & Algorithms (for problem-solving)
• Probability & Statistics (to actually understand models)
• Python fundamentals (clean implementation matters)

Because in the long run, understanding why something works is more powerful than just knowing how to use it.

Now I'm building my learning step by step, and documenting it along the way.

Curious to know: how did you approach learning ML?

#DataScience #MachineLearning #Python #DSA #LearningInPublic
Scikit-Learn Cheat Sheet Every ML Beginner Must Save

If you're learning Machine Learning with Python, mastering Scikit-Learn is non-negotiable. It's one of the most widely used libraries for building, training, and evaluating ML models. Here's a quick cheat sheet covering the most commonly used functions 👇

• Data Splitting --> Used for splitting your dataset into training and testing sets and performing robust validation.
• Preprocessing --> Essential for handling missing values, encoding categories, and scaling features.
• Model Building --> These are the most common baseline models used in interviews and real-world projects.
• Model Evaluation --> Always evaluate before deployment.
• Hyperparameter Tuning --> Critical for improving model performance.
• Pipelines --> A must-know concept for production-ready ML workflows.
• Dimensionality Reduction --> Used to reduce features and improve efficiency.

Tip: If you know preprocessing + model training + evaluation + GridSearchCV + Pipeline, you already know 80% of what's needed for ML interviews.

Save this for your next project. Which library should I create next? Pandas / TensorFlow / PyTorch

#ScikitLearn #MachineLearning #Python #DataScience #ArtificialIntelligence #MLInterview #DataAnalytics #AI
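A minimal sketch tying those cheat-sheet pieces together in one workflow: split, preprocess, build, tune, evaluate. The dataset, model, and grid values below are illustrative choices, not from the original cheat sheet:

```python
# Sketch of the full cheat-sheet workflow: split -> pipeline -> tune -> evaluate.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Data splitting: hold out a test set for the final evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Pipeline: preprocessing and model as one estimator (production-friendly,
# and it prevents test data leaking into the scaler).
pipe = Pipeline([("scaler", StandardScaler()),
                 ("clf", LogisticRegression(max_iter=1000))])

# Hyperparameter tuning: cross-validated grid search over the pipeline.
grid = GridSearchCV(pipe, {"clf__C": [0.1, 1.0, 10.0]}, cv=5)
grid.fit(X_train, y_train)

# Model evaluation: always on held-out data, never on the training set.
acc = accuracy_score(y_test, grid.predict(X_test))
print(f"best C: {grid.best_params_['clf__C']}, test accuracy: {acc:.3f}")
```

Note the `clf__C` naming: grid-search parameters for a pipeline step are addressed as `stepname__parameter`.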
We are introducing HDMRLib, our open-source Python library for HDMR and EMPR. HDMRLib provides a unified workflow for decomposition, component analysis, and lower-order reconstruction, with support for NumPy, PyTorch, and TensorFlow, making it easy to integrate into modern deep learning workflows.

Who might find it useful?
• Researchers working on high-dimensional tensors and multivariate functions
• People interested in interpretable decomposition and interaction analysis
• Users who want to work across different numerical backends with a consistent API
• Scientific computing and machine learning practitioners looking for a research-oriented, open-source tool

If you're working in machine learning, high-dimensional modeling, or related areas, feel free to explore it, use it, and share your feedback. Getting started is simple:

👉 pip install hdmrlib

We have also submitted HDMRLib to JMLR MLOSS and would be very happy to hear feedback from the community. If you find it useful, we would truly appreciate your support by giving the repository a ⭐ on GitHub.

GitHub: https://lnkd.in/dB9uigVb
Documentation: https://lnkd.in/d5jD-Fxc

Looking forward to your thoughts and discussions! Muhammed Enis Sen, Buğra Eyidoğan, Süha Tuna

#OpenSource #Python #MachineLearning #ScientificComputing #PyTorch #TensorFlow #NumPy #ResearchSoftware
I am very happy and proud to share that, together with my brilliant students and colleagues Pinar Yalçın Güler, Muhammed Enis Sen, and Buğra Eyidoğan, we have established HDMRLib, a tensor decomposition library for Python.

This project reflects our collective research efforts and aims to provide practical, scalable implementations for high-dimensional modeling and tensor-based methods. We will continue to actively improve the library and expand it with new features.

Please feel free to use it; we would greatly appreciate your thoughts, feedback, and suggestions.

GitHub: https://lnkd.in/dJU6yvVS
Documentation: https://lnkd.in/d5gqQMwe
✨ A New Beginning in My AI/ML Journey

As part of the Industry Immersion Program by MeetMux, Day 3 marked my transition from setup to execution.

🔹 What I tackled today: Built a basic data pipeline using Python, NumPy, and Pandas, focusing on how data is processed, structured, and analyzed.

🔹 What I learned: The concept of vectorization in NumPy — instead of using loops, operations can be applied to entire datasets at once, making computations significantly faster. This is a core technique used in real-world AI systems.

🔹 My goal: To continue building a strong foundation in data handling and move towards implementing real-world machine learning models by the end of this week.

🔗 My Work (GitHub): https://lnkd.in/gQNYJ8ce

#AI #MachineLearning #Python #NumPy #Pandas #IndustryImmersion #LearningInPublic #MeetMux
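The vectorization idea described above can be sketched as follows: the same computation written twice, once as a Python-level loop and once as a single NumPy expression that runs in optimized C inside the library (the toy operation is illustrative):

```python
import numpy as np

data = np.random.default_rng(0).random(1_000_000)

# Loop version: one Python-level operation per element (slow).
loop_result = np.empty_like(data)
for i in range(len(data)):
    loop_result[i] = data[i] * 2.0 + 1.0

# Vectorized version: one expression over the whole array, executed
# in compiled code inside NumPy — typically orders of magnitude faster.
vec_result = data * 2.0 + 1.0

# Both produce the same values.
assert np.allclose(loop_result, vec_result)
```

Timing with `timeit` shows the gap clearly on arrays of this size; the vectorized form also tends to read closer to the underlying math.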
PyCaret is a low-code Python library that makes machine learning much faster and easier. With just a few lines of code, you can handle preprocessing, compare models, and tune performance in a single workflow. It supports tasks like classification, regression, clustering, and time-series analysis, making it a practical choice for many real-world projects.

The book Simplifying Machine Learning with PyCaret by Giannis Tolios is currently available for free: https://lnkd.in/eVFjfGKQ

The book guides you step by step through typical PyCaret use cases, from setting up experiments to building, evaluating, and deploying models. It includes practical examples and clear explanations to help you apply PyCaret effectively in real projects. If you want a structured and hands-on introduction to PyCaret, this is a great resource.

#machinelearning #python #datascience #ai #pycaret #lowcode #mlworkflow #datatools #analytics #statistics
🚀 Excited to share my latest project: Student Performance Prediction System

I built a Machine Learning web application that predicts student performance based on various academic and demographic factors.

🔍 Key Highlights:
• End-to-end ML pipeline (data preprocessing → training → prediction)
• Built using Flask for deployment
• Clean and interactive UI
• Model serialization using dill

🌐 Live Demo: https://lnkd.in/gGBekFvt

💻 Tech Stack: Python, Scikit-learn, Pandas, NumPy, Flask

This project helped me strengthen my understanding of real-world ML deployment and pipeline design. I would love your feedback and suggestions! 🙌

#MachineLearning #DataScience #Python #Flask #AI #StudentProject #MLProjects
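The post doesn't include code, but the model-serialization step is worth sketching: the project uses dill, which follows the same dump()/load() interface as the standard-library pickle used in this toy stand-in. The `PerformanceModel` class and its scoring rule are hypothetical, standing in for the real trained scikit-learn pipeline:

```python
# Toy sketch of serializing a trained model so a Flask app can load it at startup.
# The post's project uses dill; its dump()/load() API mirrors pickle's.
import io
import pickle

class PerformanceModel:
    """Hypothetical stand-in for the trained scikit-learn pipeline."""
    def predict(self, hours_studied):
        # Toy rule in place of the real regression model.
        return min(100.0, 35.0 + 6.5 * hours_studied)

# Serialize the "trained" model (in practice: to a .pkl file on disk).
buffer = io.BytesIO()
pickle.dump(PerformanceModel(), buffer)

# Later, inside the web app: deserialize once and reuse for every request.
buffer.seek(0)
model = pickle.load(buffer)
score = model.predict(8)
print(score)  # 87.0
```

dill is often chosen over pickle for ML pipelines because it can serialize objects pickle cannot, such as lambdas and locally defined functions inside custom transformers.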
🌳 Today I Learned & Implemented: Random Forest

Today I worked on the Random Forest algorithm and implemented it in Python as part of my machine learning journey.

🔍 Random Forest is an ensemble learning technique that builds multiple decision trees and combines their outputs to improve prediction accuracy and reduce overfitting.

💡 Key Learnings:
• How multiple decision trees work together (bagging)
• The difference between a single decision tree and a Random Forest
• Model training, prediction, and evaluation
• The importance of reducing overfitting in ML models

🧠 What I Did:
✔️ Built a Random Forest model using Python
✔️ Trained and tested it on a dataset
✔️ Evaluated performance using accuracy metrics

📂 Project Link: https://lnkd.in/gjFfNV5H

Excited to explore more advanced ML algorithms and improve model performance 🚀

#MachineLearning #RandomForest #Python #DataScience #AI #LearningJourney
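A minimal sketch of the workflow described — a single decision tree next to a bagged forest of 100 trees on synthetic data, so the single-tree vs. Random Forest comparison is visible. The dataset and parameters are illustrative, not from the linked project:

```python
# Sketch: single decision tree vs. Random Forest (bagged ensemble of trees).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification dataset for illustration.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# One tree, and a forest of 100 trees each trained on a bootstrap
# sample of the data (bagging) with random feature subsets per split.
tree = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)

tree_acc = accuracy_score(y_test, tree.predict(X_test))
forest_acc = accuracy_score(y_test, forest.predict(X_test))
print(f"single tree: {tree_acc:.3f}, random forest: {forest_acc:.3f}")
```

On most datasets the forest generalizes better than any single tree, because averaging many decorrelated trees reduces variance — the overfitting reduction the post mentions.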
📘 New Release from Deepsim Press

We are pleased to announce the publication of: Practical Data Modeling and Machine Learning with Python — From Data Preparation to Model Evaluation and Optimization

This book presents a structured and practical approach to data modeling, emphasizing the complete workflow, from feature engineering and statistical modeling to machine learning, evaluation, and optimization. Rather than focusing on isolated techniques, it highlights how to build models that are reliable, interpretable, and applicable in real-world scenarios.

Key topics include:
• Data preparation and feature engineering
• Regression and classification models
• Ensemble methods and model improvement
• Validation strategies and evaluation metrics
• Hyperparameter tuning and model optimization
• Model interpretation and explainability

This title is part of the Practical Data Science with Python series, designed to guide readers from foundational analysis to advanced modeling and real-world applications.

📖 Available now: https://lnkd.in/gFFnegZH

#DataScience #MachineLearning #Python #AI #Analytics #DataModeling
Smarter LLM outputs don't always require bigger models.

Introducing Inferscale 0.1.1, a lightweight Python package that improves response quality using inference-time scaling techniques. Designed for developers who want better results without increasing compute costs, Inferscale focuses on practical gains you can integrate quickly into your workflows.

If you're building AI products, experimenting with prompts, or optimizing pipelines, this is worth exploring.

Check out the README: https://lnkd.in/giq8KJ5g

Let's make LLMs more efficient together.

#AI #LLM #Python #GenAI #MachineLearning #OpenSource #AIDev
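The post doesn't document Inferscale's API, so the following is not the package's code — just a toy, model-free illustration of one common inference-time scaling technique of the kind the post describes: best-of-N sampling, where extra compute at inference buys quality from the same underlying model. Every name here is hypothetical:

```python
# Toy illustration of best-of-N inference-time scaling (NOT the Inferscale API).
import random

def generate(prompt, rng):
    """Stand-in for an LLM call: returns one sampled candidate answer."""
    return rng.choice(["4", "5", "4", "four", "4"])

def score(prompt, answer):
    """Stand-in quality scorer (in practice: a reward model or a
    self-consistency vote across candidates)."""
    return 1.0 if answer == "4" else 0.0

def best_of_n(prompt, n=8, seed=0):
    # Sample n candidates and keep the highest-scoring one:
    # more inference-time compute, same underlying model.
    rng = random.Random(seed)
    candidates = [generate(prompt, rng) for _ in range(n)]
    return max(candidates, key=lambda a: score(prompt, a))

print(best_of_n("What is 2 + 2?", n=20))
```

The trade-off is linear cost in N per query; related techniques (self-consistency voting, verifier-guided reranking) follow the same sample-then-select shape.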