I got tired of writing the same ML boilerplate over and over. So I built a full AutoML platform from scratch this weekend.

Here's what it does:
↳ Upload any CSV dataset
↳ Auto-detects classification vs. regression
↳ Preprocesses data automatically (encoding, scaling, imputation)
↳ Trains 4 models with GridSearchCV hyperparameter tuning
↳ Picks the best model and explains WHY using SHAP
↳ Shows live training progress via WebSockets

And it's not a Jupyter notebook or a Streamlit script. It's a proper full-stack product:
⚛️ React frontend with glassmorphism UI
⚡ FastAPI backend with REST + WebSocket API
🐳 Fully containerised with Docker Compose
🧠 scikit-learn + SHAP for ML and explainability

One command to run everything: docker compose up --build

This is the kind of tool I wish existed when I started in ML. Building things that solve real problems is what I love doing.

#MachineLearning #Python #React #FastAPI #Docker #MLOps #OpenToWork #FullStack #DataScience
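The post doesn't show how the classification/regression auto-detection works; here is a minimal sketch under one common heuristic (non-numeric targets, or a small set of integer-valued labels, count as classification — the threshold and rule are my assumptions, not the platform's actual logic):

```python
import pandas as pd

def detect_task(y: pd.Series, max_classes: int = 20) -> str:
    """Guess whether a target column calls for classification or regression."""
    if not pd.api.types.is_numeric_dtype(y):
        return "classification"   # strings/categories are class labels
    vals = y.dropna()
    if vals.nunique() <= max_classes and (vals == vals.round()).all():
        return "classification"   # small set of integer-valued labels
    return "regression"           # continuous numeric target

print(detect_task(pd.Series(["cat", "dog", "cat"])))  # classification
print(detect_task(pd.Series([1.5, 2.7, 3.14])))       # regression
```

A real system would layer more checks on top (explicit user override, label cardinality relative to row count), but this is the shape of the decision.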
The problem: building an ML pipeline means writing hundreds of lines of Python, managing scikit-learn/XGBoost/pandas dependencies, configuring APIs for Slack/Teams/Email, and debugging data transformations.

My solution: FlowCraft, a no-code visual workflow engine.
→ Drag a "File Upload" node, connect it to "Data Cleaner", pipe into "Random Forest", wire to "Evaluate", and hit Execute.
→ Need to compare models? Run Random Forest, XGBoost, and SVM in parallel, then auto-select the best.
→ Want to notify your team? Connect a Slack or Teams node at the end.
→ Don't know where to start? The AI Copilot generates the entire workflow from a sentence.

Built with React 19, FastAPI, scikit-learn, and ReactFlow. 100+ node types across 26 categories.

Would love your feedback!

#DataScience #MachineLearning #AI #Python #React #NoCode #Automation #WorkflowAutomation
🧠 Built a Handwritten Digit Recognizer from scratch using a CNN + Flask!

Draw digits on a canvas → send them to a Flask API → the CNN predicts in real time.

Here's the full stack I used:
🔹 TensorFlow/Keras: CNN trained on MNIST (~99% accuracy)
🔹 Flask: REST API backend with base64 image decoding
🔹 HTML5 Canvas + vanilla JS: zero frameworks, pure frontend

The model architecture is simple but powerful:
Conv2D → MaxPool → Conv2D → MaxPool → Dense → Softmax

Only 5 epochs to hit ~99% test accuracy.

This project taught me how deep learning models actually go from training to deployment: not just a notebook, but a working web app.

🔗 GitHub: https://lnkd.in/gtRAUCMs

#DeepLearning #CNN #TensorFlow #Flask #MachineLearning #ComputerVision #Python #Digit #DataScience
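The base64 decoding step the post mentions looks roughly like this on the server side — a stdlib-only sketch assuming the canvas sends a `toDataURL()`-style data URL (the function name is mine; the project's actual handler isn't shown):

```python
import base64

def decode_data_url(data_url: str) -> bytes:
    """Strip the 'data:image/png;base64,' header and decode the payload."""
    header, _, payload = data_url.partition(",")
    if "base64" not in header:
        raise ValueError("expected a base64-encoded data URL")
    return base64.b64decode(payload)

# An HTML5 canvas sends something like "data:image/png;base64,iVBORw0KG...".
sample = "data:image/png;base64," + base64.b64encode(b"\x89PNG").decode()
print(decode_data_url(sample)[:4])  # b'\x89PNG'
```

From there the bytes go through image decoding, resizing to 28×28, and normalization before hitting the model.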
🎬 Built a Production-Ready Movie Recommendation System using Machine Learning

Over the past few weeks, I worked on building an end-to-end recommendation system that goes beyond basic models and focuses on real-world performance and usability.

🔹 What I built:
• Hybrid recommendation system combining SVD (collaborative filtering) with a machine learning model
• Generates personalized recommendations for each user
• Displays movie posters using the TMDB API
• Interactive UI built with Streamlit

🔹 Key improvements:
• Optimized recommendation time from ~10 seconds to ~1–2 seconds using candidate filtering
• Implemented rating normalization (0–5 scale) for consistent predictions
• Secured API keys using environment variables (no hardcoding)
• Fixed UI issues and built a clean, responsive layout

🔹 Tech stack: Python | Pandas | Scikit-learn | XGBoost | Scikit-surprise | Streamlit | Docker

🔹 Deployment: Hugging Face Spaces with Docker

This project helped me understand not just model building, but also performance optimization, deployment challenges, and building user-friendly ML applications.

🔗 GitHub: https://lnkd.in/gGeBPE5f
🔗 Live Demo: https://lnkd.in/gr4_nFdz

#MachineLearning #RecommendationSystem #DataScience #Python #Streamlit #MLOps #AI #Projects
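The candidate-filtering optimization works by scoring only a cheap pre-filtered pool instead of the whole catalogue. A toy sketch of the idea (the popularity heuristic, names, and pool size are my illustration, not the project's code):

```python
def recommend(user_id, score_fn, popularity, seen, pool_size=500, k=10):
    """Score only the top-`pool_size` popular unseen movies, not everything."""
    # 1. Cheap filter: most popular movies the user hasn't rated yet.
    candidates = [m for m, _ in sorted(popularity.items(),
                                       key=lambda kv: -kv[1])
                  if m not in seen][:pool_size]
    # 2. Expensive model (e.g. an SVD predict call) runs on candidates only.
    return sorted(candidates, key=lambda m: -score_fn(user_id, m))[:k]

pop = {"A": 90, "B": 80, "C": 70, "D": 60}
scores = {"A": 0.2, "B": 0.9, "C": 0.5, "D": 0.1}
print(recommend(1, lambda u, m: scores[m], pop, seen={"A"},
                pool_size=3, k=2))  # ['B', 'C']
```

Cutting the scored set from the full catalogue to a few hundred candidates is what turns a ~10 s request into a ~1–2 s one.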
🚀 Built an End-to-End MLOps Pipeline using MLflow + Prefect + Flask!

Excited to share my latest project, where I implemented a complete machine learning lifecycle from training to deployment 🔥

💡 Project: Sentiment Analysis MLOps Pipeline

🔹 What I built:
✅ MLflow for experiment tracking, metrics, and model versioning
✅ Hyperparameter tuning with multiple runs (model comparison)
✅ Model Registry for version control (v1 → v9)
✅ Flask app for real-time sentiment prediction
✅ Prefect workflow for automated training pipelines
✅ Dashboard monitoring for workflow execution

⚙️ Tech stack: Python | MLflow | Prefect | Flask | Scikit-learn | NLTK

📊 Key highlights:
• Automated retraining pipeline using Prefect
• Experiment tracking and visualization using MLflow
• Production-style model versioning and deployment
• End-to-end reproducible ML pipeline

🚀 Every time the pipeline runs, a new model is trained, tracked, and registered automatically, just like real-world ML systems!

🔗 GitHub Repo: 👉 https://lnkd.in/gHw-s2PM

📌 This project helped me deeply understand how MLOps works in real production environments. I'd love to hear your feedback and suggestions! 🙌

#MLOps #MachineLearning #MLflow #Prefect #Flask #DataScience #AI #OpenToWork
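MLflow's Model Registry is richer than this, but the v1 → v9 versioning semantics the post describes boil down to "every registration of the same model name bumps an auto-incremented version." A tiny in-memory stand-in (pure Python, explicitly not MLflow code) to make that concrete:

```python
class ModelRegistry:
    """Toy registry: each register() of a name bumps its version by one."""
    def __init__(self):
        self._models = {}  # name -> list of (version, artifact)

    def register(self, name, artifact):
        versions = self._models.setdefault(name, [])
        versions.append((len(versions) + 1, artifact))
        return len(versions)  # newly assigned version number

    def latest(self, name):
        return self._models[name][-1]

reg = ModelRegistry()
for run in range(1, 10):          # nine pipeline runs -> versions v1..v9
    v = reg.register("sentiment-clf", f"model-from-run-{run}")
print(v)  # 9
```

In real MLflow the same effect comes from registering a logged model under a fixed name on every pipeline run; the registry assigns the next version and tracks stage transitions (Staging/Production) on top.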
I stopped hand-rolling CUDA builds. The Hugging Face Kernel Builder changed my workflow.

If you have spent time wrestling with CUDA builds (compiler flags, ABI mismatches, PyTorch version hell), then the Hugging Face Kernel Hub is worth your full attention.

Launched in mid-2025, it is three things in one: a hub for community-shared kernels, a Python library to load them instantly, and a build tool called kernel-builder that compiles your custom C++ kernels for every PyTorch and CUDA version automatically.

What it solves: building a custom CUDA kernel that works across PyTorch 2.4 through 2.7, across CUDA and ROCm, and across multiple glibc versions is not one build. It is a matrix of dozens. kernel-builder manages that entire matrix using Nix for reproducibility, so you write the kernel once and it handles the rest.

Using a kernel in 3 lines:

```python
from kernels import get_kernel

activation = get_kernel("kernels-community/activation", version=1)
activation.gelu_fast(y, x)
```

No compilation. No build flags. It downloads the right binary for your environment automatically.

Building and sharing your own kernel:
1. Write your kernel in C++ with a build.toml config.
2. Build with kernel-builder using Nix or Docker; it compiles for every supported PyTorch version in one pass.
3. Push to the Hub with the kernels upload command.
4. Anyone can use it via get_kernel("your-username/your-kernel").

The kernels-community org already maintains optimized kernels for FlashAttention, RMSNorm, GELU, quantization (INT4/INT8), and Mixture of Experts routing.

I am still learning the build system (Nix has its own learning curve), but the payoff is real. No more broken environments. No more architecture-specific wheels. No more half-day compile sessions.

If you are working with custom CUDA or ROCm code and have not looked at this yet, the official walkthrough on the Hugging Face blog is the right place to start.

#HuggingFace #CUDA #KernelHub #GPU #DeepLearning #PyTorch #MLOps #LearningInPublic
🚀 Introducing ALGO_TRACKER.AI: bridging machine learning with static code analysis for Python.

As software systems scale, quantifying technical debt and maintainability becomes crucial. Traditional rule-based linters often miss the complex interplay of metrics that defines genuine code risk.

To address this, I built ALGO_TRACKER.AI, an intelligent auditor that moves beyond rigid rules. It leverages a trained XGBoost model to analyze static code metrics (LOC, cyclomatic complexity, Halstead metrics) recursively fetched from any public Python repository via the GitHub API.

The goal is simple: give developers and tech leads a predictive, probability-based "Bullish" (clean/maintainable) or "Bearish" (high technical debt) rating for their codebase.

Key features:
🔹 Deep recursive scanning of Python (.py) files using GitHub's /git/trees API
🔹 Static metric extraction (Radon/Lizard) to quantify complexity
🔹 Intelligent risk prediction using an optimized XGBoost classifier

Tech stack (high performance & scalable):
⚛️ Frontend: React, Tailwind CSS (deployed on Netlify)
⚡ Backend: FastAPI (Python), deployed on Railway
🤖 Machine Learning: Scikit-learn & XGBoost

Check out the working prototype here: https://lnkd.in/g2tVERcH

#MachineLearning #SoftwareEngineering #Python #FastAPI #ReactJS #FullStack #ArtificialIntelligence #Innovation
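The project uses Radon/Lizard for metric extraction; to show the kind of signal involved, here is a rough stdlib-only approximation of cyclomatic complexity that counts branch points with `ast` (my simplification — Radon's real counter handles more node types and comprehensions):

```python
import ast

# Node types that add a decision point (an approximation of McCabe's rule).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of branch points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

code = """
def f(x):
    if x > 0:
        for i in range(x):
            if i % 2:
                x += i
    return x
"""
print(cyclomatic_complexity(code))  # 4  (1 base + if + for + if)
```

Metrics like this, plus LOC and Halstead measures, form the feature vector the XGBoost classifier scores.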
Moving beyond Jupyter Notebooks 🚀

I've just wrapped up a project that takes machine learning from a static script to a fully functional, production-ready pipeline: the Student Performance Predictor. 📊

While many ML projects live and die in a .ipynb file, I wanted to build something that mirrors a real-world industry workflow. This project predicts a student's math score by analyzing a mix of demographic data and academic history.

What makes this "production-ready"? Instead of one long script, I built a modular architecture:
1. Data ingestion: automated loading and train-test splitting.
2. Transformation: a robust pipeline using ColumnTransformer to handle scaling and categorical encoding simultaneously.
3. Model factory: systematically trained and tuned multiple algorithms, including XGBoost and CatBoost, to find the highest R² score.
4. Deployment: wrapped the final model in a Flask API to serve real-time predictions.

The tech stack: 🐍 Python | 🐼 Pandas & NumPy | 🤖 Scikit-Learn | 🚀 XGBoost & CatBoost | 🌐 Flask

Building this helped me dive deep into writing clean, maintainable code and understanding how to package an ML model for the real world.

Check out the code on my GitHub! (Link in comments ⬇️)

#MachineLearning #DataScience #Python #SoftwareEngineering #MLOps #WebDevelopment #StudentSuccess
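The ColumnTransformer pattern in step 2 looks roughly like this — column names below are illustrative (the real dataset's schema may differ), but the scale-numerics-and-encode-categoricals-in-one-pipeline structure is the standard scikit-learn idiom:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

num_cols = ["reading_score", "writing_score"]   # illustrative columns
cat_cols = ["gender", "lunch"]

preprocess = ColumnTransformer([
    ("num", StandardScaler(), num_cols),                        # scale numerics
    ("cat", OneHotEncoder(handle_unknown="ignore"), cat_cols),  # encode categories
])

# One fit/predict object: preprocessing and model travel together.
model = Pipeline([("prep", preprocess), ("reg", LinearRegression())])

df = pd.DataFrame({
    "reading_score": [70, 85, 60, 90],
    "writing_score": [72, 88, 58, 95],
    "gender": ["f", "m", "f", "m"],
    "lunch": ["standard", "free", "standard", "standard"],
})
y = [68, 84, 55, 92]
model.fit(df, y)
print(round(model.score(df, y), 2))  # R^2 on the training data
```

Because the transformer is inside the Pipeline, the same object can be pickled once and served behind the Flask API with no separate preprocessing code.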
Bridging the gap between machine learning and production: an uncertainty-aware forecasting system 🌤️

Most weather apps give you a single deterministic number. But in the real world, data is rarely 100% certain. I've spent the last few weeks building a weather forecasting system that doesn't just predict the temperature; it communicates confidence ranges and handles real-time environmental data.

Key engineering highlights:
🔹 Machine learning: an XGBoost regressor for recursive 7-day forecasting, with dynamic uncertainty calibration (95% confidence intervals).
🔹 Live data anchoring: integrated the Open-Meteo API so forecasts are anchored to real-world "Day 0" conditions.
🔹 Modern stack: a decoupled architecture using FastAPI (Python) for the logic and React + Tailwind CSS for a premium, dark-mode UI.
🔹 DevOps & deployment: fully containerized using Docker & Docker Compose for seamless environment management.

Moving from monolithic Python scripts to a modern, containerized full-stack architecture was a massive learning experience in system design and dependency management.

Check out the full source code and documentation in the comments below! 👇

#MachineLearning #ReactJS #Python #FastAPI #Docker #FullStack #BuildingInPublic #CSStudent #DataScience
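The project's calibration method isn't shown, but recursive forecasting with widening intervals has a simple shape: feed each prediction back in as the next input, and grow the error band with the horizon. A toy sketch (the geometric error-growth rule is my assumption, and the lambda stands in for the XGBoost model):

```python
def recursive_forecast(last_temp, step_fn, days=7, base_err=1.0, growth=1.3):
    """Feed each prediction back as input; widen the ~95% band each day."""
    forecasts, temp, err = [], last_temp, base_err
    for day in range(1, days + 1):
        temp = step_fn(temp)                 # model predicts day N from day N-1
        forecasts.append({"day": day,
                          "mean": round(temp, 2),
                          "lo": round(temp - 1.96 * err, 2),
                          "hi": round(temp + 1.96 * err, 2)})
        err *= growth                        # errors compound as we recurse
    return forecasts

# Dummy "model": tomorrow is 0.5 °C warmer than today, anchored at Day 0 = 20 °C.
out = recursive_forecast(20.0, lambda t: t + 0.5, days=3)
print(out[0])  # {'day': 1, 'mean': 20.5, 'lo': 18.54, 'hi': 22.46}
```

A production system would calibrate `base_err` and `growth` from held-out residuals per horizon rather than hard-coding them.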
Building a scalable and production-ready machine learning project structure 🚀

From data ingestion to model deployment, this setup covers:
✔️ Modular code design
✔️ Feature engineering pipelines
✔️ Training & inference pipelines
✔️ API integration (FastAPI/Flask)
✔️ CI/CD with GitHub Actions
✔️ Dockerized deployment

A solid structure is the foundation of every successful ML system.

#MachineLearning #MLOps #DataScience #Python #AI #GitHub #Docker #FastAPI
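A layout along these lines covers each of the items above (directory and file names are illustrative, not prescriptive):

```
ml-project/
├── src/
│   ├── ingestion.py          # load raw data, train/test split
│   ├── features.py           # feature engineering pipeline
│   ├── train.py              # training pipeline
│   └── predict.py            # inference pipeline
├── app/
│   └── api.py                # FastAPI/Flask endpoints
├── tests/
├── .github/
│   └── workflows/
│       └── ci.yml            # CI/CD with GitHub Actions
├── Dockerfile                # containerized deployment
└── requirements.txt
```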