Exploring a clean way to serve ML models using FastAPI and Docker.
🧠 Trained a RandomForestClassifier on the Iris dataset
⚙️ Served predictions via FastAPI
🐋 Containerized with Docker for portability
🔥 Key learnings:
- How to expose ML models as REST APIs
- How to containerize and run in consistent environments
- Why FastAPI + Docker is perfect for lightweight ML services
Code: https://lnkd.in/dYZiss6Z
#python #machinelearning #restapi #dataengineer #mlengineer #genai #fastapi
How to serve ML models with FastAPI and Docker
-
I’ve been exploring TOON (Token-Oriented Object Notation) 🧩
It’s like a lightweight, token-efficient alternative to JSON — super handy when working with LLMs.
To understand it better, I tried a few exercises and shared them here 👇
🔗 https://lnkd.in/dyvymUki
#LLM #AI #Python #MachineLearning #GenerativeAI #OpenSource #DataScience #JSON #TOON #PromptEngineering
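To make the token-efficiency point concrete, here is a toy encoder that illustrates the core idea behind TOON: for a uniform array of objects, declare the keys once and emit rows, instead of repeating every key per object the way JSON does. This is a simplified sketch of the idea, not a spec-complete TOON implementation.

```python
# Toy illustration of the TOON idea: declare keys once, then emit rows.
# Not a spec-complete TOON encoder.
import json


def toy_toon(name, records):
    """Encode a list of flat dicts sharing the same keys as a tabular block."""
    keys = list(records[0])
    header = f"{name}[{len(records)}]{{{','.join(keys)}}}:"
    rows = ["  " + ",".join(str(r[k]) for k in keys) for r in records]
    return "\n".join([header, *rows])


users = [
    {"id": 1, "name": "Alice", "role": "admin"},
    {"id": 2, "name": "Bob", "role": "user"},
]

compact = toy_toon("users", users)
print(compact)
print(len(compact), "chars vs", len(json.dumps(users)), "chars as JSON")
```

The saving grows with the number of records, since JSON repeats `"id"`, `"name"`, and `"role"` in every object while the tabular form states them once.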
-
🚀 Test CI/CD Pipeline with GitHub Actions 🧑💻
I have created a simple test CI/CD pipeline using GitHub Actions for a Python project. It's just a basic test to get familiar with the process, and it could be helpful for beginners who would like to learn how to set up a CI/CD pipeline. 🚀
🔧 What it does:
- Automatically runs tests when code is pushed.
- Deploys if tests pass.
Check out the repo here: https://lnkd.in/eB5cyKbZ
#GitHubActions #CICD #TestProject #Python #Beginners #Automation
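The post doesn't include the workflow file, but a minimal GitHub Actions workflow of the kind described might look like this (the requirements.txt path and the choice of pytest are assumptions):

```yaml
# .github/workflows/ci.yml — minimal test pipeline, runs on every push
name: CI

on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest
```

A deploy step would typically be a second job with `needs: test`, so it only runs when the tests pass.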
-
I built a small tracer from scratch in Python. It automatically creates spans for function calls (like requests.get) using:
𝗠𝗼𝗻𝗸𝗲𝘆 𝗽𝗮𝘁𝗰𝗵𝗶𝗻𝗴 – intercepting existing library functions and wrapping them with tracing logic, without modifying the original code.
𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝘃𝗮𝗿𝗶𝗮𝗯𝗹𝗲𝘀 (𝗰𝗼𝗻𝘁𝗲𝘅𝘁𝘃𝗮𝗿𝘀) – these are like thread-local storage but designed for async and concurrent programs. They let each coroutine or thread safely keep its own tracing context, so even with parallel requests the tracer knows which span belongs to which flow.
𝗪𝗵𝗮𝘁'𝘀 𝗮 𝘀𝗽𝗮𝗻? A span represents one unit of work — for example, a function execution or an API call. Spans can be nested (parent-child), forming a tree that shows the entire flow of a request across different components.
Each span is logged to a file (trace_log.jsonl), capturing parent-child relationships and timing details. Later, this data can be visualized to see how requests flow through a system — similar to what full-fledged tracing tools like OpenTelemetry do.
GitHub: https://lnkd.in/gcR7XCwc
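The two techniques the post names can be sketched in a few lines: a contextvar holds the current span (so nesting works across threads and coroutines), and monkey patching wraps a library function so every call opens a child span. The names and the span fields below are illustrative, not taken from the linked repo.

```python
# Minimal tracer sketch: contextvars for the active span, a decorator for
# wrapping functions, and monkey patching to trace a library transparently.
import contextvars
import functools
import time
import uuid

current_span = contextvars.ContextVar("current_span", default=None)
spans = []  # the real tracer appends each record to trace_log.jsonl instead


class Span:
    def __init__(self, name):
        self.name = name
        self.span_id = uuid.uuid4().hex[:8]
        parent = current_span.get()
        self.parent_id = parent.span_id if parent else None

    def __enter__(self):
        self.start = time.time()
        self._token = current_span.set(self)  # becomes parent for nested spans
        return self

    def __exit__(self, *exc):
        current_span.reset(self._token)
        spans.append({"name": self.name, "id": self.span_id,
                      "parent": self.parent_id,
                      "ms": (time.time() - self.start) * 1000})


def traced(fn):
    """Wrap any callable so each invocation is recorded as a span."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        with Span(fn.__name__):
            return fn(*args, **kwargs)
    return wrapper


# Monkey patching is then one assignment, with no library code modified:
#   requests.get = traced(requests.get)
```

Because the parent is read from the contextvar at call time, nested calls produce the parent-child tree the post describes, and concurrent requests can't steal each other's spans.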
-
Day 6 of My 100 Days of AI & Data Engineering Challenge!
Today was all about sharpening my advanced Python concepts. Here's what I explored:
➡️ Iterators: an iterator is an object that allows you to traverse a sequence one element at a time.
➡️ Generators: a generator is a special type of iterator that yields values lazily, saving memory and boosting performance.
📝 Logging — The Backbone of Maintainable Code
Today I also dived deep into Python's logging module and realized that good logging is not optional… it's essential.
Configuring logging in Python:

import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(levelname)s - %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S',
    filename='app.log',
    filemode='w',
)

logging.debug("This is a debug message")
logging.info("This is an info message")
logging.warning("This is a warning message")
logging.error("This is an error message")
logging.critical("This is critical")

The five logging levels, in increasing severity:
1. DEBUG → detailed diagnostic information.
2. INFO → confirms that things are working as expected.
3. WARNING → something unexpected happened, but nothing is broken.
4. ERROR → something failed, but the program can still continue.
5. CRITICAL → a severe error that might stop the application entirely.
I also practiced configuring logging handlers, formatting logs, and adding logs to a sample script.
🧩 Why This Matters
Whether it's building data pipelines, deploying ML models, or working with distributed systems, these foundational concepts make your code:
✔️ More efficient
✔️ Easier to debug
✔️ More scalable
Please find my code in this GitHub repository: https://lnkd.in/gWqDuFKP
#100DaysOfAI #DataEngineering #Python #AIChallenge
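The iterator/generator point above is easy to see in code: a generator never materializes the whole sequence, so its memory footprint stays constant no matter how long the sequence is. A quick sketch:

```python
# Generators yield lazily: compare the size of a generator object with a
# fully materialized list of the same values.
import sys


def squares(n):
    """Generator: produces squares one at a time, on demand."""
    for i in range(n):
        yield i * i


lazy = squares(100_000)                    # nothing computed yet
eager = [i * i for i in range(100_000)]    # every value held in memory

print(sys.getsizeof(lazy), "bytes vs", sys.getsizeof(eager), "bytes")
print(next(lazy), next(lazy), next(lazy))  # 0 1 4
```

The generator object stays a couple of hundred bytes regardless of `n`, while the list grows linearly — exactly the "saving memory" benefit mentioned above.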
-
Exploratory Data Analysis on the Iris Dataset using Python
Performed an in-depth exploratory data analysis (EDA) on the classic Iris dataset, a foundational dataset in data science and machine learning. Using Python, Pandas, Matplotlib, and Seaborn, I examined patterns and visualized relationships among different flower species.
Key steps and findings:
🔹 Cleaned and prepared the dataset by handling missing values and duplicates.
🔹 Explored feature distributions and relationships using pairplots and heatmaps.
🔹 Identified clear visual separations among Iris species, showing potential for effective classification.
🔹 Analyzed statistical summaries and correlation structures between petal and sepal measurements.
This project demonstrates skills in data cleaning, visualization, and exploratory data analysis, turning raw data into meaningful visual insights.
https://lnkd.in/gwK4PTQm
#DataScience #Python #EDA #MachineLearning #Seaborn #Matplotlib #IrisDataset #DataVisualization #Pandas #Analytics
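A compact, pandas-only sketch of the cleaning and summary steps listed above. The handful of rows here is a made-up iris-like sample standing in for the full dataset; in the project itself, something like seaborn's `pairplot(df, hue="species")` and a heatmap of the correlation matrix would produce the visualizations.

```python
# EDA sketch on a tiny made-up iris-like sample (not the real dataset).
import pandas as pd

df = pd.DataFrame({
    "sepal_length": [5.1, 4.9, 7.0, 6.4, 6.3, 5.8],
    "petal_length": [1.4, 1.4, 4.7, 4.5, 6.0, 5.1],
    "species": ["setosa", "setosa", "versicolor",
                "versicolor", "virginica", "virginica"],
})

# 1. Clean: drop duplicates and rows with missing values.
df = df.drop_duplicates().dropna()

# 2. Statistical summary per species.
print(df.groupby("species").mean())

# 3. Correlation structure between measurements (what the heatmap shows).
print(df[["sepal_length", "petal_length"]].corr())
```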
-
💻 Python Command-Line Quiz Game Project (Project 17)!
I just wrapped up a fun little project: a fully functional, object-oriented quiz game implemented right in the Python command line! This project was a great way to practice Object-Oriented Programming (OOP) principles in Python by breaking the game down into logical, reusable components.
🌟 Key Features
OOP structure: the project is organized into three main components, making the code clean, readable, and easy to maintain:
- Question: a simple class that holds the text and the correct answer for each question.
- QuizBrain: the core logic class that manages the question list, tracks the user's score, checks answers, and moves to the next question.
- main.py: drives the game. It takes raw data, converts it into Question objects, and uses QuizBrain to run the quiz loop.
Dynamic question flow: the QuizBrain class handles the entire game flow, prompting the user for answers and automatically keeping score.
Case-insensitive answer checking: the check_answer method keeps the quiz user-friendly by accepting "True," "true," "tRuE," etc., as correct.
🛠️ Tech Stack & Files
- question_model.py — defines the Question class.
- quiz_brain.py — contains the QuizBrain logic (score tracking, answer checking, next question).
- game_data.py — stores the raw list of questions and answers.
- main.py — initializes the game, builds the question bank, and runs the main quiz loop.
🚀 How It Runs
The game loop continues as long as there are questions left, giving the user their score after every question and finally printing a final score summary.

Output example:
Q.1. Ada Lovelace is often considered the first computer programmer. (true/false)?: true
you got it right
the correct answer is True
your score is: 1/1
Q.2. JavaScript derives from a later version of Java (true/false)?: true
that's wrong
the correct answer is False
your score is: 1/2
…
you've completed the game, your final score is 8/10

GitHub link: https://lnkd.in/eKCuaQTv
This was a great exercise in applying fundamental Python concepts and good coding structure! #Python #OOP #Programming #CodingProject #CommandlineGame #QuizGame
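The Question/QuizBrain structure described above can be sketched in one file (the real project splits it across question_model.py, quiz_brain.py, game_data.py, and main.py; method bodies here are a plausible reconstruction, not the repo's code):

```python
# Condensed sketch of the quiz game's OOP structure.
class Question:
    """Holds one question's text and its correct answer."""

    def __init__(self, text, answer):
        self.text = text
        self.answer = answer


class QuizBrain:
    """Manages the question list, score tracking, and answer checking."""

    def __init__(self, question_list):
        self.question_list = question_list
        self.question_number = 0
        self.score = 0

    def still_has_questions(self):
        return self.question_number < len(self.question_list)

    def next_question(self):
        q = self.question_list[self.question_number]
        self.question_number += 1
        answer = input(f"Q.{self.question_number}. {q.text} (true/false)?: ")
        self.check_answer(answer, q.answer)

    def check_answer(self, user_answer, correct_answer):
        # Case-insensitive comparison accepts "True", "true", "tRuE", ...
        if user_answer.lower() == correct_answer.lower():
            self.score += 1
            print("you got it right")
        else:
            print("that's wrong")
        print(f"the correct answer is {correct_answer}")
        print(f"your score is: {self.score}/{self.question_number}")
```

main.py would then build the Question objects from game_data and loop: `while quiz.still_has_questions(): quiz.next_question()`.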
-
Learning to build AI using LangChain. This project demonstrates real-world usage of prompt chains, memory, and API/tool integrations — a step toward GenAI automation. Exploring LLM orchestration & intelligent workflows more deeply.
🔗 GitHub: https://lnkd.in/dwN3dBKz
#LangChain #GenAI #LLMAgents #AIEngineering #Python
-
Library Management System using Python & Streamlit
I am pleased to share my recent project — a Library Management System developed using Python, Streamlit, and Pandas. This web-based application allows users to efficiently manage a library's collection through the following key functionalities:
📘 View Books: display the complete list of books available in the library.
➕ Add Books: add new book entries with title, author, and availability status.
📖 Issue Books: update the record when a book is issued.
🔁 Return Books: mark an issued book as returned.
💾 Persistent Storage: data is stored and managed in a CSV file for easy access and updates.
🧠 Technologies Used:
- Python – core programming logic and data handling
- Streamlit – web interface and user interaction
- Pandas – data manipulation and storage
- File handling (CSV) – persistent local data storage
This project enhanced my understanding of data management, Streamlit-based UI development, and Python file operations.
🔗 Live Application: https://lnkd.in/gDMXu4-E
💻 GitHub Repository: https://lnkd.in/gdjNRSaa
#Python #Streamlit #DataScience #SoftwareDevelopment #WebApplication #ProjectShowcase #LibraryManagementSystem #GitHub #LearningByBuilding
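The CSV-backed bookkeeping behind a UI like this can be sketched with pandas alone. The column names and CSV path below are assumptions for illustration; in the app, Streamlit widgets (buttons, text inputs) would call helpers like these.

```python
# Sketch of the issue/return logic over a CSV "database".
# Column names and the CSV path are illustrative assumptions.
import pandas as pd

BOOKS_CSV = "books.csv"  # hypothetical path for persistent storage


def load_books(path=BOOKS_CSV):
    """Read the catalogue, or start an empty one if no file exists yet."""
    try:
        return pd.read_csv(path)
    except FileNotFoundError:
        return pd.DataFrame(columns=["title", "author", "available"])


def add_book(df, title, author):
    """Append a new book entry, available by default."""
    row = {"title": title, "author": author, "available": True}
    return pd.concat([df, pd.DataFrame([row])], ignore_index=True)


def set_availability(df, title, available):
    """Issue (available=False) or return (available=True) a book."""
    df.loc[df["title"] == title, "available"] = available
    return df


def save_books(df, path=BOOKS_CSV):
    """Persist the catalogue back to CSV."""
    df.to_csv(path, index=False)
```

Each Streamlit action would then be load → mutate → save, which keeps the CSV the single source of truth between sessions.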
-
Shipping: first-run setup for our Python CLI (safe venv swaps included)
I've been stubborn for a while: the CLI must run inside the repo virtualenv. Simple rule… until the first run makes people suffer. Today I shipped a small wizard to make the right thing happen by default:
- Pick Python 3.11–3.15. It finds what you have, you choose, it creates the repo .venv.
- Warm start. Upgrades pip/setuptools/wheel and installs a tiny baseline so pulsar is useful right away.
- Safe side-venv swap. No more "ah, I deleted the interpreter I'm running under." We do an atomic move + re-exec, so the process jumps over safely.
- State that sticks. Saved in .plsr/setup/state.json, so later runs land in the exact interpreter you picked.
- Bootstrap guardrails. If PULSAR_SKIP_VENV_BOOTSTRAP=1, we don't loop on ourselves during re-exec.
What I argued with myself about:
- Just write docs for pyenv + uv? → No. Docs are nice, consistency is better. Code path > doc path.
- One global toolchain or per-repo .venv? → Per-repo. Less brain tax for contributors and CI, and upgrades are easier to reason about.
- Delete & recreate vs. safe swap? → Safe swap. Deleting the env you're standing in is how you get a surprise unhappy day.
- Hide all the knobs? → Guide by default, with escape hatches: PULSAR_ENV, PULSAR_CONFIG, and a skip flag to avoid recursion.
Under the hood (tiny tour):
- ensure_venv(argv) handles the first run, writes/reads setup state, and re-execs under the chosen Python.
- repo_root_from_here() finds the root even in weird layouts.
- entry.run_main() does the early wiring: logging, platform/env export, secrets, arg parsing, and a friendly header with project@version, service type, and normalized env (development → dev, production → prod).
- select_header_env() decides the badge: pulsar start <env> first, then PULSAR_ENV/ENV, else no badge.
Next up: structured logging that "just works." Pretty locally, JSON in CI, zero config. I'll fold it into setup_logging() so subcommands inherit sane defaults.
Why it matters:
- Faster onboarding: new folks get running in one go.
- Safer upgrades: we can move the baseline interpreter without breaking someone's shell.
- Predictable CI: same behavior locally and in pipelines.
If you've solved first-run ergonomics differently, especially safe swaps or Python auto-discovery, ping me. Happy to compare notes. 🙌
Repo is here: → https://lnkd.in/eWHX6CKB
#OpenSource #DevTools #DevOps #SRE #Python #CICD #Terraform #Kubernetes #CLI #Automation #DeveloperExperience #TelegramBots #AIAgents #DX #plsrdev
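The "safe side-venv swap" idea sketches out roughly like this: build the new venv next to the old one, swap directories with renames (never deleting the interpreter the process is running under), then re-exec into the new interpreter with a guardrail env var. The function names and layout here are illustrative, not the pulsar source.

```python
# Sketch of a safe venv swap: rename-based swap, then guarded re-exec.
# Names mirror the post's description but the code is illustrative.
import os
import sys


def atomic_swap(venv_dir, new_venv_dir, backup_dir):
    """Move new_venv_dir into venv_dir's place using renames only.

    The old env is moved aside (not deleted), so the interpreter this
    process is running under keeps existing until after the re-exec.
    """
    if os.path.exists(venv_dir):
        os.rename(venv_dir, backup_dir)  # old env survives as a backup
    os.rename(new_venv_dir, venv_dir)    # new env takes its place


def reexec_into(venv_dir, argv):
    """Restart this process under the swapped-in interpreter."""
    if os.environ.get("PULSAR_SKIP_VENV_BOOTSTRAP") == "1":
        return  # guardrail: don't loop on ourselves during re-exec
    python = os.path.join(venv_dir, "bin", "python")
    env = dict(os.environ, PULSAR_SKIP_VENV_BOOTSTRAP="1")
    os.execve(python, [python, *argv], env)
```

On POSIX, `os.rename` within one filesystem is atomic, which is what makes the swap safe: at no point does the running process stand on a deleted interpreter.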