Code: https://lnkd.in/dtfG73ir

🚀 Just wrapped up learning FastAPI, and I'm excited to share my journey! Over the past few days, I've been diving deep into backend development with FastAPI, and it's been an incredible learning experience. Here's what I built and learned:

🔧 What I Built:
A complete Patient Management API with full CRUD operations: creating, reading, updating, and deleting patient records. It wasn't just basic operations, though; I also implemented automatic BMI calculation, health-status categorization, and smart data filtering.

📚 Key Concepts I Mastered:
- API Fundamentals: how APIs work as bridges between applications, and why they're crucial in modern software development.
- HTTP Methods: GET retrieves data, POST creates new records, PUT updates them, and DELETE removes them. Each serves a specific purpose in RESTful design.
- Data Validation with Pydantic: a game-changer. Pydantic models automatically validate incoming data, enforce type safety, and even compute fields like BMI from other inputs, which saved me from writing tons of validation code.
- Request/Response Flow: tracing how data transforms from JSON (what the client sends) → Pydantic models (validated objects) → Python dictionaries (easy manipulation) → back to JSON (the response). Understanding these conversions was crucial.
- Smart Filtering & Sorting: query parameters for flexible data retrieval, letting users search by multiple criteria, sort by different fields, and filter by ranges.
- Error Handling: using HTTP status codes properly and returning meaningful error messages makes an API much more user-friendly.
- Path vs. Query Parameters: path parameters identify specific resources; query parameters filter and customize the response.

💡 Biggest Takeaway:
The real understanding was why we convert between data formats. Converting dictionaries to Pydantic models isn't just busywork: it triggers automatic recalculation of computed fields, ensures data consistency, and catches errors before they reach the database.

#FastAPI #BackendDevelopment #Python #API #WebDevelopment #LearningInPublic #SoftwareEngineering
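The computed-field idea above can be sketched in a few lines. This is a hedged illustration assuming Pydantic v2; the field names, thresholds, and `Patient` model here are my own, not the project's actual schema:

```python
# Illustrative sketch only: field names and thresholds are assumptions,
# not the project's real schema. Requires Pydantic v2.
from pydantic import BaseModel, computed_field


class Patient(BaseModel):
    name: str
    height_m: float
    weight_kg: float

    @computed_field  # recalculated every time the model is (re)built
    @property
    def bmi(self) -> float:
        return round(self.weight_kg / self.height_m ** 2, 2)

    @computed_field
    @property
    def verdict(self) -> str:
        # WHO-style BMI categories
        if self.bmi < 18.5:
            return "underweight"
        if self.bmi < 25:
            return "normal"
        if self.bmi < 30:
            return "overweight"
        return "obese"


p = Patient(name="demo", height_m=1.75, weight_kg=70.0)
print(p.bmi, p.verdict)  # 22.86 normal
```

Because `bmi` is a computed field, rebuilding the model from an updated dictionary recalculates it automatically, which is exactly why the dict → model round trip isn't busywork.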
FastAPI Patient Management API with CRUD Operations and Data Validation
More Relevant Posts
-
🚀 Why FastAPI + Pydantic feels like cheating (in a good way!)

Ever wondered how some APIs just work flawlessly without endless debugging? That's the magic of FastAPI powered by Pydantic 💡

When I started exploring FastAPI, one thing blew my mind 👉 you don't handle validation... it just happens automatically. Here's what makes it so powerful 👇

✅ Data validation out of the box
Define your schema as a Pydantic model, and FastAPI ensures only valid data enters your system.

✅ Type safety like never before
Whether it's EmailStr, HttpUrl, int, or a custom type, everything is strictly validated.

✅ Zero manual checks
No more writing if/else validation logic. Just import and use:

```python
from pydantic import BaseModel, EmailStr, HttpUrl

class User(BaseModel):
    name: str
    email: EmailStr
    website: HttpUrl
```

💥 Boom. Invalid email? Rejected.
💥 Malformed URL? Rejected.
💥 Missing field? Rejected.

✅ Custom validation with field validators
Need business logic? Add your own validators easily:

```python
from pydantic import BaseModel, field_validator

class User(BaseModel):
    age: int

    @field_validator("age")
    @classmethod
    def check_age(cls, v):
        if v < 18:
            raise ValueError("Must be 18+")
        return v
```

✅ Clean APIs, smooth data flow
Because your data is already validated, your API logic stays clean and focused.

🔥 The best part? FastAPI + Pydantic turns your API into a self-validating system: no messy code, no unexpected crashes, just smooth, reliable data flow ⚡

If you're building APIs and NOT using this combo... you're probably doing extra work 😄

#FastAPI #Pydantic #Python #BackendDevelopment #APIs #100DaysOfCode #TechLearning
-
📣 4 releases. A few weeks. 7,000 downloads.

SynapseKit v1.4.3 through v1.4.6 are live, and this one goes out to Dhruv Garg, Abhay, Adam Silva, and every engineer who opened an issue or merged a PR. This is yours too. 🙌

Here's everything that landed:

9 vector store backends: swap without rewriting a line.
Weaviate, PGVector, Milvus, and LanceDB join the lineup, all behind the same VectorStore interface: Weaviate v4 native client, PostgreSQL + pgvector with async psycopg3, Milvus with IVF_FLAT and HNSW index support, and embedded LanceDB with no server required.

```shell
pip install synapsekit[weaviate]
pip install synapsekit[pgvector]
pip install synapsekit[milvus]
pip install synapsekit[lancedb]
```

Subgraph error handling: four failure strategies.
subgraph_node() now handles failures gracefully:
🔁 on_error="retry": re-run up to N times before raising
🔀 on_error="fallback": swap in an alternative graph on failure
⏭️ on_error="skip": continue the parent graph silently
💥 on_error="raise": the default, with zero overhead

On any handled failure, the parent state gets a __subgraph_error__ key with the exception type, message, and attempt count. Fully backward-compatible.

2 new loaders: XMLLoader (stdlib only, zero deps) and DiscordLoader (messages, pagination, rich metadata).
2 new providers: SambaNova Cloud for fast open-model inference, and GoogleDriveLoader for pulling Docs, Sheets, PDFs, and folders directly into RAG pipelines.

Where SynapseKit stands today: 27 providers · 9 vector backends · 41 tools · 18 loaders · 1,450 tests · 2 hard dependencies ⚡

pip install synapsekit[all]

#Python #LLM #RAG #AI #OpenSource #MachineLearning #Agents #SynapseKit
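The four failure strategies described above follow a common pattern. This is a generic plain-Python sketch of that pattern, not SynapseKit's actual implementation; the function name and signature are my own:

```python
# Generic sketch of the retry / fallback / skip / raise pattern described in
# the release notes. NOT SynapseKit's real code; names are illustrative.
def run_subgraph(fn, state, on_error="raise", retries=3, fallback=None):
    attempts = retries if on_error == "retry" else 1
    last_exc = None
    for _ in range(attempts):
        try:
            return fn(state)
        except Exception as exc:
            last_exc = exc
    if on_error in ("skip", "fallback"):
        # record error metadata on the parent state, as the notes describe
        state["__subgraph_error__"] = {
            "type": type(last_exc).__name__,
            "message": str(last_exc),
            "attempts": attempts,
        }
        if on_error == "fallback" and fallback is not None:
            return fallback(state)  # swap in the alternative graph
        return state  # skip: continue the parent graph silently
    raise last_exc  # "raise" default, or "retry" after N failed attempts
```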
-
They say 90% of software engineering is debugging, and today I definitely felt that! 😂

After a marathon session of untangling server conflicts, navigating API versioning updates, and restructuring database schemas on the fly, I'm thrilled to finally share my latest project: NutriScan-AI. 🚀🍏

I wanted to build something that bridges the gap between raw data and practical, everyday AI. NutriScan-AI is a full-stack web application that lets users snap a photo of any meal and instantly receive a complete nutritional breakdown and ingredient analysis.

🧠 How it works under the hood:
- Frontend: a clean, dark-mode UI built with HTML/CSS that handles user image uploads.
- Backend: a robust Python (Flask) server handling API routing and logic.
- AI integration: Google's Gemini 2.5 Flash vision API processes the uploaded image and accurately identifies complex food items.
- Database: a PostgreSQL relational database that securely logs user scans and performs fuzzy-search lookups for detailed macronutrients (calories, protein, carbs, fat).

git: https://lnkd.in/gW7VqJrM

Always learning, always building. On to the next challenge!

#ArtificialIntelligence #Python #Flask #PostgreSQL #FullStackDevelopment #GeminiAI #SoftwareEngineering #TechJourney #StudentDeveloper
-
Transforming Bedtime Stories with GenAI: Building "Taploo Nest" 📖✨

I'm excited to share a project I've been working on: Taploo Nest, a full-stack storytelling engine designed to create personalized adventures for children. What started as a simple script evolved into a complete application that combines creative AI with structured data management.

The Tech Behind the Magic:
🧠 LLM integration: powered by Gemini 3.1 Flash to generate highly creative, themed stories (from space mysteries to funny rhymes).
🐍 Frontend: built with Streamlit, featuring a custom "Parchment" UI and responsive story cards.
🗄️ Database: MySQL implements a permanent "Magic Library," allowing stories to be archived and revisited.
🔐 Security & logic: an Admin/Reader toggle. Only authorized users can "wave the magic wand" or manage the library, while everyone else enjoys the collection in Reader Mode.

This project taught me a lot about managing API state, handling database CRUD operations, and the importance of UI/UX in AI-driven tools.

Check out the demo below to see it in action! 👇

#Python #GenerativeAI #Streamlit #MySQL #BuildInPublic #AIStorytelling #SoftwareEngineering Streamlit and @GoogleDevs
-
Every chunking library handles text pretty well. Tables, though? They butcher them. HTML tables get split mid-row, columns lose context, and downstream models hallucinate because the structure is gone. We've heard this from teams at every scale.

Today we're shipping v1.6 of our chunking library. Tables are now a first-class citizen. Here's what's in the release:

🔹 HTML Table Chunking
New TableChef + TableChunker components that extract, normalize, and chunk tabular data while preserving row and column semantics. No more blind token-boundary splits through your structured data.

🔹 Self-Hostable Chunking API
Run chonkie serve and you get a full FastAPI-powered chunking server. One command. No auth setup, no billing. Hit it from any language, any service in your infra. Ideal for polyglot stacks or teams that want to centralize their chunking infrastructure.

🔹 Native Async Support
Every chunking method now has an async equivalent. No more blocking your event loop. Long-requested, finally here.

Finally, a note of thanks: our Python library now gets 120K+ downloads every week. Grateful to everyone who has contributed and continues to support our growth. 🦛 ❤️

#opensource #rag #llm #chunking #ai
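The "preserve row and column semantics" idea can be illustrated with a stdlib-only toy: parse the table into rows, then repeat the header row in every chunk so columns never lose their labels. This is a hedged sketch of the general technique, not Chonkie's TableChef/TableChunker implementation:

```python
# Toy row-aware HTML table chunker (NOT Chonkie's actual code): each chunk
# carries the header row, so column context survives the split.
from html.parser import HTMLParser


class TableRows(HTMLParser):
    """Collect <tr>/<td>/<th> contents as a list of row lists."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._cell = [], None, None

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._cell = []

    def handle_endtag(self, tag):
        if tag in ("td", "th") and self._row is not None:
            self._row.append("".join(self._cell).strip())
            self._cell = None
        elif tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None

    def handle_data(self, data):
        if self._cell is not None:
            self._cell.append(data)


def chunk_table(html: str, rows_per_chunk: int = 2):
    parser = TableRows()
    parser.feed(html)
    header, body = parser.rows[0], parser.rows[1:]
    # prepend the header to every chunk instead of splitting blindly
    return [[header] + body[i:i + rows_per_chunk]
            for i in range(0, len(body), rows_per_chunk)]
```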
-
The biggest update yet to our OSS library. Every AI builder knows the pain of working with tables. v1.6 solves it. Check it out!
-
I developed a RAG-Powered Developer Documentation Assistant that transforms scattered engineering documentation into a fast, queryable knowledge system.

Dev-Doc-AI: https://lnkd.in/guvs_PpB

The Problem: internal documentation is often fragmented, hard to search, and time-consuming to navigate.

The Solution: an intelligent assistant that retrieves the most relevant context and generates accurate, grounded answers in seconds.

How It Works:
- Ingests and indexes technical documentation into a persistent retrieval system
- Uses semantic search to fetch the most relevant context per query
- Generates context-aware responses using LLMs, reducing hallucinations
- Accessible via API, web interface, and Slack automation

Technical Highlights:
- FastAPI backend orchestrating the retrieval-and-generation pipeline
- LlamaIndex-based RAG architecture with local index persistence
- Context-grounded response generation for higher reliability
- Integrated with n8n workflows for seamless team usage

Tech Stack: Python • FastAPI • LlamaIndex • OpenAI API • HuggingFace Embeddings • n8n • Docker • Render • HTML/CSS/JavaScript

This project enhanced my understanding of:
- Retrieval-Augmented Generation (RAG)
- LLM application design
- Backend system orchestration
- Building production-ready AI tools

#RAG #LLM #OpenAI #FastAPI #LlamaIndex #AIProjects #DeveloperTools
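The retrieve-then-generate shape described above can be shown with a deliberately tiny toy. The real system uses LlamaIndex with embeddings; this sketch substitutes word overlap for semantic search purely to make the pipeline's structure visible, and every name in it is an illustration, not Dev-Doc-AI's code:

```python
# Toy retrieve-then-generate pipeline. Word overlap stands in for the real
# embedding-based semantic search; all names here are illustrative.
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k docs sharing the most words with the query."""
    q = set(query.lower().split())
    return sorted(docs,
                  key=lambda d: len(q & set(d.lower().split())),
                  reverse=True)[:k]

def answer(query: str, docs: list[str], llm) -> str:
    """Ground the LLM in retrieved context to reduce hallucinations."""
    context = "\n".join(retrieve(query, docs))
    return llm(f"Answer using only this context:\n{context}\n\nQ: {query}")
```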
-
Most RAG systems answer "what does this code do?" TraceRoot answers "why does this code exist?" And that needs a completely different approach.

My friend Manikumar Garimella and I built TraceRoot, a vectorless RAG codebase archaeologist that lets you ask natural-language questions about your codebase's decision history. Manikumar handled the reasoning and retrieval layer and wired up the Streamlit interface. Great experience building this together.

The problem: you're staring at a function that bypasses rate limiting. You can read what it does. You cannot read why it was written that way, who objected in the PR review, or what customer incident forced the decision.

Existing RAG tools fail here. They embed code into vectors and retrieve by similarity, and similarity is the wrong signal for causality. "This code looks similar to that code" is not the same as "this commit was caused by that issue."

So we built something different. TraceRoot ingests your GitHub history and builds a typed provenance graph. Every edge has a type: modified, authored-by, motivated-by, reviewed-by, fixed-by. When you ask a question, the system walks that graph backwards through time, following causal chains rather than similarity scores.

No embedding model. No vector database. BM25 + graph traversal + Groq Llama 3.3 for reasoning. The retrieval is entirely vectorless. Vector RAG cannot tell you that PR #88 was opened specifically to fix issue #421; it can only tell you they talk about similar things. Graph traversal follows the actual edge.

The repo is open source. The Groq API is free. Setup takes 10 minutes.

GitHub: https://lnkd.in/gK-nD3Cx

If you have tried something similar or have thoughts on the approach, I would like to hear them.

#RAG #LLM #Python #OpenSource #Groq #Streamlit #MachineLearning #SoftwareEngineering
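The typed-edge traversal idea can be sketched in plain Python: store edges as (source, type, target) triples and walk only causal edge types backwards from a node. This is a generic illustration of the technique, not TraceRoot's implementation; the edge data and names are hypothetical:

```python
# Generic sketch of typed provenance-graph traversal (not TraceRoot's code).
# Edges and names are hypothetical; only causal edge types are followed.
from collections import deque

# (source, edge_type, target) triples, e.g. a commit motivated by a PR
EDGES = [
    ("commit:42", "motivated-by", "pr:88"),
    ("pr:88", "fixed-by", "issue:421"),
    ("pr:88", "reviewed-by", "user:alice"),  # non-causal, not traversed
]

def trace_back(node, causal_types=("motivated-by", "fixed-by")):
    """Breadth-first walk following only causal edge types from `node`."""
    chain, seen, queue = [], {node}, deque([node])
    while queue:
        current = queue.popleft()
        for src, etype, dst in EDGES:
            if src == current and etype in causal_types and dst not in seen:
                chain.append((current, etype, dst))
                seen.add(dst)
                queue.append(dst)
    return chain
```

Unlike a similarity score, the edge `("pr:88", "fixed-by", "issue:421")` is an explicit causal fact, so the answer to "why does this commit exist?" falls out of the walk itself.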
-
3 months ago I started building something I had no idea how to finish. Today, it's complete: a Multi-Agent Financial Research System, built from scratch on my local Ubuntu machine, between work, late nights, and a lot of chai.

Here's what it actually does:
🔍 Agent 1 scrapes real-time news via SerpAPI
📄 Agent 2 reads 100-page SEC filings using RAG + ChromaDB
📊 Agent 3 scores market sentiment from -1.0 to +1.0
📝 Agent 4 writes the final investment report

But before Agent 4 runs, the system PAUSES. A human clicks Approve or Reject on a React dashboard. Only then does the AI generate the final output. That's human-in-the-loop: the feature that took me the longest to get right, and the one that makes this actually trustworthy, not just impressive in a demo.

The thing I'm most proud of, though? It doesn't work like a straight line. Most pipelines go A → B → C and stop. This one loops back. If an agent finds a gap, it goes back to re-research. If the human rejects the report, the whole thing reruns with fresh context. That's a cyclic graph, and honestly, that's just how real research works. Nobody gets it right in one pass.

I didn't either, by the way. There were weeks when nothing worked. Nights where I had no idea why the state wasn't persisting. A Saturday completely lost to a ChromaDB bug. Moments where I genuinely thought about scrapping the whole thing. But I didn't. And now it works.

⚙️ The stack:
→ LangGraph: cyclic state machine + HITL checkpoints
→ LangChain: LLM + tool integration
→ ChromaDB: vector store for SEC filings
→ FastAPI: WebSocket streaming backend
→ React.js + Tailwind: live execution dashboard
→ PostgreSQL: full audit trail
→ Ollama (Llama 3): runs 100% locally, zero API cost

#BuildInPublic #AI #LangGraph #RAG #FinTech #Python #MultiAgent #LLM #MachineLearning
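The pause-and-approve loop described above can be reduced to a small plain-Python sketch. The real system uses LangGraph checkpoints; this is only an illustration of the control flow, and every name in it is an assumption:

```python
# Plain-Python sketch of a cyclic human-in-the-loop pipeline (illustrative
# only; the real system uses LangGraph checkpoints for the pause).
def run_pipeline(research, write_report, get_human_decision, max_rounds=3):
    state = {"round": 0, "context": None, "report": None}
    while state["round"] < max_rounds:
        state["round"] += 1
        state["context"] = research(state)         # agents 1-3 gather context
        decision = get_human_decision(state)       # system PAUSES here
        if decision == "approve":
            state["report"] = write_report(state)  # agent 4 runs only now
            return state
        # rejected: loop back and re-research with fresh context
    return state
```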
-
Stop spending days reading files just to understand a new codebase. I built the fix in 48 hours.

Meet CodeMind 🧠

Ask "How does the auth flow work?" and it:
→ Decomposes your query into sub-tasks
→ Semantically searches your entire repository
→ Returns a cited answer traced to the exact file

Zero hallucinations. Zero cloud. Zero API costs.

━━━━━━━━━━━━━━━━

The part that surprised me most wasn't the AI. It was the chunking logic. Character-based chunking destroys source-code context entirely. Line-based chunks with overlap are the only way to preserve relationships between functions across file boundaries. If your chunks don't overlap, your RAG pipeline is blind.

━━━━━━━━━━━━━━━━

The stack I chose deliberately:
🗄️ Endee: vector DB running locally in Docker. I expected it to be slow. I was wrong: millisecond latency at 1B+ vectors.
🤖 Ollama: local LLM inference. Your code never leaves your machine.
🧠 all-MiniLM-L6-v2: embeddings via sentence-transformers.
⚡ FastAPI + Next.js: backend + frontend.

━━━━━━━━━━━━━━━━

What's coming next:
→ GitHub repo indexing (paste a URL, query instantly)
→ Multi-file diff analysis
→ 20+ language support

━━━━━━━━━━━━━━━━

Built with caffeine and one very persistent AttributeError: module 'jwt' has no attribute 'encode' 😅

Repo → https://lnkd.in/gkXU3nVf

Would you use this in your local dev workflow? Comment below 👇 I read every reply.

#buildinpublic #RAG #AI #opensource #LLM #VectorDatabase #Python #NextJS #FastAPI #CodeMind
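The overlapping line-based chunking described above is simple to sketch: slide a window over the file's lines so the tail of each chunk reappears at the head of the next. A minimal illustration; the default sizes here are assumptions, not CodeMind's actual parameters:

```python
# Minimal line-based chunker with overlap (sizes are illustrative defaults,
# not CodeMind's real parameters). Overlapping tails preserve context that
# straddles chunk boundaries.
def chunk_lines(text: str, chunk_size: int = 40, overlap: int = 10) -> list[str]:
    assert 0 <= overlap < chunk_size, "overlap must be smaller than chunk_size"
    lines = text.splitlines()
    step = chunk_size - overlap  # advance by less than a full chunk
    chunks = []
    for start in range(0, max(len(lines), 1), step):
        chunks.append("\n".join(lines[start:start + chunk_size]))
        if start + chunk_size >= len(lines):
            break  # this chunk already reaches the end of the file
    return chunks
```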