Not every system needs an API. But the moment two systems need to talk, you probably do.

APIs sound complicated, but they're really just organized doors into your system. If a mobile app, website, AI model, or another service needs your data, an API is how they get it. One of the fastest ways to build one today is Python + FastAPI.

Here are the general principles I use when building APIs.

1. First decide if you even need an API

A quick rule I use: build an API if
• another system needs your data
• a mobile or web app needs backend logic
• you want multiple services to reuse the same functionality

If your code is only used inside one script or one application, an API may be unnecessary overhead.

2. Scope the data before writing code

You don't want to build giant endpoints that return everything. Instead ask: what is the one piece of information the caller actually needs?

Bad design: /users → returns the entire user database
Better design: /users/{id} → returns one user

APIs should expose small, focused data points. Smaller responses = faster systems and easier maintenance.

3. Create a simple endpoint

Example with FastAPI:

```python
from fastapi import FastAPI

app = FastAPI()

@app.get("/status")
def read_status():
    return {"status": "API is running"}
```

In plain English: someone requests /status, and the system responds "API is running." That's an API.

4. Return useful data

```python
@app.get("/users/{user_id}")
def get_user(user_id: int):
    return {"user_id": user_id, "name": "Isaiah"}
```

Request: /users/1
Response: {"user_id": 1, "name": "Isaiah"}

Now any app, system, or AI model can use that data.

Why FastAPI is great for this

FastAPI gives you:
• high performance
• automatic API documentation
• built-in validation
• clean Python code

You can go from idea to working API in minutes.

Every modern system runs on APIs. Apps. AI systems. Enterprise platforms. Government systems. Understanding how to design them is one of the highest-leverage skills in tech.
Follow me for more content on systems thinking, architecture, and building software that actually ships. #Python #FastAPI #APIDesign #SoftwareArchitecture #SystemsDesign #AIEngineering #TechCareers
APIs for Systems: Simplifying Interactions with Python and FastAPI
More Relevant Posts
Data Science is only 50% math. The other 50% is communication. 📊🗣️

As Data Scientists, we spend weeks cleaning data, engineering features, and fine-tuning models. But if that insight stays trapped in a Jupyter Notebook or a static PDF, it's effectively invisible to the people who need to make decisions.

That's why I'm such a massive advocate for Streamlit. If you are looking to turn your data scripts into shareable, interactive web apps, it is an absolute game-changer. Here is why I recommend it to every developer in my circle:

🚀 Zero Frontend Fatigue: You don't need to learn React, Vue, or complex CSS. If you can write Python, you can build a professional UI.
⏱️ Speed to Value: It lets you move from a concept to a live dashboard in hours, not weeks. This is crucial when you need to show a stakeholder a proof of concept quickly.
💡 Interactive by Design: Streamlit's "magic" commands make it incredibly easy to add sliders, maps, and filters that let users "play" with the data themselves.
🔗 Bridging the Gap: It turns a "Data Scientist" into a "Solution Architect." It allows us to deliver actual tools that drive operations, rather than just reports that sit on a shelf.

Whether you're building a Decision Support System or a simple internal tool, Streamlit is the shortest path between your data and a user's decision.

To my fellow Data Scientists: what's your go-to tool for bringing your models to life? 👇

#Streamlit #Python #DataScience #DataVisualization #MachineLearning #WebDev #TechInnovation #BuildInPublic
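One way to see the "zero frontend fatigue" point concretely is to keep the analysis logic pure and let Streamlit supply the widgets. A minimal sketch, assuming streamlit and pandas are installed; the column names, sample data, and slider bounds are illustrative:

```python
import pandas as pd

def filter_by_threshold(df: pd.DataFrame, column: str, threshold: float) -> pd.DataFrame:
    """Pure helper: keep rows at or above a threshold (testable without Streamlit)."""
    return df[df[column] >= threshold]

def main():
    # Run with: streamlit run app.py
    import streamlit as st
    df = pd.DataFrame({"region": ["North", "South", "East"],
                       "revenue": [120, 80, 200]})
    # One line of Python gives stakeholders an interactive control.
    cutoff = st.slider("Minimum revenue", min_value=0, max_value=250, value=100)
    st.dataframe(filter_by_threshold(df, "revenue", cutoff))
```

Separating the filtering function from the UI keeps the logic unit-testable while the widget layer stays a few lines long.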
Last weekend, I officially merged my first open-source PR into Headroom, a context optimization layer for LLM apps. Here's the backstory:

I connected with Tejas Chopra a couple of weeks ago and learned about how Headroom works under the hood. The core idea is compelling. Most tool outputs in AI agent workflows are massively redundant. A database query might return 1,000 rows when the LLM only needs the summary stats and the one row that errored. Headroom compresses that context by 50-90% using statistical analysis, not summarization. No extra LLM calls, no hallucination risk. It works as a proxy, a Python library, or a drop-in integration for frameworks like LangChain, LiteLLM, and Agno.

As I explored the architecture with Tejas, I noticed an opportunity that matched my experience with LangChain and LangGraph: there was no built-in way to compress tool outputs between cycles in a LangGraph agent. LangGraph builds AI agents as state graphs where messages accumulate across cycles. After a few tool calls, the message history balloons with raw JSON that the model doesn't fully need. This is a known pain point (LangGraph issues #3717, #11405, #2140).

So I built a compress_tool_messages hook, a graph node that sits between the tools node and the agent node. It scans ToolMessages, compresses the large ones with SmartCrusher, and passes the trimmed state back to the LLM. Small outputs and error messages are preserved untouched.

If you're building LLM-powered apps and spending more on tokens than you'd like, or hitting context limits on agent workflows, Headroom is worth checking out. It's open source, runs locally, and you can get started with a single function call or by pointing your existing client at the proxy.

PR #68: https://lnkd.in/gpWfWxZy
Repo: https://lnkd.in/gmx9WYUY
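Framework aside, the core logic of such a hook is easy to picture. Below is an illustrative, dependency-free sketch of the idea, not the actual PR: the naive truncating `compress` function stands in for Headroom's SmartCrusher, which uses statistical analysis instead. Large, successful tool outputs get compressed; small outputs and errors pass through untouched.

```python
def compress(text: str, max_chars: int = 200) -> str:
    # Placeholder for statistical compression: the real SmartCrusher keeps
    # summary stats and anomalous rows rather than naively truncating.
    return text[:max_chars] + f"... [{len(text) - max_chars} chars trimmed]"

def compress_tool_messages(messages: list, threshold: int = 500) -> list:
    """Compress large, successful tool outputs; leave small ones and errors alone."""
    out = []
    for msg in messages:
        if (
            msg.get("role") == "tool"
            and not msg.get("is_error", False)
            and len(msg.get("content", "")) > threshold
        ):
            msg = {**msg, "content": compress(msg["content"])}
        out.append(msg)
    return out
```

In a real LangGraph integration this logic would live in a node between the tools node and the agent node, operating on ToolMessage objects in the graph state.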
With AI tools, spinning up a backend today takes minutes. Generate routes. Add controllers. Wire a database. Deploy an API. Done.

Except… that's not the backend. The real backend work begins after the first request hits production.

When building services with Node.js, Laravel, or Python today, I focus on three things AI rarely considers:

1. Latency paths
AI writes endpoints. It doesn't analyze how many network hops your request just created. A "simple" endpoint can easily become:
Client → API → Auth service → Database → Cache → External API → Response
Every hop adds risk and delay.

2. Failure isolation
AI-generated services often assume everything works. Production systems assume the opposite. Timeouts. Retries. Circuit breakers. Queue fallbacks. Your backend should degrade gracefully, not collapse.

3. Async thinking
Modern backends are event-driven, not request-driven. Queues, workers, and background jobs separate user experience from heavy work.

AI helps scaffold the code. But designing how work flows through a system is still an engineering decision. Fast code is easy. Resilient systems are designed.

#BackendEngineering #NodeJS #Laravel #Python #SystemDesign #AIAgents #ScalableSystems
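As a concrete instance of failure isolation, here is a minimal circuit-breaker sketch in Python; the thresholds and the fallback value are illustrative, and production code would use a battle-tested library instead. After repeated failures the breaker opens and the caller fails fast with a fallback, rather than hammering a dead dependency.

```python
import time

class CircuitBreaker:
    """After max_failures consecutive errors, short-circuit calls for reset_after seconds."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, fallback=None):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback          # circuit open: fail fast, degrade gracefully
            self.opened_at = None        # half-open: allow one trial call
            self.failures = 0
        try:
            result = fn()
            self.failures = 0            # success resets the failure count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback
```

The key property: once the breaker opens, downstream outages cost the caller nothing but a cached/fallback response, instead of a timeout per request.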
Stop debugging your AI agents in the dark.

Building with LangGraph is powerful, but once you add parallel nodes and cycles, the "black box" problem becomes real. Staring at terminal logs to find a state mutation error is a massive productivity killer.

I built the Agent Debugger & Visualizer to solve this. It is a real-time observability layer that lets you step inside the agent's brain and change its mind mid-execution.

Engineering highlights:
• Human-in-the-Loop (HITL): Pause any node, manually edit the state JSON in the UI, and hit "Resume" to steer the agent in a new direction.
• State Diffs (RFC 6902): No more digging through 500-line JSON blobs. I'm tracking deltas so you see exactly what changed after each node execution.
• Real-time Cost Attribution: Every node tracks its own token usage and dollar cost in real time.
• Async Critic Scoring: A background LLM scores agent reasoning without blocking the main execution loop.

Tech stack:
• Frontend: Next.js 14 & React Flow (Vercel)
• Backend: FastAPI & LangGraph (Render)
• Streaming: Redis Streams for low-latency trace delivery.

Check out the demo and code:
🎥 Video Demo: https://lnkd.in/ecvxMDVx
🚀 Live Tool: https://lnkd.in/eE_AcsWD
💻 GitHub: https://lnkd.in/e9pUCHvw

I am currently looking for my next challenge in the AI and Software Engineering space. If you are building agentic workflows and want to talk about observability or distributed systems, let's connect!

#AI #LangGraph #Python #SoftwareEngineering #OpenSource #LLMOps #NextJS #FastAPI
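The RFC 6902 diff idea can be illustrated with a toy diff over a flat state dict. This is my simplification, not the tool's implementation: real JSON Patch also handles nested paths and array operations, which this sketch omits.

```python
def state_diff(before: dict, after: dict) -> list:
    """Emit RFC 6902-style operations describing how a flat state dict changed."""
    ops = []
    for key in before:
        if key not in after:
            ops.append({"op": "remove", "path": f"/{key}"})
        elif after[key] != before[key]:
            ops.append({"op": "replace", "path": f"/{key}", "value": after[key]})
    for key in after:
        if key not in before:
            ops.append({"op": "add", "path": f"/{key}", "value": after[key]})
    return ops
```

Instead of re-rendering a 500-line state blob after every node, the UI only needs to show these few operations, which is what makes per-node deltas readable.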
I built a full-stack AI appointment booking assistant — with ₹0 in API costs. No OpenAI. No monthly subscriptions. Just Python, FastAPI, and Ollama running a 120B-parameter model.

Here's what it does:
→ Chats with patients naturally via a browser UI
→ Recommends dental services based on symptoms
→ Checks real-time slot availability from a live database
→ Books appointments and stores them permanently
→ Prevents double-booking automatically

The full stack:
• Backend → Python 3.11 + FastAPI
• Database → SQLite + SQLAlchemy ORM
• AI Model → Ollama gpt-oss:120b-cloud
• Frontend → HTML + CSS + Vanilla JS
• Server → Uvicorn

I wrote a complete step-by-step tutorial covering:
✅ Ollama installation & model setup
✅ Python virtual environment setup
✅ Full database schema design & seeding
✅ All 6 REST API endpoints
✅ The complete chat UI from scratch
✅ GitHub deployment

No prior AI experience needed. If you know basic Python, you can follow along.

📖 Full article on Medium → https://lnkd.in/dNGnGykT
💻 Complete source code on GitHub → https://lnkd.in/dKp8JkYH

If you're learning Python backend development or want to build AI-powered apps without cloud bills, this is a good starting point.

#Python #FastAPI #AI #Ollama #OpenSource #WebDevelopment #SQLite #LLM #BackendDevelopment #Tutorial #Beginner #AIAssistant #AIPoweredApp
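In a stack like this, the double-booking guarantee typically comes from the database, not the chat layer. Here is a hedged sketch with stdlib sqlite3; the table and column names are my assumptions, not the tutorial's actual schema. A UNIQUE constraint on the slot makes the race impossible even if two chats try to book it simultaneously.

```python
import sqlite3

def make_db() -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")
    conn.execute(
        """CREATE TABLE appointments (
               id INTEGER PRIMARY KEY,
               patient TEXT NOT NULL,
               slot TEXT NOT NULL UNIQUE  -- one booking per slot, enforced by the DB
           )"""
    )
    return conn

def book(conn: sqlite3.Connection, patient: str, slot: str) -> bool:
    """Attempt a booking; return False if the slot is already taken."""
    try:
        with conn:  # transaction: commits on success, rolls back on error
            conn.execute(
                "INSERT INTO appointments (patient, slot) VALUES (?, ?)",
                (patient, slot),
            )
        return True
    except sqlite3.IntegrityError:
        return False
```

Catching IntegrityError at the API layer and returning a friendly "slot taken" message is all the LLM-facing code needs to do; the constraint does the real work.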
🚀 Poly-Glot Code Comment Library v1.1.0 is here — now in your terminal and your editor

🌐 poly-glot.ai
📦 npm install -g poly-glot-ai-cli

Well-documented code is:
📚 Searchable — better RAG retrieval
🧠 Understandable — AI grasps intent
🎯 Discoverable — GEO optimized
🛡️ Safer — preserves critical context

When I launched poly-glot.ai as an AI-powered code documentation tool, the goal was simple: help developers write better comments, faster. No sign-up. No backend. Just paste your code or upload a file directly — get professional JSDoc, Javadoc, PyDoc, Doxygen, and more instantly.

🚀 v1.1.0 takes that same idea and brings it everywhere you actually write code. 👇

⌨️ CLI Tool — now live on npm

npm install -g poly-glot-ai-cli
poly-glot comment utils.py
poly-glot comment --dir ./src --output-dir ./commented
poly-glot explain app.js

Zero dependencies. Comment a single file, an entire directory, or pipe from stdin. Works in CI/CD pipelines, pre-commit hooks, and anywhere you have a terminal. Supports OpenAI and Anthropic — config lives in ~/.config/polyglot/ or via environment variables.
📦 https://lnkd.in/g3Hzt-XH

💻 VS Code Extension — coming soon to the Marketplace

Right-click any file in Explorer → "Poly-Glot: Comment This File." Or press Cmd+Shift+/ on a selection. Or Cmd+Shift+Alt+/ to comment the entire active file. The extension auto-detects your language and picks the right comment style — JSDoc for TypeScript, Javadoc for Java, PyDoc for Python, Doxygen for C++ — no configuration needed.

🎯 What makes this different

Most AI tools make you leave your workflow to use them. Poly-Glot is the opposite — it meets you where you are. Web app, terminal, or editor. Same AI engine. Same language support. Same privacy-first approach — your API key never touches our servers.

The stack behind it:
🪿 Built entirely with Goose (Block's open-source AI agent)
⚡ Vanilla JS on the frontend — zero dependencies, zero build step
🔑 Bring your own OpenAI or Anthropic API key
🌐 Hosted on GitHub Pages + Cloudflare

If you're writing code every day and tired of undocumented functions, give it a try:

#OpenSource #DeveloperTools #AI #JavaScript #TypeScript #Python #VSCode #CLI #npm #Documentation #SideProject #BuildInPublic
When your AI pipeline lives inside your main backend, everything feels fine — until it doesn't. 🧱

I hit that wall on MealPlan AI. NestJS handled both business logic and LLM orchestration. No observability. No crash recovery. No output validation. When a 7-day plan failed on day 5, it restarted from scratch. 🔄 Something had to change.

🏗️ The split
→ NestJS — business logic, auth, payments, BullMQ orchestration
→ Python FastAPI — all LLM orchestration, validation, AI tooling

Why Python? 🐍 LangGraph and LangChain are Python-first. Fighting that with TS wrappers adds complexity without value.

🔗 The 5-node LangGraph StateGraph
1️⃣ prepare_context — dietary restrictions, calorie targets, RAG retrieval (2.2M recipes, 80K USDA foods)
2️⃣ generate_day — structured LLM call with diversity history
3️⃣ validate_day — programmatic checks first, LLM self-validation second
4️⃣ emit_day — streams DAY_COMPLETED SSE → NestJS persists via JSONB atomic append
5️⃣ update_history — enforces diversity across the full plan

PostgreSQL AsyncPostgresSaver checkpointing means a failure on day 5 resumes from day 5. No wasted LLM calls. ✅

🔒 Type safety across languages
Zod (TS) → JSON Schema → Pydantic (Python). CI catches drift on every PR. 🚦

🚀 What this unlocked
📊 Langfuse — every LLM call traced with tokens, cost, latency
💾 Incremental persistence — users see progress, can resume interrupted plans
🔁 Granular regen — single day or single meal, with user feedback in prompts
🧪 329 pytest-asyncio tests — every node and validator covered independently

💡 Takeaways
▸ Separate AI from business logic early — the longer you wait, the harder the extraction
▸ Use LangGraph for multi-step workflows — linear chains break under validation loops
▸ Type contracts across languages — catch bugs at build time, not in prod
▸ Stream and persist incrementally — don't make users wait 30s to see anything

Platform is live at meal-plan.app 🌐 Happy to talk architecture tradeoffs — drop a comment or book a call.
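The checkpoint-and-resume behavior can be sketched without any framework. This in-memory version is purely an illustration of the idea; the real system uses LangGraph's AsyncPostgresSaver. Completed days persist across attempts, so a crash on day 5 resumes at day 5 instead of regenerating days 1-4.

```python
def generate_plan(days: int, generate_day, checkpoint: dict) -> list:
    """Generate a multi-day plan, skipping days already saved in `checkpoint`.

    `checkpoint` maps day number -> result and must survive across attempts
    (here a plain dict; in production, durable storage like Postgres).
    """
    for day in range(1, days + 1):
        if day in checkpoint:
            continue                      # already persisted: no wasted LLM call
        checkpoint[day] = generate_day(day)  # may raise; progress so far is kept
    return [checkpoint[d] for d in range(1, days + 1)]
```

The crucial design choice is persisting after each day rather than after the whole plan: a failure costs at most one day's work, and users can watch progress stream in.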
👇 #AIEngineering #LangGraph #NestJS #PythonBackend #LLM #SoftwareArchitecture
Bridging the gap between Data Science and UI: Auto-Analyst 📊🤖

Most data analysis tools require deep coding knowledge, but what if we could automate the entire pipeline? I've been exploring Auto-Analyst, a sophisticated open-source platform that turns raw data into insights using AI agents.

The tech stack is impressive:
🔹 Frontend: Next.js & Tailwind CSS for a seamless dashboard experience.
🔹 Backend: Python (FastAPI) handling complex analytical logic.
🔹 AI Orchestration: Powered by LangChain for intelligent agent behavior.

What makes it stand out:
✅ Automated Cleaning: AI-driven preprocessing of datasets.
✅ Dynamic Visualizations: Generates charts on the fly based on natural language queries.
✅ Deep Analysis: Goes beyond surface-level stats to provide meaningful business insights.

As a developer, seeing such a clean integration of a Next.js frontend with a heavy-duty Python backend is inspiring.

👇 Repository link is in the first comment!

#FullStack #DataAnalytics #NextJS #Python #AI #LangChain #OpenSource #AutoAnalyst
In his blog post Parametricity, or Comptime is Bonkers (link in comments), Noel Welsh writes:

> So yes - comptime is bonkers. But not entirely in the good way.

Noel is not saying 𝚌𝚘𝚖𝚙𝚝𝚒𝚖𝚎 is useless - it is genuinely powerful for metaprogramming, code generation, and specialising data structures (e.g. bitsets for integers, hash tables for everything else). But for ordinary generic programming it trades away the huge cognitive win of parametricity: the ability to understand what a function does just by looking at its type signature. Every time you see a 𝚌𝚘𝚖𝚙𝚝𝚒𝚖𝚎 generic you have to read its body (or tests, or documentation) to know what it actually does for each type. That is the exact "comprehension tax" that parametricity was designed to eliminate.

In short: Zig's 𝚌𝚘𝚖𝚙𝚝𝚒𝚖𝚎 violates parametricity by turning the type parameter into an inspectable compile-time value, allowing the generic function to branch on and behave differently for every concrete type - something that is provably impossible in any language that respects parametric polymorphism. I think this is mostly what Zig has decided to prioritise as part of its philosophy.

Why Zig (deliberately) gives most of it up

Zig's philosophy prioritizes:
• Simplicity of the language itself - one mechanism (𝚌𝚘𝚖𝚙𝚝𝚒𝚖𝚎) replaces generics + templates + macros + constexpr + conditional compilation + reflection + a bunch of type-system features other languages add piecemeal.
• Zero-cost, predictable control over codegen - you write the specialization logic in normal Zig code (not template syntax, not trait impls, not match types), and the compiler monomorphizes exactly what you asked for.
• C interop & "simple C replacement" mindset - most Zig code lives close to hardware/performance-critical domains (embedded, games, tools, compilers, OS kernels), where people already think in terms of "this runs differently on ARM vs x86" or "ints are 32-bit here but 64-bit there". Hidden branches on type aren't alien - they're just another form of platform/target specialization.
• Compile-time power without a separate macro/DSL language - you get metaprogramming that's debuggable with normal tools (breakpoints in 𝚌𝚘𝚖𝚙𝚝𝚒𝚖𝚎 code in some IDEs, @𝚌𝚘𝚖𝚙𝚒𝚕𝚎𝙻𝚘𝚐, unit tests that run at compile time), and it's all Zig syntax.

In exchange, Zig accepts:
• Every generic function potentially requires reading its body (or good tests/docs) to understand what it actually does.
• More risk of "surprise specialization" bugs when someone instantiates with an unexpected type.
• Weaker local reasoning in generic-heavy code.

In summary, Zig's view is that for systems / embedded / performance tooling / codegen-heavy code, the value of parametricity is lower, and the ergonomic wins of 𝚌𝚘𝚖𝚙𝚝𝚒𝚖𝚎 (no extra syntax, unified debug story, extreme flexibility) are higher. Zig is optimizing for that niche first.
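For readers who don't write Zig, here is a rough Python analogue of the trade-off (my illustration, not from the post): a "generic" function that inspects its argument's concrete type does something different per type, so its signature alone no longer tells you what it does, whereas a parametric function like identity can only ever return its argument.

```python
def describe(value):
    """Non-parametric: behavior depends on the concrete type, like a
    comptime generic that branches on its type parameter."""
    if isinstance(value, int):
        return value * 2          # ints get doubled...
    if isinstance(value, str):
        return value.upper()      # ...strings get shouted
    return value                  # everything else passes through

def identity(value):
    """Parametric: knowing nothing about the type, it can only return its input."""
    return value
```

Reading describe's signature tells you nothing; you must read the body per type. That per-call-site "read the body" cost is the comprehension tax the post describes.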
Claude Code became the #1 AI coding tool this year. But most engineers use it like a fancy autocomplete.

The difference between "meh" and "this thing reads my mind" is one file: CLAUDE.md. I spent a week refining mine for NestJS projects. The output quality difference was night and day.

Here's the template I use on every repo now:

```markdown
# Project Architecture
Backend: NestJS + TypeScript
DB: PostgreSQL + Redis
AI: LangChain + OpenAI SDK

# Code Standards
- Functional controllers, NO classes
- Zod validation on all inputs
- Named exports only
- Error: always use HttpException wrapper

# Test Commands
$ npm run test:unit
$ npm run test:e2e

# AI Rules
- NEVER generate console.log (use NestJS Logger)
- Always add JSDoc on public methods
- Run lint before committing
- Repository pattern for all DB access

# Architecture Patterns
- Event-driven with EventEmitter2
- DTOs validated at controller level
- Services handle business logic only
```

That "AI Rules" section is doing the heavy lifting. Before I added it, Claude kept throwing console.log everywhere and skipping JSDoc. Now it follows my actual coding standards from the first prompt.

The 30-minute investment of writing this file pays back on literally every session.

Two things I learned the hard way: keep it under 200 lines (Claude starts ignoring longer files), and run /init first to let it auto-generate a baseline — then edit from there.

What's in your CLAUDE.md? Drop your best rule below — I'm always looking to steal good patterns.

#ClaudeCode #NestJS #AITools #BackendEngineering #DeveloperProductivity #NodeJS #ExpressJS #TypeScript