Day 7 - I built a live Weather API in Python. It runs for free. Forever.

🚀 TechFromZero Series - FastAPIFromZero

This isn't a Hello World. It's a real async REST API with a cloud database:

📐 OpenWeather → FastAPI (Render) → motor → MongoDB Atlas (free)

🌐 Try it live: https://lnkd.in/dfrggisJ

🔗 The full code (with step-by-step commits you can follow): https://lnkd.in/d8S2AmcP

🧱 What I built (step by step):
1️⃣ Project scaffold — venv, requirements.txt, .env.example
2️⃣ FastAPI app — CORS, lifespan hook, health check endpoint
3️⃣ Async MongoDB connection — motor client, shared connection pool
4️⃣ Pydantic schemas — auto-validate input, auto-generate Swagger docs
5️⃣ OpenWeatherMap client — async httpx fetch, clean error handling
6️⃣ Weather endpoints — POST /weather, GET /weather, filter by city, delete, aggregation stats
7️⃣ Render deploy config — render.yaml, $PORT, env vars as code
8️⃣ README — full beginner guide with architecture diagram

💡 Every file has detailed comments explaining WHY, not just what. Written for any beginner who wants to learn FastAPI by reading real code — with full clarity on each step.

👉 If you're a beginner learning FastAPI, clone it and read the commits one by one. Each commit = one concept. Each file = one lesson. Built from scratch, so nothing is hidden.

🔥 This is Day 7 of a 50-day series. A new technology every day. Follow along!

🌐 See all days: https://lnkd.in/dhDN6Z3F

#TechFromZero #Day7 #FastAPI #Python #MongoDB #MongoDBAtlas #Render #AsyncPython #LearnByDoing #OpenSource #BeginnerGuide #100DaysOfCode #CodingFromScratch
Building a Live Weather API with Python and FastAPI
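For a taste of the core pattern the post describes (a lifespan hook sharing one motor client, plus one read endpoint), here is a minimal sketch. The connection string, database, and collection names are illustrative, not taken from the actual repo:

```python
# Minimal sketch: FastAPI lifespan opens a shared motor (async MongoDB)
# client; one endpoint reads from it. Names here are illustrative.
from contextlib import asynccontextmanager

from fastapi import FastAPI
from motor.motor_asyncio import AsyncIOMotorClient


@asynccontextmanager
async def lifespan(app: FastAPI):
    # One client per process; motor manages the connection pool internally.
    app.state.mongo = AsyncIOMotorClient("mongodb://localhost:27017")
    yield
    app.state.mongo.close()


app = FastAPI(lifespan=lifespan)


@app.get("/weather")
async def list_weather(city: str | None = None):
    query = {"city": city} if city else {}
    collection = app.state.mongo["weather_db"]["reports"]
    # Exclude Mongo's ObjectId so documents are JSON-serializable as-is.
    return await collection.find(query, {"_id": 0}).to_list(length=100)
```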
More Relevant Posts
-
🚀 Just finished a major milestone on EstateFlow AI — my automated real estate analysis platform built with Python, Node.js, and PostgreSQL.

One of the biggest lessons? Database constraints will humble you fast. I spent hours tracking down a foreign-key violation that produced zero runtime errors but was silently writing corrupt data into my tables. That kind of silent failure is far more dangerous than a crash — and finding it taught me more about data integrity than any textbook chapter.

The platform scrapes live property listings, cleans the data through a Python pipeline, and runs it through custom evaluation logic I built to assess investment viability by price, location, and size. Still a work in progress, but the architecture is solid.

GitHub link in my featured section if you want to take a look. Always building. 🛠️

#Python #NodeJS #PostgreSQL #BackendDevelopment #SoftwareEngineering #Caribbean #NCU #StudentDeveloper
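The general lesson here is that declaring the constraint in the schema turns silent corruption into a loud write-time error. A hypothetical SQLAlchemy sketch (table and column names are invented, not from the EstateFlow schema):

```python
# Hypothetical sketch: declare the relationship as a real foreign key so
# PostgreSQL rejects orphaned rows instead of silently storing them.
from sqlalchemy import Column, ForeignKey, Integer, Numeric, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Region(Base):
    __tablename__ = "regions"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)


class Listing(Base):
    __tablename__ = "listings"
    id = Column(Integer, primary_key=True)
    price = Column(Numeric, nullable=False)
    # With this constraint in place, inserting a listing that references a
    # non-existent region raises IntegrityError at write time: a loud
    # failure instead of silent corruption.
    region_id = Column(Integer, ForeignKey("regions.id"), nullable=False)
```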
-
PydanTable 1.17.0 has been released, and MongoDB is now officially part of the story.

This release introduces an optional MongoDB execution engine, allowing work to remain on the MongoDB database side when supported. This means you can materialize data only when it is actually needed in the application, rather than pulling full result sets into Python first.

Additionally, this version adds integration with Beanie, a popular Python ODM (object-document mapper) for MongoDB built on Pydantic. If your application already models MongoDB documents with Beanie, PydanTable can seamlessly integrate with that layer, ensuring your document models and typed, table-shaped workflow remain aligned without the need for a parallel schema.

For more details, check out the documentation and release notes:
- PyPI: https://lnkd.in/ez4NZMjT
- Documentation: https://lnkd.in/eV4RTqZQ
- Repository: https://lnkd.in/eVpjrcRX

#Python #Pydantic #MongoDB #DataEngineering #OpenSource
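For readers unfamiliar with Beanie, here is what the document-model layer being integrated looks like. This sketch uses Beanie's own API (the PydanTable hookup itself is described in the linked release notes); the model and database names are illustrative, and it assumes a MongoDB instance is reachable:

```python
# Minimal Beanie setup: a Document is a Pydantic model persisted as a
# MongoDB document. Names are illustrative.
import asyncio

from beanie import Document, init_beanie
from motor.motor_asyncio import AsyncIOMotorClient


class Product(Document):
    name: str
    price: float

    class Settings:
        name = "products"  # MongoDB collection name


async def main():
    client = AsyncIOMotorClient("mongodb://localhost:27017")
    await init_beanie(database=client.shop, document_models=[Product])
    await Product(name="widget", price=9.99).insert()
    # Typed query expression, validated end to end by Pydantic.
    cheap = await Product.find(Product.price < 10).to_list()
    print(cheap)


asyncio.run(main())
```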
-
Python for AI Systems: Why Python + FastAPI is my default for AI backend services in 2025.

I've built backends in Java (Spring Boot), PHP (Laravel), Node.js, and Python. Here's when I reach for each:

For AI/LLM workloads → Python + FastAPI. Always. Here's why:

• FastAPI is genuinely fast: async by default, built on Starlette. Handles concurrent LLM calls without thread management headaches.
• The AI ecosystem lives in Python: LangChain, LangGraph, OpenAI SDK, HuggingFace — all Python-first. No wrappers, no translation layers.
• Pydantic = free input validation: define your schema once, get validation + docs + serialization. Critical when LLM outputs need strict structure.
• Background tasks built in: streaming LLM responses + async background processing without a separate worker framework.
• Easy integration with data tools: Pandas, Airflow, SQLAlchemy — your AI service can talk to your data layer without impedance mismatch.

Java Spring Boot is still my go-to for transactional enterprise systems. But for AI services? FastAPI + Python + Docker on AWS ECS = fastest path to production-ready AI endpoints.

What's your preferred stack for AI backend services?

#Python #FastAPI #LLM #AIEngineering #BackendDevelopment #AWS
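A compact sketch of the validation-plus-background-tasks combination the post highlights. The summarize() function is a stand-in for any async LLM call, and all names are illustrative:

```python
# Sketch: typed request schema, async endpoint, built-in background task.
# No separate worker framework needed for the post-response work.
import asyncio

from fastapi import BackgroundTasks, FastAPI
from pydantic import BaseModel, Field

app = FastAPI()


class SummarizeRequest(BaseModel):
    text: str = Field(min_length=1)
    max_words: int = Field(default=50, gt=0, le=500)


async def summarize(text: str, max_words: int) -> str:
    await asyncio.sleep(0.1)  # placeholder for an async LLM call
    return " ".join(text.split()[:max_words])


def log_usage(n_chars: int) -> None:
    print(f"processed {n_chars} characters")  # e.g. metrics or audit log


@app.post("/summarize")
async def summarize_endpoint(req: SummarizeRequest, tasks: BackgroundTasks):
    # Pydantic has already validated req; invalid payloads got a 422.
    summary = await summarize(req.text, req.max_words)
    tasks.add_task(log_usage, len(req.text))  # runs after the response is sent
    return {"summary": summary}
```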
-
Every Python dev on AWS runs into this at some point: three Lambda functions, all sharing the same Pydantic models. You copy-paste once. Then twice. Then you spend a Tuesday figuring out why one function is missing a field.

Google it, and every article from 2019 gives the same answer: "Just use Lambda Layers!" But Lambda Layers are not a package manager. Yan Cui said it well back in 2021: "Lambda Layer is a poor substitute for existing package managers." No IDE autocomplete, no proper versioning, and if someone deletes a layer version, your deploys break until you fix the ARN.

There is a cleaner way: uv workspaces with local packages, bundled into self-contained ZIPs by CDK. Each function gets only what it needs. Normal Python packaging, no AWS-specific workarounds.

Full blog post with the CDK construct (demo GitHub repository link included): 👉 https://lnkd.in/d2A8AqAv

PS: One thing I want to mention regarding my blog posts in general: I don't want to write another "Hello World with AWS Lambda" tutorial. There are enough of those. What I find interesting are the edge cases — things that are barely documented but matter a lot once you run real workloads in production.

#AWS #Python #Serverless #CDK
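For readers who haven't seen uv workspaces, this is the general mechanism (package and folder names here are hypothetical, not taken from the linked post):

```toml
# Root pyproject.toml: one uv workspace; shared code lives as local packages.
[tool.uv.workspace]
members = ["packages/*", "functions/*"]
```

```toml
# functions/ingest/pyproject.toml: this Lambda depends on the shared models
# like any normal package, with full IDE autocomplete and real versioning.
[project]
name = "ingest-fn"
version = "0.1.0"
dependencies = ["shared-models"]

[tool.uv.sources]
shared-models = { workspace = true }
```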
-
Most codebases are still navigated with grep… and a lot of guesswork.

When you jump into a new repo — at work or in open source — you're often stuck:
→ Guess a keyword
→ Open a file
→ Repeat… across 50 tabs

It's slow, frustrating, and breaks your flow. I got tired of that. So I built something to change it.

Meet Ziv — a local semantic search tool for Python codebases. Instead of guessing keywords, you just ask:
→ ziv search "where is request context handled?"
→ ziv search "session management"

Ziv understands intent and returns the most relevant files — ranked by meaning, not exact matches. No noise. No digging. Just answers.

Under the hood:
• FAISS for fast similarity search
• all-MiniLM-L6-v2 (ONNX) for embeddings
• Optimized to run efficiently on CPU

And everything stays local:
• No cloud
• No API keys
• No code leaving your machine

It's open source. It's free. And it's now in public beta. If you work with Python or contribute to open source, I'd love for you to try it. If it breaks, open an issue. If it helps, a ⭐ on GitHub would mean a lot right now.

Links are in the comments 👇

#Python #OpenSource #DevTools #BuildInPublic #MachineLearning
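A toy version of the core idea, for the curious: embed snippets, index with FAISS, and rank by meaning rather than keyword match. Ziv itself uses an ONNX build of the same model; this sketch uses sentence-transformers for brevity, and the snippets are invented:

```python
# Semantic search in ~20 lines: embed, index, query by meaning.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

snippets = [
    "def get_request_context(): return _ctx_stack.top",
    "def save_session(session, response): ...",
    "def parse_cli_args(argv): ...",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(snippets, normalize_embeddings=True)

# With normalized vectors, inner product equals cosine similarity.
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(np.asarray(embeddings, dtype="float32"))

query = model.encode(["where is request context handled?"],
                     normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), 2)
for rank, i in enumerate(ids[0]):
    print(f"{scores[0][rank]:.2f}  {snippets[i]}")
```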
-
I've been building a vector database from scratch in Python. Not a wrapper. Not a tutorial follow-along. The actual thing: storage, indexing, and durability layers, all written by hand. It's called VectKV.

Here's what's in it so far:
• HNSW indexing (the same algorithm Chroma and Qdrant use under the hood)
• Write-ahead log for crash recovery: every mutation is persisted before it hits memory (still needs some refurbishing, I can't lie 😂)
• Page-based namespacing so each collection has its own independent index
• Two-level async locking: global for structural operations, per-page for vector writes
• FastAPI HTTP layer with Pydantic validation throughout

I started this project because I wanted to actually understand how vector databases work, not just use them. The WAL pattern I landed on mirrors what Kafka, RocksDB, and etcd do. I derived it from first principles before I realized that. That was a good day 😄.

This is the first post in a series. I'll be building in public as VectKV grows: snapshots, WAL compaction, delete support, and authentication. If you're curious about database internals, vector search, or just enjoy watching something get built from zero, please follow along.

GitHub: https://lnkd.in/eHypbVjv

#Python #VectorDatabase #BuildingInPublic #DatabaseInternals #MachineLearning #OpenSource
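For anyone new to the WAL pattern mentioned above, here is a minimal illustration of the idea: append and fsync the mutation to disk first, and only then apply it in memory. VectKV's actual implementation is more involved; names here are illustrative:

```python
# Tiny write-ahead log: durable on disk before visible in memory.
import json
import os


class TinyWAL:
    def __init__(self, path: str):
        self.path = path
        self.store: dict[str, list[float]] = {}
        self._replay()

    def _replay(self) -> None:
        # Crash recovery: rebuild in-memory state from the log on startup.
        if not os.path.exists(self.path):
            return
        with open(self.path) as f:
            for line in f:
                op = json.loads(line)
                self.store[op["key"]] = op["vector"]

    def put(self, key: str, vector: list[float]) -> None:
        record = json.dumps({"key": key, "vector": vector})
        with open(self.path, "a") as f:
            f.write(record + "\n")
            f.flush()
            os.fsync(f.fileno())  # durable before it "hits memory"
        self.store[key] = vector  # apply only after the log write succeeds


wal = TinyWAL("vectors.wal")
wal.put("doc1", [0.1, 0.9, 0.3])
```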
-
New blog 🎉 Why Apache Fluss Chose Rust for Its Multi-Language SDK

Why build one core instead of three divergent clients? 🤔 If you've ever seen non-JVM teams forced to reach for sidecars, wrappers, or half-finished clients just to talk to a data system, this one's for you.

A deep dive into how Apache Fluss (Incubating) built a shared Rust core for Rust, Python, and C++. 🦀 We unpack the architecture behind fluss-rust: one implementation of protocol, batching, retries, Arrow, and idempotence, with thin bindings on top. The result is less drift, fewer edge-case bugs, and a much cleaner path for multi-language access to real-time storage. ⚙️ 📦 ⚡

We also go into why Rust beat C for this job, how async complexity stays inside the core, and why Arrow makes zero-copy interop across languages actually practical. Plus: where this goes next - DataFusion, Go, AI agents, and a multi-protocol gateway. 🚀

📕 Read: https://lnkd.in/e9uGKKBy
💬 Comments welcome

#ApacheFluss #Rust #Python #Cpp #Arrow #DataEngineering #Streaming #RealTimeAnalytics #AI #OpenSource
Why Apache Fluss Chose Rust for Its Multi-Language SDK | Apache Fluss™ (Incubating) fluss.apache.org
-
I tried Mempalace, the memory system launched by Milla Jovovich. Yes, you read that right: Milla from The Fifth Element, Milla the Resident Evil hero.

The model mimics the way the human mind stores information: generalizing, characterizing, and associating concepts, creating taxonomies on the fly and representing them as graphs. Not a new idea at all, but in the case of Milla's project it is brought to the real world with high efficiency, accuracy, and usability. The system is reportedly beating records on standard tests and scoring high in GitHub downloads. Better news: it is open source and runs locally.

Setting up is really easy. First you pip-install it (it's Python, as you can see), then you run commands that let it read your projects. This basically ingests all of your project files, builds a taxonomy of concepts out of them, and stores it in the local database. And that's it.

The second step is more interesting. You run Codex and point it at the local MCP server (yes, Mempalace runs a local MCP server):

codex mcp add mempalace -- python -m mempalace.mcp_server

First I checked whether Codex was actually connected to the MCP: "Are you connected to some MCP, and if so, what tools are exposed?" Codex showed me the Mempalace MCP and all its tools.

Then I asked Codex about a concept I know is present in my project and that it should know about. I have a multi-project workspace fully controlled by Codex, with AGENTS.md/README.md files at each level and more AI-targeted documentation. The response was successful, but in the commands Codex ran I didn't see any calls to the MCP server tools. It used the context it already had from the AI-targeted files to build its response.

Then I asked Codex explicitly to search using the MCP tools. It did, and the response was also good. After this, I instructed Codex to remember its response: in the commands it ran, I could see the concepts being stored in Mempalace, so I was sure it was using Mempalace to store information.

Then I asked: how can I make you always use Mempalace as the main source to build your context? Codex basically responded that I could add it to the root AGENTS.md file. This is the content it added:

## Memory Source Priority
- For project-context questions, query MemPalace MCP before searching the codebase or the web.
- Treat MemPalace as the first source of stored project memory, including prior summaries, decisions, and indexed notes.
- If MemPalace returns relevant results, use that context to guide subsequent file inspection and implementation.
- If MemPalace does not contain enough useful context, fall back to local file inspection, then web search when needed.

After that, every question I asked produced a query to Mempalace during context building.

So that's my experience so far with Mempalace, from the beautiful Milla Jovovich. Mempalace repo: https://lnkd.in/deBQg82K
-
GoScrapy is a high-performance web scraping framework for Go, designed with the familiar architecture of Python's Scrapy. It provides a robust, developer-centric experience for building sophisticated data extraction systems, purposefully crafted for those making the leap from Python to the Go ecosystem.

Why GoScrapy? While low-level scraping libraries are powerful, many teams require the high-level architectural framework established by Scrapy. GoScrapy brings this architectural discipline natively to Go, organizing your request callbacks, middlewares, and pipelines into a structured, manageable workflow.

Instead of manually orchestrating retries, cookie isolation, or database handoffs, GoScrapy provides the engine that powers your spiders. You focus purely on the extraction logic; the framework manages the high-throughput lifecycle and concurrency in the background.