🚀 Day 2: Mastered Routing & Path Parameters in FastAPI! ⚡

The journey into FastAPI continues! Today was all about handling data directly within the URL. Coming from a Django background, I'm loving how clean and intuitive the routing feels here.

Here's what I tackled today:

📍 Path Parameters & HTTP Methods: I explored how to capture dynamic values from the URL using {curly_brackets} and how they interact with standard HTTP methods like GET and POST.

🔢 Path Parameters with Types: This is a game-changer! By using Python type hints (like : int or : str), FastAPI automatically handles:
- Data Validation: it returns a clear error if the wrong type is sent.
- Data Conversion: it converts the URL string into the correct Python type.

🔄 Does Order Matter? (Path Parameter Order): I learned that in FastAPI, the order of your route functions matters. If you have a static path like /users/me and a dynamic path like /users/{user_id}, the static one must come first to avoid being "caught" by the dynamic parameter!

📋 Predefined Values: Using Python's Enum, I learned how to restrict a path parameter to a specific set of valid options. This makes APIs incredibly robust and self-documenting.

🛠️ Path Converters: I dug into using :path to capture entire file paths (like files/images/photo.jpg) within a single parameter.

Current Status: Feeling more confident with every line of code. The way FastAPI handles documentation and validation simultaneously is a massive productivity boost! 🛠️💻

#FastAPI #Python #BackendDevelopment #WebAPI #LearningJourney #Coding #SoftwareEngineering #PythonDeveloper #Day2
FastAPI Routing & Path Parameters
More Relevant Posts
-
Just shipped memweave v0.2.0 — and the biggest addition is a CLI. 🖥️

The Python API was always the core, but a lot of agent workflows live outside Python — shell scripts, CI pipelines, subprocess tool calls. This is particularly useful for:

🔎 Inspecting agent memory without opening a Python REPL — browse what's indexed, check scores, read snippets directly in the terminal.
⚙️ Shell scripts and CI pipelines — index a workspace after a build, search for a known fact and fail the pipeline if it isn't there, or export results as JSON for downstream tools.
🤖 Agents that orchestrate subprocesses — an LLM running a bash tool can call memweave search and parse the JSON output without embedding the library.

Now you can index and search your agent's memory from anywhere:

memweave index --workspace ./project --embedding-model text-embedding-3-small
memweave search "which database did we pick?" --workspace ./project --json

Five commands in total: index, add, files, search, stats. The --json flag is what makes it composable — pipe results into jq, call it from any language, or wire it up as an MCP tool so an agent can query its own memory as a native tool call.

Everything stays local. No server, no cloud service — just a SQLite file on disk and plain Markdown files you can git diff.

🎥 Short demo in the video below — index a workspace, list every file currently tracked in the index (with its source label, chunk count, and whether it is evergreen), run a search, see ranked results with scores and answer sources. Full documentation + repo in the comments.

#AI #AIAgents #Python #agenticmemory
-
Scraped insight, one page at a time 🧠💡

I recently worked on a small but satisfying project: extracting quotes tagged with "life" from the website quotes.toscrape.com using Python.

Here's what I explored:
🔹 Automated pagination with requests
🔹 Parsed HTML using BeautifulSoup
🔹 Filtered content based on specific tags
🔹 Structured the extracted data into a clean pandas DataFrame

Instead of manually browsing pages, the script loops through all available pages, identifies quotes associated with the life tag, and stores both the quote and its author. Once no more pages are found, it neatly compiles everything into a dataset.

This project reinforced how powerful web scraping can be for:
✔️ Data collection
✔️ Content analysis
✔️ Building datasets from unstructured sources

Simple problem, clean solution, and a great reminder that automation saves time and effort.

#Python #WebScraping #BeautifulSoup #DataScience #Automation #LearningByDoing
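A self-contained sketch of the same pattern follows. To keep it runnable offline, inline HTML snippets (mimicking quotes.toscrape.com's markup) stand in for the pages the real script would fetch with requests in its pagination loop:

```python
from bs4 import BeautifulSoup

# Two tiny stand-ins for paginated result pages; the real script
# would download each one with requests.get(...) until no "next"
# link is found.
PAGES = [
    """
    <div class="quote">
      <span class="text">Life is what happens while you are busy making other plans.</span>
      <small class="author">Allen Saunders</small>
      <a class="tag">life</a>
    </div>
    <div class="quote">
      <span class="text">Simplicity is the ultimate sophistication.</span>
      <small class="author">Leonardo da Vinci</small>
      <a class="tag">design</a>
    </div>
    """,
    """
    <div class="quote">
      <span class="text">Get busy living or get busy dying.</span>
      <small class="author">Stephen King</small>
      <a class="tag">life</a>
    </div>
    """,
]

rows = []
for html in PAGES:
    soup = BeautifulSoup(html, "html.parser")
    for quote in soup.select("div.quote"):
        # Filter on the tag list: keep only quotes tagged "life"
        tags = [a.get_text() for a in quote.select("a.tag")]
        if "life" in tags:
            rows.append({
                "quote": quote.select_one("span.text").get_text(),
                "author": quote.select_one("small.author").get_text(),
            })

# The list of dicts drops straight into pandas: pd.DataFrame(rows)
print(len(rows))  # 2 quotes carry the "life" tag
```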
-
Most FastAPI tutorials stop at:

pip install fastapi

But real-world projects don't. While working on backend projects, I kept running into:
- dependency chaos
- environment configuration issues
- messy secret management

So I explored a better approach using uv and put together a write-up: FastAPI Beyond pip - Master Secrets & Envs with uv

In this article, I cover:
• a cleaner way to manage dependencies
• handling environment variables properly
• structuring FastAPI projects more like production systems

Here's the full breakdown: https://lnkd.in/eaU8SFCV

I'm currently exploring opportunities and always open to connecting with others working in backend and Python. How are you handling environments and secrets in your projects?

#FastAPI #Python #BackendDevelopment #DevTools #OpenToWork #SoftwareEngineering
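The article's exact approach is behind the link, but one common stdlib-only pattern for "handling environment variables properly" is to centralize them in a frozen settings object instead of scattering os.environ lookups through the codebase. A minimal sketch (variable names here are illustrative, not from the article):

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    # Values come from the environment — e.g. a .env file loaded by
    # your tooling before startup — so secrets never live in the repo.
    database_url: str
    debug: bool


def load_settings() -> Settings:
    return Settings(
        database_url=os.environ.get("DATABASE_URL", "sqlite:///./app.db"),
        debug=os.environ.get("DEBUG", "false").lower() == "true",
    )


settings = load_settings()
print(settings.database_url)
```

Because the dataclass is frozen, configuration can't be mutated at runtime, and every place that needs a setting imports it from one module.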
-
Same condition. Same variables. Different result… depending on how you write it. 🤯 This is where Python stops being "easy" and starts being precise. 🧠

Today's concept: Truthiness, Short-Circuiting & Operator Precedence. Three small ideas. Massive impact.

# 1. Truthiness (not just True/False)
data = []
if data:
    print("Has data")
else:
    print("Empty ❌")

👉 Empty values ([], {}, "", 0, None) are falsy
👉 Everything else is truthy

# 2. Short-circuiting with `and` (Python stops early)
def check():
    print("Checking...")
    return True

result = False and check()
print(result)

👉 Output: False
👉 check() NEVER runs
Because False and anything is already False, Python doesn't evaluate further.

# 3. Short-circuiting with `or`
def fallback():
    print("Fallback executed")
    return "Default"

value = "Data" or fallback()
print(value)

👉 Output: Data
👉 fallback() NEVER runs
Because a truthy left operand already decides `or`, Python doesn't evaluate further.

# 4. Operator precedence (silent bugs ⚠️)
a = True
b = False
c = False
result = a or b and c
print(result)

👉 Output: True
Because `and` binds tighter than `or`, Python reads it as a or (b and c), NOT (a or b) and c.

⚠️ Real-world bug pattern:

# Looks correct, but isn't
if user == "admin" or "manager":
    print("Access granted")

👉 ALWAYS True ❌ — the non-empty string "manager" is truthy on its own.

Correct way:
if user == "admin" or user == "manager":

💡 Advanced takeaway:
`and` returns its first falsy operand, or the last operand if all are truthy.
`or` returns its first truthy operand, or the last operand if all are falsy.
Conditions don't always return True/False — they return actual values.

#Python #AdvancedPython #CodingJourney #LearnInPublic #100DaysOfCode #SoftwareEngineering #Debugging #TechSkills
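The "operators return actual values" takeaway is easy to verify in a few lines. A small runnable demonstration:

```python
def expensive():
    # Never reached in the expressions below — short-circuiting
    # stops evaluation before the call.
    raise RuntimeError("never called")


# `or` returns its first truthy operand: "" is falsy, so "guest" wins.
name = "" or "guest"
assert name == "guest"

# `and` returns its first falsy operand and stops there:
# 0 already decides the result, so expensive() is never invoked.
count = 0 and expensive()
assert count == 0

# Common idiom built on this: defaulting a possibly-empty value.
items = None
safe_items = items or []
print(safe_items)  # []
```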
-
I am happy to share 🥳🥳🥳 🚀 Just shipped my first open-source Python library: pyctxlog

Ever tried tracing a single request across 40 log lines and given up? That's the problem I built this for.

pyctxlog is a tiny decorator that auto-tags every log line inside a function with per-call context — request id, job name, tenant, whatever you want — using contextvars so it works correctly across threads and async tasks.

@log_context(fields={"job": "ingest_orders"})
def run_ingest(batch_id):
    log.info(f"processing {batch_id}")
    # every line inside is auto-tagged with job + id

✅ Sync + async auto-detected
✅ Works with Django, FastAPI, Celery, or plain functions
✅ Zero framework assumptions — truly generic
✅ Python 3.9+, MIT licensed

pip install pyctxlog

🔗 https://lnkd.in/dx9HpvXt
🐙 https://lnkd.in/df4rAkR4

Feedback very welcome — this is v0.1.0 and I'd love to hear what you'd want in v0.2.

#Python #OpenSource #Logging #Observability #SoftwareEngineering
-
How do you handle GET requests in your DRF projects?

I have noticed a common point of confusion among Django developers: whether to structure response data directly in the view or use a serializer. Here is my take: always use serializers, even for read-only operations.

Read-only fields in a serializer give you a clean, declarative way to shape your API responses. Instead of manually reshaping dictionaries inside the view, which quickly becomes unmaintainable, serializers act as a contract between your database models and the outside world. They allow you to rename fields, expose computed properties, nest related objects, and keep your views lean and focused on orchestration rather than transformation.

But there is one critical performance caveat. If your serializer pulls data from multiple related objects, make sure you use prefetch_related or select_related on your queryset before passing it to the serializer. Otherwise you will run into the classic N+1 query problem: one query for the main objects plus one query for each related object. That scales terribly.

Good serialization is about control over your data shape. Good performance is about intention in your query planning.

Do you structure your GET responses in serializers or directly in the view? What is your team's standard?

#Django #DRF #APIDesign #Python #WebDevelopment #BackendBestPractices
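An illustrative sketch of both halves of that advice — the serializer as contract, and the queryset fixing N+1 up front. Model and field names here are hypothetical, and this assumes a configured Django project with DRF installed (it is not runnable standalone):

```python
from rest_framework import serializers


class AuthorSerializer(serializers.ModelSerializer):
    class Meta:
        model = Author  # hypothetical model
        fields = ["id", "name"]


class BookSerializer(serializers.ModelSerializer):
    author = AuthorSerializer(read_only=True)            # nested related object
    display_title = serializers.SerializerMethodField()  # computed property

    class Meta:
        model = Book  # hypothetical model
        fields = ["id", "display_title", "author"]

    def get_display_title(self, obj):
        return obj.title.title()


# In the view: fetch the related authors in the same query.
# Without select_related, serializing N books issues N extra
# author queries — the N+1 problem described above.
def get_queryset(self):
    return Book.objects.select_related("author")
```

For reverse or many-to-many relations (e.g. a book's reviews), prefetch_related is the equivalent tool.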
-
🐍 Simplifying Web Data Extraction with BeautifulSoup

Recently, I explored how to use BeautifulSoup to quickly extract structured data from websites—and it's one of the easiest ways to get started with web scraping.

Here's a simple approach:
🔹 Send a request to a webpage using Python
🔹 Parse the HTML content using BeautifulSoup
🔹 Locate elements (tags, classes, IDs)
🔹 Extract useful data (text, links, prices, etc.)

🛠 Tools Used:
• Python
• BeautifulSoup
• Requests library

💡 Key Takeaway: With just a few lines of code, you can turn unstructured web pages into usable datasets—perfect for building data-driven apps, research tools, or automation workflows.

⚠️ Always respect website terms and use scraping responsibly.

A great starting point for anyone getting into data extraction and automation.

#Python #WebScraping #BeautifulSoup #DataEngineering #Automation #OpenSource
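The locate-then-extract steps can be shown without a live request. A minimal sketch over inline HTML (the markup and class names are invented for illustration; a real script would first fetch the page with requests):

```python
from bs4 import BeautifulSoup

html = """
<div id="products">
  <h2 class="name">Widget</h2><span class="price">$9.99</span>
  <h2 class="name">Gadget</h2><span class="price">$19.99</span>
  <a href="/next">Next page</a>
</div>
"""

soup = BeautifulSoup(html, "html.parser")

# Locate elements by tag + class, then extract their text ...
names = [h.get_text() for h in soup.find_all("h2", class_="name")]
prices = [s.get_text() for s in soup.find_all("span", class_="price")]

# ... and pull attributes (like links) out of tags directly.
links = [a["href"] for a in soup.find_all("a")]

print(list(zip(names, prices)))  # [('Widget', '$9.99'), ('Gadget', '$19.99')]
print(links)                     # ['/next']
```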
-
Day 15/365: Merging Two Dictionaries with Summed Values in Python 🧮🔗

Today I worked on a very common real-world task: merging two dictionaries where overlapping keys should have their values added together.

🧠 What this code does:

I start with two dictionaries:

d1 = {1: 10, 2: 20, 3: 30}
d2 = {3: 40, 5: 50, 6: 60}

Each key can represent something like a product ID with its total sales, a student ID with total marks, or a user ID with total points.

The goal is to combine d2 into d1:
- If a key from d2 already exists in d1, I add the values.
- If the key doesn't exist in d1, I insert it.

Step by step: I loop over each key i in d2 (for i in d2). For each key, if i is already a key in d1, I update d1[i] by adding d2[i] to it; otherwise, I create a new entry in d1 with that key and its value from d2. After the loop finishes, d1 contains the merged result.

For the given dictionaries:
- Key 3 exists in both, so its values are added: 30 + 40 = 70.
- Keys 5 and 6 only exist in d2, so they are added as new keys.

Final output: {1: 10, 2: 20, 3: 70, 5: 50, 6: 60}

💡 What I learned:
- How to merge two dictionaries manually using a loop and conditions.
- How to update values in a dictionary when keys overlap.
- How this pattern appears in real data tasks like combining monthly reports, merging user activity stats, and aggregating counts from multiple sources.

Next, I'd like to explore:
- Handling much larger dictionaries efficiently.
- Using dictionary methods like update() or Counter from collections to compare approaches.
- Trying the same logic with string keys (like product names) instead of numbers.

Day 15 done ✅ 350 more to go.

Got any other dictionary + loop problems (like counting frequencies from multiple sources or merging configs)? Drop them in the comments—I'd love to try them next.

#100DaysOfCode #365DaysOfCode #Python #Dictionaries #DataStructures #LogicBuilding #CodingJourney #LearnInPublic #AspiringDeveloper
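The loop described above, plus the collections.Counter alternative mentioned as a next step, in one runnable sketch:

```python
from collections import Counter

d1 = {1: 10, 2: 20, 3: 30}
d2 = {3: 40, 5: 50, 6: 60}

# Manual merge: add values for overlapping keys, insert the rest.
for key in d2:
    if key in d1:
        d1[key] += d2[key]
    else:
        d1[key] = d2[key]

print(d1)  # {1: 10, 2: 20, 3: 70, 5: 50, 6: 60}

# Same result via Counter addition (suited to positive numeric values):
merged = Counter({1: 10, 2: 20, 3: 30}) + Counter({3: 40, 5: 50, 6: 60})
print(dict(merged))  # {1: 10, 2: 20, 3: 70, 5: 50, 6: 60}
```

Note one Counter caveat: `+` drops keys whose summed value is zero or negative, so the manual loop (or `d1[key] = d1.get(key, 0) + d2[key]`) is safer for arbitrary numbers.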
-
Day 10/365: Building a List from User Input & Finding Basic Stats 🔢📥

Today I wrote a Python program that takes numbers from the user, stores them in a list, and then calculates some basic statistics: sum, average, minimum, and maximum.

What the code does step by step:
- First, I ask the user how many elements they want to enter and store that in n.
- I create an empty list l and a variable total to keep track of the sum.
- Using a for loop, I take n inputs from the user: each number is added to the list using append(), and at the same time I keep adding each number to total to calculate the sum.
- After the loop, I print the full list, print the sum using the total variable, then calculate the average as total / n and print it.
- To find the minimum and maximum, I start by assuming both are the first element of the list. I loop through the list and update min if I find a smaller value, and similarly update max if I find a larger value.
- In the end, I print the minimum and maximum numbers in the list.

What I learned from this exercise:
- How to take multiple inputs from a user and store them in a list.
- How to maintain a running sum while taking inputs.
- How to manually compute average, minimum, and maximum without using built-in functions like sum(), min(), or max().
- How loops and variables can work together to build simple but useful statistics — a basic idea used a lot in data analysis.

Day 10 done ✅ 355 more to go.

If you have ideas like extending this to find median, mode, or standard deviation, send them to me — I'd love to try them next.

#100DaysOfCode #365DaysOfCode #Python #LogicBuilding #Lists #UserInput #CodingJourney #LearnInPublic #AspiringDeveloper
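The steps above can be sketched as follows. To keep it non-interactive, a fixed sample list stands in for the input() loop:

```python
# Stand-in for the user-entered values (the original reads n numbers
# with input() and append()s each one).
numbers = [4, 7, 1, 9, 3]

# Running sum maintained while "collecting" the values.
total = 0
for n in numbers:
    total += n

average = total / len(numbers)

# Manual min/max: assume the first element, then update while scanning.
smallest = largest = numbers[0]
for n in numbers[1:]:
    if n < smallest:
        smallest = n
    if n > largest:
        largest = n

print(numbers)
print(total, average, smallest, largest)  # 24 4.8 1 9
```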