While working with databases in FastAPI, one small feature saved me a lot of time and effort: using dictionaries to handle data efficiently.

Instead of manually writing out each column when creating a new entry in the database, you can simply use:

new_post = Post(**post.dict())

Here, post is the request body (a Pydantic model), and .dict() converts all of its fields into a dictionary, which is then unpacked directly into the database model. (In Pydantic v2, .dict() is deprecated in favor of .model_dump(), but the unpacking trick is the same.)

Why is this useful?
• No need to manually map each field
• Cleaner and more readable code
• Fewer chances of missing or incorrect fields
• Faster development

This approach becomes especially powerful when your table (like "post") has many columns. Rather than repeating yourself, you let Python handle it smartly.

Small optimizations like this make backend development more efficient and enjoyable 🚀

#FastAPI #Python #BackendDevelopment #APIs #LearningJourney
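A minimal, framework-free sketch of the same unpacking trick (plain dataclasses stand in for the Pydantic request model and the database model; the field names here are illustrative, not the post's actual schema):

```python
from dataclasses import dataclass

# Stand-in for the database model (illustrative fields)
@dataclass
class Post:
    title: str
    content: str
    published: bool

# Stand-in for post.dict() / post.model_dump() on the validated request body
body = {"title": "Hello", "content": "First post", "published": True}

# ** unpacks the dict into keyword arguments, i.e. it expands to
# Post(title="Hello", content="First post", published=True)
new_post = Post(**body)
print(new_post)
```

The same one-liner works however many columns the table grows to, which is exactly the appeal.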
Optimize Backend Development with Dictionaries in FastAPI
More Relevant Posts
Technical post: I've been posting some graphs on here, talking about functions and "equivalence". This all started while porting an MLOps framework from Python 3.10 to 3.12, and all the "dependency hell" one has to go through. Naturally the question arose: what are the boundaries of one project to another, in terms of which functions call which?

That led me down the rabbit hole (not too deep) of what happens when I do something like python -m <module> <somescript>. Specifically: what is a "no-op" module, and what kinds of ops can we inject, thanks to Python being an interpreted language?

A few years ago I'd worked on something along similar lines called TracePath, which provided a decorator to do something similar (e.g. who called whom, how long it took, etc.). So I merged these two ideas (avoid decorating every function; have an "inspector" module) and ran this on a simple pandas DataFrame creation. The resulting function invocation graph is the image attached to this post. When I ran it across the whole workflow (create, load, transform data, etc.), the graph had ~9000 connections. The nice thing is I can specify which modules (e.g. only pandas, or pandas and numpy) should be added to the graph.

What do you think is the next logical thing to do with something like this? What kind of graphs would well-structured software produce? How about badly written software?

#graphs #swe #dependencyhell #python
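For anyone curious how an "inspector" can collect caller/callee edges without decorating every function: a minimal sketch using the standard sys.settrace hook (the real tool presumably also filters by module and exports a graph; workload and helper here are toy functions I made up):

```python
import sys

edges = set()  # (caller, callee) pairs

def tracer(frame, event, arg):
    # The global trace function fires on every Python function call;
    # record which function's frame called which.
    if event == "call":
        caller = frame.f_back
        if caller is not None:
            edges.add((caller.f_code.co_name, frame.f_code.co_name))
    return tracer

def helper():
    return 42

def workload():
    return helper()

sys.settrace(tracer)
workload()
sys.settrace(None)

print(("workload", "helper") in edges)  # True
```

Feeding those edges into any graph library gives exactly the kind of invocation graph described in the post.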
🚀 Day 5/30 of My LeetCode Journey (Python + SQL)
Staying consistent and pushing forward every single day! 💻🔥

🔹 **SQL Problem of the Day** 👉 *Customers Who Never Order*
Given two tables `Customers` and `Orders`, write a query to find all customers who never placed any order.
💡 *Key Concept:* LEFT JOIN + filtering NULL values to identify missing relationships.

🔹 **Python Problem of the Day** 👉 *Search Insert Position*
Given a sorted array of distinct integers and a target value, return the index if found. If not, return the index where it would be inserted.
💡 *Key Concept:* Binary Search (O(log n)) for efficient searching.

Understanding patterns like joins and binary search is making problem-solving faster and cleaner 📈

Day 5 done ✅ Let's keep going!

#LeetCode #30DaysChallenge #Python #SQL #CodingJourney #Consistency #BinarySearch #ProblemSolving
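Runnable sketches of both problems, using Python's built-in sqlite3 for the SQL one (the table contents are the classic sample data from the problem statement):

```python
import sqlite3

# --- SQL: Customers Who Never Order ---
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE Customers (id INTEGER, name TEXT);
    CREATE TABLE Orders (id INTEGER, customerId INTEGER);
    INSERT INTO Customers VALUES (1,'Joe'),(2,'Henry'),(3,'Sam'),(4,'Max');
    INSERT INTO Orders VALUES (1,3),(2,1);
""")
# LEFT JOIN keeps every customer; those with no matching order come back
# with NULL on the Orders side, which the WHERE clause filters for.
no_orders = [row[0] for row in con.execute("""
    SELECT c.name FROM Customers c
    LEFT JOIN Orders o ON c.id = o.customerId
    WHERE o.id IS NULL
    ORDER BY c.id
""")]
print(no_orders)  # ['Henry', 'Max']

# --- Python: Search Insert Position via binary search, O(log n) ---
def search_insert(nums, target):
    lo, hi = 0, len(nums)
    while lo < hi:
        mid = (lo + hi) // 2
        if nums[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo  # index of target if present, else its insertion point

print(search_insert([1, 3, 5, 6], 5))  # 2
print(search_insert([1, 3, 5, 6], 2))  # 1
```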
Today's topic: recursion. A function that calls itself. Sounds simple, right?

Here are two ways to add up a list of numbers.

Without recursion — honest, reliable, easy to follow:

def suma(lista):
    suma = 0
    for i in range(len(lista)):
        suma = suma + lista[i]
    return suma

print(suma([6, 3, 4, 2, 10]))  # 25

With recursion — elegant, almost poetic... and a little terrifying:

def suma(lista):
    if len(lista) == 1:
        return lista[0]
    else:
        return lista[0] + suma(lista[1:])

print(suma([6, 3, 4, 2, 10]))  # 25

Same result. Two completely different roads to get there.

The recursive version looks more "pro" — but if you forget to define when it stops, the function keeps calling itself until Python cuts it off with a RecursionError. 💀 (This one even has a sneaky edge case: call it on an empty list and the base case never fires.)

So yes, it's getting challenging. And yes, recursion feels more elegant to write. But I'm not ready to fully trust something that could loop into oblivion if I blink wrong.

Lesson of the day: simple is not the same as bad. And documenting the moments that confuse you? That's part of learning too.

#Python #LearningToCode #DaysOfCode #PythonProgramming #CodingJourney #Recursion #BeginnerCoder #TechLearning #CodeNewbie #LinkedInLearning
🚀 Day 13/30 of My LeetCode Journey (Python + SQL)
Showing up every day and pushing my limits a little more! 💻🔥

🔹 SQL Problem of the Day 👉 Customer with Most Orders
Given an Orders table, write a query to find the customer_number who has placed the highest number of orders.
💡 Key Concept: GROUP BY + COUNT() with ordering/aggregation to find the maximum.

🔹 Python Problem of the Day 👉 Subarray Sum Equals K
Given an array and an integer k, return the total number of subarrays whose sum equals k.
💡 Key Concept: Prefix Sum + HashMap, optimizing from O(n²) → O(n).

Learning how to optimize brute-force solutions into efficient ones is a big win ⚡

Day 13 done ✅

#LeetCode #30DaysChallenge #Python #SQL #CodingJourney #Consistency #ProblemSolving #PrefixSum #Learning
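A minimal sketch of the prefix-sum + hashmap approach to Subarray Sum Equals K:

```python
def subarray_sum(nums, k):
    # seen[s] = how many prefixes so far have sum s;
    # the empty prefix (sum 0) counts once.
    seen = {0: 1}
    prefix = 0
    count = 0
    for x in nums:
        prefix += x
        # A subarray ending here sums to k iff some earlier
        # prefix equals prefix - k.
        count += seen.get(prefix - k, 0)
        seen[prefix] = seen.get(prefix, 0) + 1
    return count

print(subarray_sum([1, 1, 1], 2))  # 2  ([1,1] twice)
print(subarray_sum([1, 2, 3], 3))  # 2  ([1,2] and [3])
```

One pass, one dict lookup per element, so O(n) instead of the O(n²) double loop.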
Most “slow APIs” in Python aren’t CPU-bound. They’re blocking the event loop without realizing it.

Classic FastAPI mistake:

@app.get("/users")
async def get_users():
    users = db.fetch_all()  # blocking call
    return users

Looks async. Isn’t. Result:
* event loop stalls
* requests queue up
* latency spikes under load

Fix → respect async boundaries:

@app.get("/users")
async def get_users():
    users = await db.fetch_all()
    return users

Or offload properly:

from asyncio import to_thread
users = await to_thread(sync_db_call)

Advanced production pattern:
* separate sync + async layers clearly
* use async connection pools (asyncpg, aiomysql)
* never mix blocking ORM calls inside async routes

Hidden issue: one blocking call can freeze thousands of concurrent requests.

Build-in-public lesson: async isn’t about syntax. It’s about protecting the event loop at all costs. AI can convert code to async — but only experience catches where it’s still secretly blocking.

#Python #BackendEngineering #FastAPI #Scalability #SystemDesign
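A self-contained toy demo of the to_thread fix (sync_db_call is a stand-in that just sleeps; no real database or FastAPI involved): two 0.2-second tasks overlap instead of running back to back, because the blocking call is moved off the event loop.

```python
import asyncio
import time

def sync_db_call():
    # Stand-in for a blocking driver call (illustrative, not a real DB API)
    time.sleep(0.2)
    return ["alice", "bob"]

async def main():
    start = time.perf_counter()
    # to_thread runs the blocking call in a worker thread, so the event
    # loop stays free and the sleeping coroutine makes progress meanwhile.
    users, _ = await asyncio.gather(
        asyncio.to_thread(sync_db_call),
        asyncio.sleep(0.2),
    )
    return users, time.perf_counter() - start

users, elapsed = asyncio.run(main())
print(users, f"{elapsed:.2f}s")  # total is ~0.2s, not 0.4s
```

Replace asyncio.to_thread with a direct sync_db_call() inside main and the total jumps to ~0.4s: that gap is exactly the event-loop stall the post is warning about.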
Day 10/365: Building a List from User Input & Finding Basic Stats 🔢📥

Today I wrote a Python program that takes numbers from the user, stores them in a list, and then calculates some basic statistics: sum, average, minimum, and maximum.

What the code does, step by step:
• First, I ask the user how many elements they want to enter and store that in n. I create an empty list l and a variable total to keep track of the sum.
• Using a for loop, I take n inputs from the user. Each number is added to the list using append(), and at the same time added to total to build up the sum.
• After the loop, I print the full list, print the sum using the total variable, then calculate the average as total / n and print it.
• To find the minimum and maximum, I start by assuming both are the first element of the list, then loop through the list, updating min when I find a smaller value and max when I find a larger one.
• In the end, I print the minimum and maximum numbers in the list.

What I learned from this exercise:
• How to take multiple inputs from a user and store them in a list.
• How to maintain a running sum while taking inputs.
• How to manually compute average, minimum, and maximum without using built-in functions like sum(), min(), or max().
• How loops and variables can work together to build simple but useful statistics — a basic idea used a lot in data analysis.

Day 10 done ✅ 355 more to go.

If you have ideas, like extending this to find median, mode, or standard deviation, send them to me. I'd love to try them next.

#100DaysOfCode #365DaysOfCode #Python #LogicBuilding #Lists #UserInput #CodingJourney #LearnInPublic #AspiringDeveloper
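The steps above can be sketched like this, with a fixed list standing in for the input() loop so it runs non-interactively:

```python
def basic_stats(nums):
    # Running sum, built up one element at a time (no sum())
    total = 0
    for x in nums:
        total += x
    avg = total / len(nums)

    # Assume the first element is both min and max, then scan
    # the list, updating each when a smaller/larger value appears
    mn = mx = nums[0]
    for x in nums:
        if x < mn:
            mn = x
        if x > mx:
            mx = x
    return total, avg, mn, mx

nums = [4, 7, 1, 9, 3]   # stands in for n numbers typed by the user
print(basic_stats(nums)) # (24, 4.8, 1, 9)
```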
I am happy to share 🥳🥳🥳 🚀 Just shipped my first open-source Python library: pyctxlog

Ever tried tracing a single request across 40 log lines and given up? That's the problem I built this for.

pyctxlog is a tiny decorator that auto-tags every log line inside a function with per-call context — request id, job name, tenant, whatever you want — using contextvars so it works correctly across threads and async tasks.

@log_context(fields={"job": "ingest_orders"})
def run_ingest(batch_id):
    log.info(f"processing {batch_id}")
    # every line inside is auto-tagged with job + id

✅ Sync + async auto-detected
✅ Works with Django, FastAPI, Celery, or plain functions
✅ Zero framework assumptions — truly generic
✅ Python 3.9+, MIT licensed

pip install pyctxlog

🔗 https://lnkd.in/dx9HpvXt
🐙 https://lnkd.in/df4rAkR4

Feedback very welcome — this is v0.1.0 and I'd love to hear what you'd want in v0.2.

#Python #OpenSource #Logging #Observability #SoftwareEngineering
🚀 Last month, I built and published my first Python package — Pristinizer

I wanted to solve a simple but real problem in data science: 👉 cleaning and understanding raw datasets takes way too much time. So I built Pristinizer, a lightweight Python package that helps streamline data cleaning + EDA in just a few lines of code.

🔍 What Pristinizer does:
• Cleans messy datasets (duplicates, missing values, column formatting)
• Generates structured dataset summaries
• Visualizes missing data (heatmap, matrix, bar chart)

⚙️ Tech stack: Python • pandas • matplotlib • seaborn

📦 Try it out:

pip install pristinizer

import pristinizer as ps
df = ps.clean(df)
ps.summarize(df)
ps.missing_heatmap(df)

🧠 What I learned while building this:
• Designing a clean and intuitive API
• Structuring a real-world Python package
• Publishing to PyPI
• Writing proper documentation for users

📌 Next, I'm planning to add:
• Outlier detection
• Automated preprocessing pipelines
• Advanced EDA reports

Would love to hear your thoughts or feedback!

#Python #DataScience #MachineLearning #OpenSource #Pandas #EDA #Projects
Built a quick little project this week: justaskit

The idea was simple: most data tools make you learn SQL just to ask basic questions. So I made one where you just... ask. In plain English.

Upload a CSV, type "show me top 3 products by revenue", and it spits out a chart with an explanation in about 8 seconds.

Under the hood it's a multi-agent system built with LangGraph, where separate agents handle the analysis, visualization, and insights. Added full code transparency too, so you can see exactly what it's doing.

Stack: Python, FastAPI, Next.js 15, LangGraph, pandas

GitHub link in the comments if you want to check it out!

#AI #OpenSource #LangGraph #Python #BuildInPublic
Mastering Data Ingestion: Why NumPy is the Standard

For anyone working with numerical data in Python, the transition from built-in functions to NumPy is a game-changer. While Python’s open() function handles the basics, NumPy arrays offer a level of efficiency and speed that standard lists simply cannot match.

Why use NumPy for flat files?
• The industry standard: NumPy arrays are the backbone of the Python data ecosystem.
• Essential for ML: if you plan to use libraries like scikit-learn, your data needs to be in NumPy format.
• Built-in efficiency: functions like loadtxt() and genfromtxt() make importing arrays seamless.

Pro tips for np.loadtxt()
When importing data, the real power lies in the customization arguments:
• delimiter: remember that the default is whitespace. For CSVs, always specify delimiter=','.
• skiprows: perfect for bypassing headers (e.g. skiprows=1) so string labels don't break your numerical array.
• usecols: optimization starts at ingestion. Only grab what you need by passing a list of indices, like usecols=[0, 2].
• dtype: control your data types from the start (e.g. dtype='str').

The catch: while loadtxt() is excellent for clean, uniform datasets, it hits a wall with mixed data types (like the Titanic dataset). When your columns vary between strings and floats, it’s time to level up to genfromtxt() or move into the world of Pandas.

#DataEngineering #python #Numpy #Learninginpublic
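A quick illustration of those loadtxt arguments, plus the genfromtxt fallback for mixed types (the two-row CSV is made up for the example):

```python
import numpy as np
from io import StringIO

# Illustrative CSV with a header row and one string column
csv = StringIO("name,age,score\nalice,30,1.5\nbob,25,2.0\n")

# loadtxt: skip the header row, grab only the numeric columns
data = np.loadtxt(csv, delimiter=",", skiprows=1, usecols=[1, 2])
print(data)        # [[30.   1.5]
                   #  [25.   2. ]]

# genfromtxt: handles the mixed types loadtxt chokes on, returning a
# structured array whose fields are named after the header row
csv.seek(0)
mixed = np.genfromtxt(csv, delimiter=",", names=True, dtype=None,
                      encoding="utf-8")
print(mixed["name"], mixed["age"])
```

Once fields start needing cleaning or joins rather than just ingestion, that structured array is the natural hand-off point to pandas.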