While converting my API from synchronous to asynchronous, I ran into an error: "author: Input should be a valid dictionary or instance of UserOut".

While debugging, I discovered something interesting: the problem was lazy loading in SQLAlchemy. By default, relationships in SQLAlchemy are lazy-loaded, meaning related objects like author or comments are not fetched until they are accessed. When my Pydantic schemas tried to serialize the response, they threw an error because those related objects had not yet been loaded.

This error pushed me to explore lazy loading vs. eager loading. Lazy loading fetches only the main object in the initial query, while eager loading fetches the related objects as well. Along the way I also learned about the N+1 query problem and how loading strategies can impact an API's performance.

I resolved the issue by using selectinload() in my query, although joinedload() can also work depending on the situation.

What started as a confusing error ended up becoming a great lesson in how ORMs fetch data and why controlling loading strategies matters when building APIs.

#FastAPI #Python #BackendDevelopment
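As a sketch of the fix (the Post model, its relationships, and the session name are hypothetical; select(), selectinload(), and joinedload() are real SQLAlchemy 2.0 APIs):

```python
from sqlalchemy import select
from sqlalchemy.orm import selectinload

async def list_posts(db):
    # db is an AsyncSession; Post.author / Post.comments are assumed relationships.
    result = await db.execute(
        select(Post).options(
            selectinload(Post.author),    # one extra SELECT ... WHERE id IN (...)
            selectinload(Post.comments),  # loads all comments up front, avoiding N+1
        )
    )
    return result.scalars().all()
```

With the relationships loaded inside the session, Pydantic can serialize author without triggering a lazy load in an async context, which is what raised the error above.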
Resolving SQLAlchemy Lazy Loading Error in FastAPI
More Relevant Posts
One of the most common FastAPI mistakes is reusing SQLAlchemy models as Pydantic response schemas. This doesn't just couple your layers; it makes it easy to accidentally expose fields like hashed passwords.

In my latest post, I share 8 sections of battle-tested patterns for structuring your FastAPI applications:

1️⃣ Centralized Config: Using pydantic-settings instead of scattered os.environ calls.
2️⃣ Dependency Injection: Leveraging Depends() for cleaner auth and DB sessions.
3️⃣ Business Logic: Moving core logic into pure Python service functions for easier unit testing.
4️⃣ Error Handling: Using app-level exception handlers to remove "try/except" noise from routes.

Swipe through the PDF for the code examples and folder structures! ➡️

#FastAPI #Python #BackendDevelopment #WebDevelopment #CleanCode
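A minimal sketch of the schema-separation point (response_model and Pydantic v2's from_attributes are real APIs; the model and field names here are illustrative, and app/get_db are assumed to exist):

```python
from pydantic import BaseModel, ConfigDict

# The SQLAlchemy User model (not shown) has: id, email, hashed_password.

class UserOut(BaseModel):
    model_config = ConfigDict(from_attributes=True)  # read from ORM attributes
    id: int
    email: str
    # hashed_password is deliberately absent: it can never leak into a response.

@app.get("/users/{user_id}", response_model=UserOut)
def read_user(user_id: int, db=Depends(get_db)):
    return db.get(User, user_id)  # ORM object; FastAPI filters it through UserOut
```

The ORM model stays an internal detail; the response schema is the public contract.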
Topic 12/100 🚀

🧠 Topic 12: Mixins

Ever wanted to add specific features to a class without creating a messy, deep inheritance tree? 🧬 That's where Mixins come in.

👉 What is it? A Mixin is a specialized type of multiple inheritance. It's a class that provides methods to other classes but isn't meant to stand on its own. Think of it as a "plugin" for your classes.

👉 Use Case: Used in real-world applications for:
- Logging: Adding log() capabilities to any service.
- Authentication: Giving specific views the ability to check permissions.
- JsonSerialization: Adding a to_json() method to various data models.

👉 Why it's Helpful:
- Modularity: Keeps small features separated.
- Avoids Duplication: Write once, "mix in" everywhere.
- Clean Hierarchy: Keeps your main class inheritance focused on what the object is, while Mixins handle what it does.

💻 Example (Python):

    class LoggerMixin:
        def log(self, message):
            print(f"Log: {message}")

    class MyService(LoggerMixin):
        def run(self):
            self.log("Service is running")

    service = MyService()
    service.run()

🧠 What's happening here? MyService isn't necessarily a "type of" Logger, but it wants the "ability" to log. By inheriting from LoggerMixin, it gains that specific tool without a complex setup.

⚡ Pro Tip: In frameworks like Django, Mixins are everywhere (like LoginRequiredMixin). They are the secret to keeping large codebases organized and DRY (Don't Repeat Yourself).

💬 Follow this series for more Topics

#Python #BackendDevelopment #100TopicOfCode #SoftwareEngineering #LearnInPublic #Mixins #CleanCode
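The JsonSerialization use case from the list above can be sketched the same way (the class names are mine, stdlib only):

```python
import json

class JsonMixin:
    """Adds a to_json() ability to any class that stores state in __dict__."""
    def to_json(self):
        return json.dumps(self.__dict__, sort_keys=True)

class Point(JsonMixin):
    def __init__(self, x, y):
        self.x = x
        self.y = y

print(Point(1, 2).to_json())  # {"x": 1, "y": 2}
```

Point is not a "type of" JSON serializer; it just mixes in the ability.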
I’ve been polishing a personal project called ExcelAlchemy, and it’s now at its first stable public release: 2.0.0.

ExcelAlchemy is a schema-driven Python library for Excel import/export workflows. It turns Pydantic models into typed workbook contracts: generate templates, validate uploads, write failures back to rows and cells, and keep workbook-facing output locale-aware.

A lot of the work in this project was not just about making it work, but about making it feel like a real library:
- modern Python typing and stricter static analysis
- a cleaner validation pipeline around Pydantic v2
- protocol-based storage boundaries
- pandas removed from the runtime path
- contract tests, Ruff, Pyright, and release-focused documentation

I also treated the repository as a design artifact: not just code, but a record of architectural tradeoffs, migration strategy, and package design decisions.

Repo: https://lnkd.in/gV9jC87W

#Python #OpenSource #Pydantic #ExcelAutomation #SoftwareArchitecture #DeveloperTools
☕ Why Choose JSONata? 📺

JSONata is a lightweight, open-source query and transformation language designed specifically to navigate, manipulate, and restructure JSON data. It uses a compact, declarative syntax to extract nested values, filter data, and reshape payloads into new formats, often acting as a powerful alternative to JavaScript or Python for data processing.

🗒️ In summary: JSONata is a powerful way to query and transform JSON structures.

#jsonata #KPI #dashboard #design
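A small illustration of that declarative style (the input document is made up; the expressions use core JSONata syntax). Given {"orders": [{"id": 1, "price": 10}, {"id": 2, "price": 25}]}:

```
$sum(orders.price)       /* 35: aggregate values across the nested array */
orders[price > 15].id    /* 2: filter the array, then project one field  */
```

A single expression replaces the loop-and-accumulate code you would otherwise write in JavaScript or Python.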
Teaching a computer to play "Spot the Difference". 🔍🌳

Same Tree - LeetCode 100 - Easy (Blind 75)

Comparing two binary trees to see if they are identical sounds complex because you have to check both the structure and the values at the exact same time. But recursion makes this surprisingly simple.

The 3 Rules of Inspection: think of the recursive function as a Quality Inspector looking at two items (nodes), one from Tree P and one from Tree Q. The inspector only needs a checklist of 3 rules:
1. Are both spots empty? (if not p and not q:) -> Perfect, they match! Return True.
2. Is only one spot empty? (if not p or not q:) -> A structural mismatch! Return False.
3. Are the values different? (if p.val != q.val:) -> A value mismatch! Return False.

If the two nodes pass all 3 checks, the inspector simply delegates the rest of the work: "These two nodes are fine. Now, go check both of their Left children together, and then check both of their Right children together."

    return self.isSameTree(p.left, q.left) and self.isSameTree(p.right, q.right)

Key Learnings:
1) Simultaneous Traversal: We can recursively traverse two different data structures at the exact same time.
2) The Power of Base Cases: In recursion, your base cases (the 3 if-statements) are your edge-case handlers. Get them right, and the rest of the code writes itself.
3) Short-Circuit Evaluation: The 'and' operator ensures that if the left subtrees fail the check, it won't even bother checking the right subtrees. It immediately fails. Efficiency!

Time and Space Complexity:
- Time Complexity: O(min(N, M)), where N and M are the number of nodes in the trees. We only compare up to the smaller tree before finding a mismatch (or all nodes if they match).
- Space Complexity: O(min(H1, H2)), where H1 and H2 are the heights of the trees. This accounts for the recursive call stack.

What is your favorite way to handle edge cases in Tree problems? Let's discuss in the comments!
👇 #LeetCode #BinaryTrees #Blind75 #DataStructures #Python #Recursion #TechInterviews #CodingJourney #SoftwareEngineering #MCAFreshers
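The three rules above translate almost line-for-line into code (written here as a plain function rather than the LeetCode Solution method):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def is_same_tree(p, q):
    if not p and not q:   # Rule 1: both spots empty -> match
        return True
    if not p or not q:    # Rule 2: only one empty -> structural mismatch
        return False
    if p.val != q.val:    # Rule 3: value mismatch
        return False
    # Delegate: check both left children together, then both right children.
    return is_same_tree(p.left, q.left) and is_same_tree(p.right, q.right)

a = TreeNode(1, TreeNode(2), TreeNode(3))
b = TreeNode(1, TreeNode(2), TreeNode(3))
print(is_same_tree(a, b))  # True
```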
🚀 Excited to share my first Python package, smarteda!

I’ve been working on strengthening my data analysis skills, and as part of that journey, I built and published my own package on PyPI.

🔹 What is smarteda? It’s a simple and beginner-friendly library designed to make Exploratory Data Analysis (EDA) faster and easier.

🔹 Why I built this: instead of just learning concepts, I wanted to create something practical that solves real problems and can be used by others.

🔹 What I learned:
- Structuring a Python package
- Writing reusable code
- Publishing to PyPI
- Thinking like a developer, not just a learner

🔗 PyPI: https://lnkd.in/gA_VzM7K
🔗 GitHub: https://lnkd.in/g8heBZqi

This is just the beginning. I’ll keep improving it and building more tools as I grow in data analytics and machine learning.

👉 Would love your feedback and suggestions!

#Python #DataAnalytics #EDA #MachineLearning #PyPI #OpenSource #LearningInPublic
𝗗𝗮𝘆 𝟱𝟱/𝟳𝟱 | 𝗟𝗲𝗲𝘁𝗖𝗼𝗱𝗲 𝟳𝟱

𝗣𝗿𝗼𝗯𝗹𝗲𝗺: 2542. Maximum Subsequence Score
𝗗𝗶𝗳𝗳𝗶𝗰𝘂𝗹𝘁𝘆: Medium

𝗣𝗿𝗼𝗯𝗹𝗲𝗺 𝗦𝘂𝗺𝗺𝗮𝗿𝘆: You are given two arrays nums1 and nums2 of equal length and an integer k. The task is to select k indices such that the score is maximized, where score = (sum of selected elements from nums1) × (minimum of selected elements from nums2).

𝗠𝘆 𝗔𝗽𝗽𝗿𝗼𝗮𝗰𝗵: This problem is solved using a Greedy + Min Heap approach.
• First, pair up elements of nums2 and nums1 as (efficiency, speed).
• Sort the pairs in descending order of nums2 (efficiency). This ensures that at every step, the current nums2 value is the minimum in the chosen subsequence.
• Use a Min Heap to maintain the k largest values from nums1 (speed).
• Iterate through the sorted pairs: add the current speed to the heap and to a running sum; if the heap size exceeds k, remove the smallest speed; when the heap size equals k, compute score = sum × current efficiency and update the maximum score.

This works because we fix the minimum (nums2) greedily and maximize the sum (nums1) using the heap.

𝗖𝗼𝗺𝗽𝗹𝗲𝘅𝗶𝘁𝘆 𝗔𝗻𝗮𝗹𝘆𝘀𝗶𝘀:
• Time Complexity: O(n log n + n log k), with O(n log n) for sorting and O(n log k) for heap operations.
• Space Complexity: O(n + k)

𝗞𝗲𝘆 𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆: When a problem involves maximizing a function with a minimum constraint, sort by that constraint and use a heap to optimize the remaining component efficiently.

𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻 𝗟𝗶𝗻𝗸: https://lnkd.in/geqU7Ptb

#Day55of75 #LeetCode75 #DSA #Java #Python #MachineLearning #DataScience #ML #DataAnalyst #LearningInPublic #TechJourney #LeetCode
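The greedy + min-heap walk-through above can be sketched in Python (the function name is mine; the printed case is LeetCode's first example):

```python
import heapq

def max_subsequence_score(nums1, nums2, k):
    # Sort pairs by nums2 descending: the current nums2 value is then the
    # minimum of every pair chosen so far.
    pairs = sorted(zip(nums2, nums1), reverse=True)
    heap, total, best = [], 0, 0
    for efficiency, speed in pairs:
        heapq.heappush(heap, speed)       # keep the k largest nums1 values
        total += speed
        if len(heap) > k:
            total -= heapq.heappop(heap)  # evict the smallest speed
        if len(heap) == k:
            best = max(best, total * efficiency)
    return best

print(max_subsequence_score([1, 3, 3, 2], [2, 1, 3, 4], 3))  # 12
```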
Learning about Lists vs Tuples today.

"Use a tuple when the data should NOT change. It is not just a technical choice; it is a communication to other developers."

That framing changed how I see it. In a data pipeline, your schema definition should be a tuple. Why? Because a schema that changes mid-pipeline is a bug. Making it a tuple communicates that contract in the code itself.

Column thresholds? Tuple. Risk bucket boundaries? Tuple. Regulatory field classifications? Tuple. These must NEVER change during a run. Tuple enforces it.

Lists are for things that grow and change: collecting error records, building dynamic filter lists, accumulating batch results.

My new rule: if it should not change, make it a tuple. The immutability is documentation.

A "basic" concept with a deeper principle hiding inside it.

What Python design rules do you follow in your code?

#Python #DataEngineering #LearningInPublic #CleanCode #CodingTips
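A tiny stdlib sketch of the rule (the schema fields and rows are invented):

```python
SCHEMA = ("trade_id", "amount", "currency")  # contract: must not change mid-run
errors = []                                  # accumulator: expected to grow

for row in [("T1", -5, "EUR"), ("T2", 10, "USD")]:
    if row[1] < 0:
        errors.append(row)   # lists collect; tuples declare

try:
    SCHEMA[0] = "id"         # Python enforces the contract at runtime
except TypeError:
    print("schema is immutable")
```

The tuple is both documentation and a runtime guarantee: any code that tries to rewrite the schema fails loudly instead of silently corrupting the pipeline.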
JSON Schemas: Pydantic Models for Trade Records What You Will Build Today By the end of this lesson you will have a working system that takes raw, unpredictable JSON from a real broker API and turns it into clean, guaranteed-correct Python objects — the kind of foundation that every serious trading system depends on. You will also understand why getting this wrong destroys accounts, and exactly how to get it right. https://lnkd.in/dAc7jR3Q
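The idea can be sketched with Pydantic v2 (the field names and payload are my assumptions, not the broker's actual schema; field_validator and model_validate are real Pydantic v2 APIs):

```python
from decimal import Decimal
from datetime import datetime
from pydantic import BaseModel, field_validator

class Trade(BaseModel):
    symbol: str
    qty: Decimal        # Decimal, not float: rounding errors move real money
    price: Decimal
    executed_at: datetime

    @field_validator("qty", "price")
    @classmethod
    def must_be_positive(cls, v):
        if v <= 0:
            raise ValueError("must be positive")
        return v

# Raw JSON strings are coerced and checked, or a ValidationError is raised.
trade = Trade.model_validate(
    {"symbol": "AAPL", "qty": "10", "price": "187.31",
     "executed_at": "2024-05-01T14:30:00Z"}
)
```

Every record downstream of this boundary is guaranteed to have the right types, which is the point of the lesson.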
prefetch_related solves N+1, but it fetches everything at once. In a real system, that's often not what's needed.

Examples:
- Fetching all orders for a user when only the last 5 are displayed.
- Fetching all items in an order when only active ones matter.

prefetch_related alone can't do this. It has no way to filter the prefetched queryset. This is exactly what the custom Prefetch() object is for.

What Prefetch() adds:
1. The Prefetch() object wraps a relationship and gives you full control over the prefetch query: filter the related queryset, order it, annotate it.
2. All of this happens in a single additional query, not one per row. The N+1 fix stays intact.
3. to_attr can be used to store the results under a custom name. This matters when the same relationship needs to be prefetched in two different ways simultaneously.

The real gains:
→ Prefetch() makes the ORM behave like a data pipeline.
→ It fetches exactly the right data, shaped correctly, in the minimum number of queries.
→ No post-processing in Python. No filtering after the fact.

Most Django codebases never reach for Prefetch() and pay the cost of over-fetching data at scale.

I'm deep-diving into Django internals and performance. Follow along and share your experiences in the comments.

#Python #Django #DjangoInternals #SoftwareEngineering #BackendDevelopment
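An illustrative sketch (Order and OrderItem are invented models; Prefetch, queryset, and to_attr are the real Django ORM API):

```python
from django.db.models import Prefetch

orders = Order.objects.prefetch_related(
    Prefetch(
        "items",                                        # the relationship to prefetch
        queryset=OrderItem.objects.filter(active=True)  # filter the related queryset
                                  .order_by("position"),
        to_attr="active_items",                         # store under a custom name
    )
)

# Two queries total: one for orders, one for all matching items.
for order in orders:
    for item in order.active_items:  # no extra query, no Python-side filtering
        ...
```

Because the filtered results land on active_items, the unfiltered items manager stays available, so the same relationship can be prefetched a second way in the same query if needed.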