If your Django API is slow, check this first: count your database queries.

A view that looks clean can still execute:
• 1 main query
• N queries for related objects
• M queries inside serializers

Suddenly one API call = 150+ DB hits.

Before adding caching:
• Use select_related
• Use prefetch_related
• Add proper DB indexes
• Analyze query plans

Most performance issues aren’t solved by Redis. They’re solved by understanding your ORM.

#Django #Python #ORM #DatabaseOptimization #BackendEngineering #PerformanceTuning
Optimize Django API Performance with select_related and prefetch_related
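The N+1 pattern described above can be sketched without Django at all. Below is a hedged stand-in using stdlib sqlite3 (the author/book schema is invented for the demo); a Django select_related call compiles to essentially the JOIN at the end:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book (id INTEGER PRIMARY KEY, title TEXT, author_id INTEGER);
    INSERT INTO author VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO book VALUES (1, 'A', 1), (2, 'B', 1), (3, 'C', 2);
""")

# N+1: one query for the books, then one more per book for its author.
books = conn.execute("SELECT title, author_id FROM book").fetchall()
authors = [conn.execute("SELECT name FROM author WHERE id = ?", (aid,)).fetchone()[0]
           for _, aid in books]
n_plus_one_queries = 1 + len(books)          # 4 round trips for 3 books

# What select_related effectively does: a single JOIN, one round trip.
joined = conn.execute("""
    SELECT book.title, author.name
    FROM book JOIN author ON author.id = book.author_id
""").fetchall()
print(n_plus_one_queries, len(joined))
```

Three books is harmless; three thousand is the 150+ DB hits the post warns about, and the JOIN version stays at one query either way.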
🐍 MongoEngine Installation – Python ODM for MongoDB

MongoEngine is an Object Document Mapper (ODM) for MongoDB that lets developers interact with MongoDB using Python classes and objects instead of raw queries. Before installing MongoEngine, MongoDB must already be installed and the MongoDB server should be running.

The easiest way to install MongoEngine is with pip:

pip3 install mongoengine

If your Python installation does not include Setuptools, you can download MongoEngine manually and install it with:

python setup.py install

MongoEngine relies on a few dependencies, including:
• pymongo ≥ 3.4
• dnspython (needed for mongodb+srv connection URIs)

After installation, verify that MongoEngine is correctly installed by checking its version in Python:

import mongoengine
print(mongoengine.__version__)

If the installation succeeded, Python will display the installed version (for example 0.29.1).

💡 MongoEngine simplifies MongoDB development in Python by providing an object-oriented approach for defining schemas, performing queries, and managing database operations.

#Python #MongoDB #MongoEngine #Database #NoSQL #PythonProgramming #DataEngineering #BackendDevelopment #AshokIT
**Rate Limiting in APIs** 🚦

**Why Rate Limiting is Critical for APIs**

While building production APIs, one challenge is preventing abuse and ensuring fair usage. That’s where **Rate Limiting** comes in.

🔹 What is Rate Limiting?
It restricts the number of requests a client can make within a time window.
Example: 100 requests per minute per user.

🔹 Why it matters:
✔ Prevents API abuse
✔ Protects backend resources
✔ Avoids DDoS-style overload
✔ Ensures fair access for all users

🔹 Common techniques:
• Token Bucket
• Fixed Window Counter
• Sliding Window

🔹 Implementation in Python: FastAPI + Redis or middleware-based throttling.

In production systems, rate limiting is often handled via:
• API Gateways
• Nginx
• Redis-based throttling

Have you implemented rate limiting in your APIs? Curious to hear how others solve it. 👇

#BackendEngineering #Python #FastAPI #APIDesign #RateLimiting
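The first technique on that list, the token bucket, fits in a few lines of plain Python. A minimal sketch (the class and parameter names are my own, not from any specific library; production systems would keep this state in Redis, not in-process):

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilling at `rate` tokens/sec."""
    def __init__(self, capacity: int, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, rate=1.0)   # 3-request burst, 1 req/s refill
results = [bucket.allow() for _ in range(5)]
print(results)  # first 3 pass, the rest are throttled until tokens refill
```

The same `allow()` check is what an ASGI middleware or API gateway would run per client key before forwarding the request.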
I was teaching a class on serverless backends: Lambda, DynamoDB, API Gateway. The code was clean. The logic was simple: read from the database, update a value, return the result.

We hit Test.

"Object of type Decimal is not JSON serializable."

Everything looked fine. The DynamoDB item was there. The number was a number. So why was Python refusing to serialize it?

Here is what nobody tells you: DynamoDB does not return integers. It returns Decimal objects. Python's json.dumps cannot serialize Decimal, so it breaks.

The fix is one word: int()

15 minutes debugging. 3 seconds to fix once you know.

#AWS #DynamoDB #Lambda #Serverless #Python
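The failure reproduces with the stdlib alone, no AWS needed (the dict below is a hand-built stand-in for a boto3 DynamoDB item). The one-word `int()` fix works for whole numbers; a `default=` hook is the more general version:

```python
import json
from decimal import Decimal

# boto3 returns DynamoDB numbers as Decimal; simulate that here.
item = {"user_id": "u1", "score": Decimal("42")}

try:
    json.dumps(item)
except TypeError as e:
    print(e)  # Object of type Decimal is not JSON serializable

# Fix 1: the one-word fix, fine when the value is a whole number.
payload = json.dumps({**item, "score": int(item["score"])})

# Fix 2: a default= hook that also survives non-integer Decimals.
def decimal_default(obj):
    if isinstance(obj, Decimal):
        return int(obj) if obj == obj.to_integral_value() else float(obj)
    raise TypeError(f"Not serializable: {type(obj)}")

payload2 = json.dumps(item, default=decimal_default)
print(payload2)
```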
In PySpark architecture, the driver node contains a container-like component called the Application Master. Inside it is the Application Driver, which runs on the JVM and has the standard main() function. Code written in Java or Scala runs directly on this JVM driver, so no additional layer is needed.

When we write code in Python (PySpark), however, a separate PySpark driver is required. This Python driver communicates with the JVM-based driver, and the interaction between them is handled by Py4J, which enables conversion and communication between Python and JVM objects.

On the worker node side, things are simpler. Each worker primarily runs a JVM process that executes tasks sent by the driver. When Python-specific logic is involved, such as UDFs (User Defined Functions), a Python worker (or wrapper) is also started alongside the JVM to execute the Python code.

So in summary: in PySpark the driver involves both Python and JVM layers connected via Py4J, while workers are mostly JVM-based, with Python processes added only when needed (e.g., for UDFs).
Important updates just landed across the Apache Kafka client ecosystem. Async support for Python is now GA, improved Avro support for Schema Registry and more. These updates make it easier to run Kafka clients reliably in production. Check them out below! ⤵️
🚀 CRUD Operations with FastAPI & PostgreSQL

Built a simple and powerful backend using FastAPI + PostgreSQL to perform:
✔️ Create
✔️ Read
✔️ Update
✔️ Delete

⚡ FastAPI for speed & validation
📊 PostgreSQL for reliable data storage

💡 CRUD is the backbone of every backend system: master it to build real-world applications.

#FastAPI #Python #PostgreSQL #CRUD #BackendDevelopment
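Without the repo at hand, the four operations can be sketched with stdlib sqlite3 as a stand-in (the `notes` table is invented; in the real project each block would live in a FastAPI path function and PostgreSQL would replace SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, text TEXT)")

# Create
cur = conn.execute("INSERT INTO notes (text) VALUES (?)", ("hello",))
note_id = cur.lastrowid

# Read
text = conn.execute("SELECT text FROM notes WHERE id = ?", (note_id,)).fetchone()[0]

# Update
conn.execute("UPDATE notes SET text = ? WHERE id = ?", ("hello, world", note_id))

# Delete
conn.execute("DELETE FROM notes WHERE id = ?", (note_id,))
remaining = conn.execute("SELECT COUNT(*) FROM notes").fetchone()[0]
print(text, remaining)
```

In the FastAPI version, each operation maps to a verb: POST for create, GET for read, PUT/PATCH for update, DELETE for delete, with Pydantic models doing the validation.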
Most Python APIs in production still default to gzip. But Python 3.14 quietly removed the biggest reason for that habit.

For years, backend engineers knew Zstandard (zstd) was better than gzip in both compression ratio and decompression speed. The real issue wasn’t performance. It was deployment friction.

Before Python 3.14, using zstd meant installing third-party C extensions, which created problems for modern cloud deployments:
• Larger Docker images
• Complicated CI/CD builds
• Painful AWS Lambda dependencies
• Cross-platform compilation issues

So most teams stayed with gzip because it was built in and “good enough”.

Python 3.14 changes this. The standard library now includes native Zstandard support: compression.zstd. No external packages. No native builds. Just import and use it.

Why this matters for production APIs right now:

1. Smaller API responses
Large JSON payloads compress significantly better than with gzip, reducing network transfer time.

2. Faster client decompression
Zstd decompresses faster, improving frontend rendering and perceived response time.

3. Lower infrastructure costs
Better compression reduces S3 storage, bandwidth, and egress costs for high-traffic systems.

4. Cleaner serverless deployments
Lambda and container deployments no longer require bundled C extensions.

Sometimes meaningful performance improvements don’t come from new frameworks. They come from removing the friction that prevented better tools from being used.

💬 Curious to hear from others: would you migrate your API compression from gzip to zstd now that it’s part of the Python standard library?

(FastAPI code example is in the first comment)

#Python #BackendEngineering #FastAPI #SoftwareArchitecture #CloudComputing #AWS #APIDesign #PerformanceEngineering
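A minimal sketch of the migration: `compression.zstd` is the new stdlib module (PEP 784, Python 3.14+), and the ImportError fallback keeps the code running on older interpreters. The JSON payload is invented for the demo:

```python
import gzip
import json

# Invented payload: 1,000 small JSON records, the kind of list-endpoint
# response where the choice of compressor actually matters.
payload = json.dumps([{"id": i, "name": f"user-{i}"} for i in range(1000)]).encode()

gz = gzip.compress(payload)
sizes = {"raw": len(payload), "gzip": len(gz)}

try:
    from compression import zstd   # stdlib on Python 3.14+ (PEP 784)
    sizes["zstd"] = len(zstd.compress(payload))
except ImportError:
    pass                           # older interpreters: keep serving gzip

print(sizes)
```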
Most performance problems are not CPU problems. They are database problems.

Early in my career, when something was slow, my instinct was to look at the Python code. Later I learned the real bottleneck was almost always:
• Too many queries
• Inefficient joins
• Missing indexes
• Loading more data than necessary

A single endpoint can look perfectly fine in code and still generate hundreds of database queries.

A few habits that made a big difference for me:
• Always checking query counts during development
• Using select_related and prefetch_related intentionally
• Avoiding loading full objects when only a few fields are needed
• Being careful with nested serializers in APIs

Django’s ORM is incredibly productive. But performance comes from understanding what SQL is actually being executed behind the scenes. The ORM abstracts the database. It does not eliminate it.

#Django #Python #BackendEngineering #PerformanceOptimization #DatabasePerformance #SoftwareEngineering
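The first habit, checking query counts during development, can be sketched without Django: stdlib sqlite3 exposes a trace hook that plays the role Django's `connection.queries` (populated when DEBUG=True) or django-debug-toolbar plays in a real project. The `item` table is invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO item (name) VALUES (?)", [("a",), ("b",), ("c",)])

executed = []                        # every SQL statement the "endpoint" runs
conn.set_trace_callback(executed.append)

# Simulate one request handler: a hidden N+1 in two innocent-looking lines.
for (item_id,) in conn.execute("SELECT id FROM item").fetchall():
    conn.execute("SELECT name FROM item WHERE id = ?", (item_id,))

conn.set_trace_callback(None)
print(f"{len(executed)} queries for one request")
```

Counting like this during development is what surfaces the gap between "looks fine in code" and "hundreds of queries in production".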
🐍 I ported a Java Terraform metrics tool to Python and validated it against 5,594 code blocks to make sure everything was perfect.

🔧 I needed Infrastructure as Code (IaC) quality metrics as reward signals for fine-tuning LLMs. TerraMetrics had exactly what I wanted, except it was a .jar file in a Python pipeline.

⚡ So I built pyterametrics. I validated it against 3 open-source Terraform repos and found:
• 97.59% block identity match (and interesting reasons why the rest didn’t match)
• Zero metric discrepancies on matched blocks

📊 See the full write-up on:
Medium: https://lnkd.in/ghcasE4q
My site: https://lnkd.in/gVVWhhGi

You can use it today with pip install pyterametrics!
Python MongoDB Authorization API Cuts P99 Latency from 3.2s to 650ms

📌 A Python-based authorization API slashed its P99 latency by 80%, from 3.2 seconds to just 650ms, by optimizing MongoDB queries, connection pooling, and asynchronous I/O. The result highlights how targeted database tuning can dramatically improve performance in high-concurrency systems without overhauling the architecture.

🔗 Read more: https://lnkd.in/dc2kCurv

#Python #MongoDB #AuthorizationAPI #LatencyOptimization #PolicyDriven
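The article link has the specifics; the asynchronous-I/O part of the win can be sketched with stdlib asyncio alone, using `asyncio.sleep` as a stand-in for a MongoDB round trip (the 50ms latency and the count of 10 calls are invented for the demo):

```python
import asyncio
import time

async def fake_db_call(i):
    await asyncio.sleep(0.05)        # stand-in for one MongoDB round trip
    return i

async def sequential():
    # Latencies add up: ~10 x 50ms.
    return [await fake_db_call(i) for i in range(10)]

async def concurrent():
    # Round trips overlap: ~1 x 50ms for all ten.
    return await asyncio.gather(*(fake_db_call(i) for i in range(10)))

t0 = time.perf_counter()
asyncio.run(sequential())
seq = time.perf_counter() - t0

t0 = time.perf_counter()
results = asyncio.run(concurrent())
con = time.perf_counter() - t0

print(f"sequential {seq:.2f}s, concurrent {con:.2f}s")
```

With a real driver the same shape applies: independent lookups issued through an async client (e.g. Motor) overlap instead of queuing, which is exactly where high-concurrency P99 tails shrink.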