Unused Database Indexes Hurt Performance

Adding an index makes things fast. But keeping the wrong ones makes everything slower. 💸

I ran into this while debugging slow writes. SELECT queries were fast, but INSERT and UPDATE kept getting slower as data grew. Nothing was wrong with Django. The issue was in the database: unused indexes.

Every index is updated on every write, even if no query ever uses it. So you end up paying a performance cost for nothing.

More indexes ≠ better performance. Sometimes, it is the opposite.

Golden rule 👇
If an index is not being used → it is hurting your writes.

How to check:
● django-debug-toolbar
● pg_stat_user_indexes

Find the “zombie indexes” and remove them. Because optimization is not just adding things. It is removing what no longer matters.

When was the last time you audited your database indexes?

#Django #Python #BackendDevelopment #SoftwareEngineering #WebDevelopment #Database #PostgreSQL #PerformanceOptimization #CodingTips #TechCareer
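The post points at pg_stat_user_indexes; here is a minimal sketch of querying it from Django, assuming a PostgreSQL backend. The view and its columns are standard PostgreSQL, but the zero-scan filter and the helper function are illustrative choices, not something from the original post.

```python
# Minimal sketch (assumes PostgreSQL): list indexes that have never been
# scanned according to pg_stat_user_indexes. Verify against a representative
# workload, and be careful with indexes that back primary keys or unique
# constraints before dropping anything.
from django.db import connection

def find_unused_indexes():
    with connection.cursor() as cursor:
        cursor.execute("""
            SELECT relname      AS table_name,
                   indexrelname AS index_name,
                   idx_scan     AS scans
            FROM pg_stat_user_indexes
            WHERE idx_scan = 0
            ORDER BY relname, indexrelname;
        """)
        return cursor.fetchall()

for table, index, scans in find_unused_indexes():
    print(f"{table}.{index}: {scans} scans")
```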
More Relevant Posts
One extra line in your Django queryset can turn a 20ms API into a 2s one. ⚡

I ran into this while optimizing an API that was fetching related data inside a loop. Everything worked, but response time kept increasing as data grew. The issue was the classic N+1 query problem.

Django gives you two powerful tools to solve it.

🔹 select_related
Uses a SQL JOIN
Best for ForeignKey or OneToOne
Fetches everything in a single query

🔹 prefetch_related
Runs separate queries and combines the results in Python
Best for ManyToMany or reverse relationships
Prevents large JOIN explosions

The key difference is not syntax. It is how the data is fetched.

Golden rule 👇
Single-valued relationship → select_related
Multi-valued relationship → prefetch_related

Getting this wrong does not just slow things down. It can quietly break performance as your data grows. In my experience, fixing N+1 queries is one of the fastest ways to improve API performance in Django.

Which one do you end up using more in your projects?

#Django #Python #PostgreSQL #BackendDevelopment #DatabaseOptimization #SoftwareEngineering
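To make the difference concrete, here is a hedged sketch using hypothetical Book/Author/Tag models that are not from the post; the queryset calls themselves are standard Django ORM.

```python
# Illustrative models (hypothetical names, not from the original post).
from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=100)

class Tag(models.Model):
    name = models.CharField(max_length=50)

class Book(models.Model):
    title = models.CharField(max_length=200)
    author = models.ForeignKey(Author, on_delete=models.CASCADE)
    tags = models.ManyToManyField(Tag)

# N+1: one query for the books, plus one extra query per book.author access.
books = Book.objects.all()

# select_related: a single query with a JOIN, for ForeignKey / OneToOne.
books = Book.objects.select_related("author")

# prefetch_related: one query for books, one for tags, combined in Python.
books = Book.objects.prefetch_related("tags")
```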
Django's only() and defer() methods are often overlooked, yet they are essential for optimizing memory usage when fetching data from the database. Every field retrieved from the database consumes memory, and this can become significant with large models.

Consider the following examples.

Fetching all fields when only two are needed:

```python
users = User.objects.all()
```

Instead, fetch only the necessary fields:

```python
users = User.objects.only('id', 'email')
```

Alternatively, defer the heavy fields that are not immediately required:

```python
users = User.objects.defer('bio', 'profile_picture')
```

For instance, with a User model that includes a TextField for bio and an ImageField for profile picture, fetching 10,000 users for an email report can lead to significant memory savings. Using only('id', 'email') reduced memory usage by 60% for that query alone.

When to use which method:
- Use only() when you know exactly which fields you need.
- Use defer() when you want to retrieve everything except a few heavy fields.

This small change can lead to a big impact at scale. 🚀

#Django #Python #DjangoORM #BackendPerformance #PythonDev #BackendDevelopment #HappyLearning
Day 3: SQLite, SQLAlchemy & Alembic

CRUD needs some memory, and that's what Day 3 is about. We swap out the in-memory Python list for a real SQLite database, adding SQLAlchemy as the ORM and Alembic to manage schema changes over time. If you've used Eloquent + Laravel migrations, the mental model is nearly identical.

Here's what we cover:
→ Setting up SQLAlchemy's three core pieces: engine, session factory, and declarative base
→ Defining ORM models with a one-to-many relationship (tasks → notes) and cascade deletes
→ Wiring Alembic so autogenerate actually works (the __init__.py trick that trips everyone up)
→ The session lifecycle: why a forgotten db.commit() is the single most common mistake
→ Running a real schema migration: adding a column to a live database without touching the file directly

The part worth highlighting: the service/route separation from Day 2 paid off immediately. The routes layer barely changed, and the database swap was almost entirely contained to the service layer. That's the point of keeping them separate.

By the end, you stop the server, restart it, and your data is still there.

👉🏼 Read the full article - link is in the comments.

#Python #Starlette #SQLAlchemy #WebDevelopment #BackendDevelopment
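This sketch is not taken from the article; it is just a minimal illustration of the three core pieces and the tasks → notes relationship, assuming SQLAlchemy 2.x-style declarative mapping. File, table, and column names are illustrative.

```python
# Minimal sketch (assumes SQLAlchemy 2.x). Names and columns are illustrative.
from sqlalchemy import ForeignKey, String, create_engine
from sqlalchemy.orm import (DeclarativeBase, Mapped, mapped_column,
                            relationship, sessionmaker)

engine = create_engine("sqlite:///tasks.db")      # 1. engine
SessionLocal = sessionmaker(bind=engine)          # 2. session factory

class Base(DeclarativeBase):                      # 3. declarative base
    pass

class Task(Base):
    __tablename__ = "tasks"
    id: Mapped[int] = mapped_column(primary_key=True)
    title: Mapped[str] = mapped_column(String(200))
    # One-to-many with cascade: deleting a task also deletes its notes.
    notes: Mapped[list["Note"]] = relationship(
        back_populates="task", cascade="all, delete-orphan"
    )

class Note(Base):
    __tablename__ = "notes"
    id: Mapped[int] = mapped_column(primary_key=True)
    body: Mapped[str] = mapped_column(String(500))
    task_id: Mapped[int] = mapped_column(ForeignKey("tasks.id"))
    task: Mapped["Task"] = relationship(back_populates="notes")

# Typical session lifecycle: forgetting commit() is the classic mistake.
with SessionLocal() as db:
    task = Task(title="Write Day 3 post", notes=[Note(body="draft outline")])
    db.add(task)
    db.commit()
```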
I just published a piece on something I keep seeing in Python APIs: using SQLAlchemy by default, even when it’s not needed.

After working more directly with PostgreSQL, I started questioning this habit. Because the database is not just storage: it’s a core part of performance and system behavior.

In many APIs, especially simple or performance-critical ones, I’ve found that:
- ORM adds unnecessary abstraction
- raw SQL gives better control over query shape
- PostgreSQL features are easier to leverage directly

And in some cases, it actually improves performance due to lower overhead.

So I wrote about:
->> when ORM makes sense
->> when it becomes overengineering
->> and why I prefer asyncpg + raw SQL in many cases

Do you stick with ORM everywhere, or go raw SQL when performance matters?

https://lnkd.in/dzZ7xvCS

#python #postgresql #fastapi #backend #softwareengineering
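This is not from the linked article; it is only a small sketch of the asyncpg + raw SQL style the post describes. The users table, its columns, and the DSN are hypothetical.

```python
# Minimal asyncpg sketch (table name and DSN are hypothetical).
import asyncio

import asyncpg

async def fetch_active_users(dsn: str):
    conn = await asyncpg.connect(dsn)
    try:
        # Parameterized raw SQL: full control over the query shape.
        rows = await conn.fetch(
            "SELECT id, email FROM users WHERE is_active = $1 ORDER BY id LIMIT 100",
            True,
        )
        return [dict(r) for r in rows]
    finally:
        await conn.close()

if __name__ == "__main__":
    users = asyncio.run(fetch_active_users("postgresql://localhost/mydb"))
    print(users)
```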
PydanTable 1.17.0 has been released, and MongoDB is now officially part of the story.

This release introduces an optional MongoDB execution engine, allowing work to remain on the MongoDB database side when supported. This means you can materialize data only when it is actually needed in the application, rather than pulling full result sets into Python first.

Additionally, this version adds integration with Beanie, a popular Python ODM (object-document mapper) for MongoDB built on Pydantic. If your application already models MongoDB documents with Beanie, PydanTable can integrate seamlessly with that layer, ensuring your document models and typed, table-shaped workflow remain aligned without the need for a parallel schema.

For more details, check out the documentation and release notes:
- PyPI: https://lnkd.in/ez4NZMjT
- Documentation: https://lnkd.in/eV4RTqZQ
- Repository: https://lnkd.in/eVpjrcRX

#Python #Pydantic #MongoDB #DataEngineering #OpenSource
𝐖𝐡𝐚𝐭 𝐢𝐟 𝐲𝐨𝐮 𝐧𝐞𝐯𝐞𝐫 𝐡𝐚𝐝 𝐭𝐨 𝐰𝐫𝐢𝐭𝐞 𝐫𝐚𝐰 𝐒𝐐𝐋 𝐚𝐠𝐚𝐢𝐧?

That's Django's ORM (Object-Relational Mapper) in a nutshell.

An ORM lets you interact with your database using Python instead of SQL. You define your data as Python classes (called models), and Django handles the database queries behind the scenes.

Here's a simple example. Instead of writing:

SELECT * FROM blog_post WHERE author_id = 1;

You write:

Post.objects.filter(author_id=1)

Same result. Pure Python. No SQL dialect to worry about.

What makes the Django ORM powerful:
→ Works with PostgreSQL, MySQL, SQLite, Oracle: same code, different DB
→ Migrations: change your model, run a command, and the database updates automatically
→ QuerySets that are chainable, lazy, and incredibly readable
→ Relationships like ForeignKey, ManyToMany, OneToOne, all handled elegantly

I've worked with Oracle 19c and PostgreSQL in production. The ORM made switching between them painless; it required just a config change.

The caveat? For very complex queries, raw SQL is still available. The ORM doesn't replace SQL knowledge. It reduces how often you need it.

If you're building data-heavy apps, learning the Django ORM deeply is one of the best investments you can make.

Tomorrow: Django REST Framework, turning Django into an API powerhouse.

#Django #ORM #Python #BackendDevelopment #Database
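A small hedged sketch of the pattern described above; the Post/Author models are hypothetical, but the chaining and laziness behave as shown.

```python
# Illustrative models (hypothetical names, mirroring the blog_post example).
from django.db import models

class Author(models.Model):
    name = models.CharField(max_length=100)

class Post(models.Model):
    author = models.ForeignKey(Author, on_delete=models.CASCADE)
    title = models.CharField(max_length=200)
    published = models.BooleanField(default=False)
    created_at = models.DateTimeField(auto_now_add=True)

# QuerySets are chainable and lazy: no SQL runs until the results are needed.
posts = (
    Post.objects.filter(author_id=1, published=True)
    .order_by("-created_at")[:10]
)
for post in posts:  # the query executes here
    print(post.title)
```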
🚀 Introducing numpy2: Production-Ready NumPy

Just launched numpy2 on PyPI: the library that solves the #1 problem NumPy developers face when building APIs.

The Problem:

json.dumps(np.array([1, 2, 3]))  # ❌ TypeError: Object of type ndarray is not JSON serializable

This happens constantly in production. NumPy types break JSON serialization, forcing developers to write messy custom encoders.

The Solution:

np2.to_json(arr)  # ✅ '[1, 2, 3]' – Done!

⚡ Key Features:
✓ Zero-config JSON serialization for NumPy & pandas
✓ FastAPI, Flask, Django integration out-of-the-box
✓ Type-safe conversions (3.75x faster than manual code)
✓ Handles edge cases: NaN, Infinity, complex numbers
✓ MIT licensed, production-ready

Why numpy2 stands out:
• 1 import vs. 20 lines of boilerplate
• Solves real-world pain points, not hypothetical ones
• Used in high-traffic APIs handling millions of requests
• Intuitive API – NumPy developers instantly understand it

📦 Install: pip install numpy2
📖 Docs: https://lnkd.in/djnmp7Jh

Whether you're building data science APIs, ML pipelines, or real-time analytics platforms, numpy2 eliminates friction and lets you focus on your core logic.

Drop a ⭐ on GitHub if this solves a problem you've faced!

#NumPy #Python #WebDevelopment #FastAPI #DataScience #OpenSource #API #JSON #Serialization #Python3 #SoftwareDevelopment #GitHub #PyPI #DataEngineering #BackendDevelopment
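For context, this is the kind of hand-rolled encoder boilerplate the post refers to. It is a standard-library sketch, not part of the numpy2 API.

```python
# The "messy custom encoder" approach: a hand-rolled json.JSONEncoder that
# converts NumPy types to plain Python before serialization.
import json

import numpy as np

class NumpyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.integer):
            return int(obj)
        if isinstance(obj, np.floating):
            return float(obj)
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        return super().default(obj)

print(json.dumps({"values": np.array([1, 2, 3]), "mean": np.float64(2.0)},
                 cls=NumpyEncoder))
# {"values": [1, 2, 3], "mean": 2.0}
```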
UDFs vs. Native Functions: Don’t Reinvent the Wheel (It Just Gets Slower)!

Are you a Python master who loves creating custom functions (UDFs) to apply to your PySpark DataFrames? 🛑 Stop right now!

Although UDFs may seem like the perfect solution, they are Spark performance’s “kryptonite” because they force data serialization between the JVM (Java) and Python, killing speed.

The evolution of PySpark has brought hundreds of native functions in the pyspark.sql.functions module. Using native functions instead of Python UDFs can make your code 10 to 100 times faster, since they run directly on the JVM with the Catalyst Optimizer (Source: Spark Performance Benchmark, 2022).

Want a practical example? Instead of creating a UDF to convert strings to uppercase and add a value, look for native functions such as upper(), concat(), when(), col(). Almost everything you need already exists in an optimized form!

Leave UDFs only for extremely complex business logic that cannot be solved with built-in functions. It’s the difference between riding a bicycle and taking a rocket!

What was the most complex UDF you managed to replace with native functions? Tell us in the comments!

#DataEngineering #PySpark #Productivity
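A hedged side-by-side sketch of the uppercase-and-concatenate example mentioned above; the DataFrame contents and column names are illustrative.

```python
# Side-by-side sketch: Python UDF vs. native functions (illustrative data).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("udf-vs-native").getOrCreate()
df = spark.createDataFrame([("alice", "nyc"), ("bob", "sf")], ["name", "city"])

# Slow path: a Python UDF forces row-by-row serialization between JVM and Python.
label_udf = F.udf(lambda name, city: f"{name.upper()} ({city})", StringType())
df.withColumn("label", label_udf("name", "city")).show()

# Fast path: native functions stay on the JVM and go through the Catalyst optimizer.
df.withColumn(
    "label",
    F.concat(F.upper(F.col("name")), F.lit(" ("), F.col("city"), F.lit(")")),
).show()
```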
🚀 Day 14: Database Connectivity in Python

In real-world applications, data needs to be stored, managed, and retrieved efficiently. 👉 That’s where databases come in. Python allows us to connect with databases and perform operations like storing, updating, and retrieving data.

🔹 Common Databases:
✔ SQLite (lightweight, built-in)
✔ MySQL (widely used in web applications)

🔹 Basic Operations:
✔ Insert data
✔ Fetch data
✔ Update records
✔ Delete records

💡 Example (SQLite):

```python
import sqlite3

conn = sqlite3.connect("students.db")
cursor = conn.cursor()
cursor.execute("CREATE TABLE IF NOT EXISTS students (name TEXT)")
cursor.execute("INSERT INTO students (name) VALUES ('Ali')")
conn.commit()
conn.close()
```

📌 Why it matters? Every application, from small apps to large systems, depends on databases.
✔ User data storage
✔ Authentication systems
✔ Transaction records

Frameworks like Django make database handling even more powerful using an ORM.

💡 A strong developer not only writes logic but also manages data efficiently.
📈 Step by step, building real-world backend skills.

#Python #Database #BackendDevelopment #Programming #Developers #Django #SQL #LearningJourney
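To round out the fetch, update, and delete operations listed above, here is a short continuation of the same sqlite3 example; the updated values are illustrative.

```python
# Continuing the students.db example: fetch, update, and delete.
import sqlite3

conn = sqlite3.connect("students.db")
cursor = conn.cursor()

# Fetch data
cursor.execute("SELECT name FROM students")
print(cursor.fetchall())

# Update records (parameterized queries avoid SQL injection)
cursor.execute("UPDATE students SET name = ? WHERE name = ?", ("Ali Khan", "Ali"))

# Delete records
cursor.execute("DELETE FROM students WHERE name = ?", ("Ali Khan",))

conn.commit()
conn.close()
```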
𝗢𝗻𝗲 𝘀𝗺𝗮𝗹𝗹 𝗲𝗿𝗿𝗼𝗿 𝘁𝗮𝘂𝗴𝗵𝘁 𝗺𝗲 𝗮 𝗯𝗶𝗴 𝗹𝗲𝘀𝘀𝗼𝗻 𝗮𝗯𝗼𝘂𝘁 𝗱𝗲𝗯𝘂𝗴𝗴𝗶𝗻𝗴 🚀

Today, while solving a SQL problem on LeetCode, I ran into this:

TypeError: write() argument 1 must be unicode, not str

At first glance, it looked like I had completely messed up. I rechecked my query. Tried different approaches. Still the same error.

Then I realized the real issue: I had used ROWID, which works perfectly in Oracle. But LeetCode runs on MySQL, where ROWID doesn’t exist. And instead of a clear SQL error, it threw a Python error. That’s what made it confusing.

That moment taught me something important: not all errors mean your logic is wrong. Sometimes, you just need to understand the environment you’re working in.

Debugging isn’t only about fixing code. It’s about thinking deeper and asking the right questions.

Back to learning 🚀

#SQL #Debugging #LeetCode #BackendDevelopment #LearningJourney #ProblemSolving