Day 130-131 📘 Python Full Stack Journey – Django Query Filtering 🔍

Today I explored how to filter data in Django using QuerySets, which is a powerful way to retrieve specific records from the database. 🚀

🎯 What I learned today:

🔎 Django Filter Queries
Used different filtering techniques on the Employee model:
- startswith → fetch records where a field starts with a value
- endswith → fetch records where a field ends with a value
- icontains → case-insensitive search within a field
Example: Employee.objects.filter(fullname__startswith='A')

📊 Displaying Filtered Data
- Passed multiple filtered datasets from views → template
- Used Django template loops to display results dynamically

⚙️ Multiple Filters in One View
- Combined multiple queries in a single function
- Even filtered data from different models on one page

💡 This makes it easy to build features like search, filtering, and categorization in web applications.

Django User Profile (One-to-One Relationship)
Today I implemented a User Profile system in Django using a one-to-one relationship, taking a big step toward building personalized user experiences.

👤 One-to-One Relationship
- Created a Profile model linked to Django's built-in User model
- Ensured one user → one profile using:
  user = models.OneToOneField(User, on_delete=models.CASCADE)
- Understood how Django handles relationships: one-to-one, one-to-many, and many-to-many

🗄️ Profile Model Fields
- Added fields like bio, location, birth date, and profile image
- Used blank=True and null=True for optional fields
- Used ImageField for uploading profile pictures

🌐 Profile Display & Editing
- Displayed logged-in user details using {{ request.user.username }}
- Created a profile page and an edit form page
- Used get_or_create() to automatically create a profile if it doesn't exist

🔐 Access Control
- Used the @login_required decorator to restrict access to logged-in users only

📸 Handling File Uploads
- Used enctype="multipart/form-data" for image uploads
- Displayed uploaded images dynamically in templates

This session helped me understand how to build user-specific features and profiles, along with how Django makes data retrieval flexible and efficient, both of which are essential for real-world applications like social platforms, search systems, and dashboards. Excited to keep building more personalized and dynamic applications while exploring advanced queries next! 💻✨

#Django #Python #FullStackDevelopment #WebDevelopment #Backend #BackendDevelopment #Database #QuerySets #UserProfile #CodingJourney #LearningToCode #Upskilling #ContinuousLearning
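The get_or_create() idea above can be mimicked in plain Python to make the one-profile-per-user semantics concrete. This is a toy sketch, not Django's actual implementation: the dict key plays the role of the one-to-one constraint, and the function names and fields are illustrative.

```python
# Toy illustration (not Django): one profile per user, created on demand,
# mirroring the semantics of Profile.objects.get_or_create(user=user).
profiles = {}  # maps username -> profile dict; the key enforces "one-to-one"

def get_or_create_profile(username):
    """Return (profile, created), like Django's get_or_create."""
    if username in profiles:
        return profiles[username], False
    profile = {"user": username, "bio": "", "location": ""}
    profiles[username] = profile
    return profile, True

p1, created1 = get_or_create_profile("alice")  # first access creates it
p2, created2 = get_or_create_profile("alice")  # second access reuses it
```

Calling it twice for the same user returns the same object with created=False the second time, which is exactly why it is safe to call on every profile-page request.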
Django Query Filtering and User Profile Implementation
𝗗𝗮𝘆 𝟲𝟰: 𝗛𝗼𝘄 𝗣𝘆𝘁𝗵𝗼𝗻 𝗖𝗹𝗮𝘀𝘀𝗲𝘀 𝗕𝗲𝗰𝗼𝗺𝗲 𝗗𝗷𝗮𝗻𝗴𝗼 𝗠𝗼𝗱𝗲𝗹𝘀

Today I linked two big ideas: Python object-oriented programming and Django models. They are the same thing. A Django model is just a Python class.

Here is what I learned about Python classes. A class is a blueprint; an object is a thing built from that blueprint.
- class Car: defines the blueprint.
- __init__ runs when you build the object. self points to that new object.
- Instance attributes like self.brand are unique to each object.
- Class attributes like company are shared by all objects.

Methods live inside classes.
- Instance method: uses self. Works with your object's data.
- Class method: uses cls. Decorated with @classmethod. Works with the class itself.
- Static method: uses no self or cls. Decorated with @staticmethod. Just a function inside the class.

Inheritance lets a child class reuse a parent class.
- class Dog(Animal): Dog gets all of Animal's code.
- Use super() to run the parent's __init__.

Python does not have true private members. It has conventions.
- _name is protected: a hint to other coders.
- __name is private: Python changes its name to _ClassName__name.

Dunder methods define how your object acts with Python's built-ins.
- __str__: for print() and str().
- __len__: for len().
- __eq__: for ==.

Abstract base classes force subclasses to write specific methods.
- from abc import ABC, abstractmethod
- @abstractmethod means "you must write this method".

Now for Django. A Django model is a Python class that inherits from models.Model.
- Each class attribute becomes a database column.
- Django reads these attributes and creates the SQL table for you.

You do not write SQL. You run two commands:
- python manage.py makemigrations
- python manage.py migrate

The __str__ method in your model controls what you see in the Django admin. Without it you see "Post object (1)"; with it you see your post title.

OOP is the foundation. Django models are the practical application. A model class maps directly to a database table, fields map to columns, and your __str__ method controls the display. Understanding Python classes first makes Django models obvious.

Source: https://lnkd.in/gChPWWZS
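The class mechanics described above can be shown in a few lines of plain Python. The Animal/Dog names are illustrative; the __str__ behavior here is the same mechanism Django's admin uses to display model objects.

```python
# Plain-Python sketch of the post's ideas: __init__, instance vs class
# attributes, inheritance via super(), and __str__ controlling display.
class Animal:
    kingdom = "Animalia"          # class attribute, shared by all instances

    def __init__(self, name):
        self.name = name          # instance attribute, unique per object

    def __str__(self):            # what print() and str() show
        return f"Animal: {self.name}"

class Dog(Animal):
    def __init__(self, name, breed):
        super().__init__(name)    # run the parent's __init__
        self.breed = breed

    def __str__(self):            # override the parent's display
        return f"Dog: {self.name} ({self.breed})"

d = Dog("Rex", "Labrador")
```

Without the __str__ override you would see the parent's version, just as a Django model without __str__ shows the generic "Post object (1)".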
🚀 Write Cleaner, Faster, Scalable Python — For System Design & Product Roles

Syntax isn't enough. These concepts separate good devs from great ones 👇

🔹 References, Not Values
a = [1, 2]; b = a; b.append(3)
print(a)  # [1, 2, 3]
🌍 Shared object (like a Google Doc) → impacts bugs & performance

🔹 == vs is
a = [1]; b = [1]
a == b  # True
a is b  # False
👉 Value vs identity ✅ Use "is None"

🔹 __dict__ (Object Storage)
u.__dict__  # {'name': 'Abhi', 'age': 25}
🌍 Backbone of Django models / serializers

🔹 __slots__ (Memory Optimization)
class U: __slots__ = ['name']
⚡ Saves RAM in large-scale object creation

🔹 setattr (Dynamic Attributes)
for k, v in data.items(): setattr(u, k, v)
🌍 Map API JSON → objects

🔹 getattr (Dynamic Execution)
getattr(obj, "add")(2, 3)
🌍 Replace bulky if-else (plugins, routers)

🔹 Decorators (Reusable Logic)
@auth
def api(): ...
🌍 Used in Django, Flask (auth, logging)

🔹 Decorator Order
@A
@B  # A(B(func))

🔹 __new__ vs __init__
Object creation vs initialization
🌍 Used in Singletons (DB connections)

🔹 O(1) Lookup (dict/set)
"x" in set_data  # fast
🌍 Caching, dedup, auth

🔹 "".join() over +=
"".join(list_data)
⚡ Avoids repeated allocations

🔹 List Comprehension
[x * x for x in range(5)]
⚡ Cleaner + faster transforms

🔹 Generators
yield i
🌍 Handle large data without memory crash

🔹 islice (Lazy slicing)
list(islice(gen(), 5))

🔹 Threading vs AsyncIO
- Threading → I/O tasks
- AsyncIO → high concurrency (FastAPI)

🔹 EAFP (Pythonic)
try: val = d["k"]
except KeyError: val = None
⚡ Faster than pre-checks when the key is usually present

💡 Powers: Django, FastAPI, AI pipelines, scalable systems

🎯 Grow to Architect Level:
✔ Clean code
✔ Scalable design
✔ Strong fundamentals

#Python #Backend #SystemDesign #SoftwareEngineer #AdvancedPython #TechCareers
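Two of the patterns above, getattr-based dispatch and EAFP, run as-is in a few lines. The Calculator class and the operation name are illustrative stand-ins for the "plugins, routers" use case.

```python
# getattr-based dispatch: look a method up by name instead of writing
# an if/elif chain over operation names.
class Calculator:
    def add(self, a, b):
        return a + b
    def mul(self, a, b):
        return a * b

calc = Calculator()
op = "add"                          # e.g. taken from a request or config
result = getattr(calc, op)(2, 3)    # dynamic method lookup, then call

# EAFP: try the access first, handle the miss, instead of pre-checking.
d = {"k": 1}
try:
    val = d["missing"]
except KeyError:
    val = None
```

The dispatch version scales to new operations by adding a method, with no branching code to touch.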
Django ORM Internals and Query Optimization — What Every Backend Developer Should Understand

What is the Django ORM really doing?
The Django ORM is an abstraction layer that converts Python code into SQL queries. When you write:

books = Book.objects.all()

Django does not immediately hit the database. Instead, it creates a QuerySet — a lazy object that represents the SQL query. The actual database call happens only when the data is evaluated. Examples of evaluation:
- Iterating over the QuerySet
- Converting it to a list
- Accessing elements
This concept is called lazy loading.

How QuerySets work internally
A QuerySet goes through multiple steps:
1. Query construction: Django builds a SQL query internally using a query compiler
2. Optimization: it decides joins, filters, and conditions
3. Execution: the query is sent to the database
4. Result caching: results are stored to avoid repeated queries

This means reusing the same QuerySet can save queries, while creating new QuerySets repeatedly can hurt performance.

The real problem: N+1 queries
One of the biggest mistakes developers make:

books = Book.objects.all()
for book in books:
    print(book.author.name)

This creates 1 query for the books and N queries for the authors. This is inefficient and slows down applications at scale.

Optimization techniques

1. select_related()
Used for ForeignKey and OneToOne relationships.
books = Book.objects.select_related('author')
This performs a SQL JOIN and fetches related data in a single query.

2. prefetch_related()
Used for ManyToMany or reverse relationships.
authors = Author.objects.prefetch_related('books')
This runs separate queries but combines results efficiently in Python.

3. only() and defer()
Fetch only required fields:
Book.objects.only('title')
Reduces data transfer and speeds up queries.

4. values() and values_list()
Return dictionaries or tuples instead of full model objects:
Book.objects.values('title', 'price')
Useful for APIs and data-heavy operations.

Why this matters
Poor ORM usage leads to slow APIs, high database load, and a bad user experience. Optimized queries bring faster response times, better scalability, and efficient resource usage.

#Python #Django #ORM #BackendFramework #BackendDevelopment #SoftwareDevelopment #QuerySets #SQL #Optimization #Scalable #Fast_API_Response
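The 1-versus-N query cost above can be made visible with a toy model in plain Python. This is not Django; a counter stands in for database round-trips, and the tables and function names are invented for illustration.

```python
# Toy model (not Django) of the N+1 pattern: count "round-trips" explicitly.
queries = {"count": 0}

authors_table = {1: "Ann", 2: "Bob"}
books_table = [("Book A", 1), ("Book B", 2), ("Book C", 1)]

def fetch_books():
    queries["count"] += 1            # one query for all books
    return list(books_table)

def fetch_author(author_id):
    queries["count"] += 1            # one query per author lookup
    return authors_table[author_id]

# Naive loop: 1 + N queries (the N+1 problem)
for title, author_id in fetch_books():
    fetch_author(author_id)
naive = queries["count"]             # 4 trips for 3 books

# select_related-style: one joined fetch up front, then pure Python
queries["count"] = 0
queries["count"] += 1                # single joined query
joined = [(t, authors_table[a]) for t, a in books_table]
```

With 3 books the naive loop costs 4 trips and the joined version costs 1; with 10,000 books the gap is what kills the API.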
bulk_create and bulk_update don't behave like regular Django saves. Most developers find out the hard way, or never realise it!

The assumption: they're just faster versions of calling .save() in a loop. Same behaviour, better performance.

save() on a single instance does several things:
1. runs pre_save and post_save signals
2. handles auto-generated fields (note: save() does not call full_clean() itself; model validation normally comes from forms or an explicit full_clean() call)
3. populates the instance's new primary key

bulk_create and bulk_update bypass all of it. No signals. No validation hooks. No per-instance hooks. Django hands a list of objects directly to the database and walks away.

bulk_create - the PK problem
~ By default, bulk_create can return instances without primary keys populated in the Python object, unless update_conflicts or returning_fields is explicitly set.
~ ignore_conflicts=True silently swallows insert failures: no exception, no log, no signal. A uniqueness violation disappears without a trace.

bulk_update - what it can't do
~ bulk_update requires an explicit list of fields. Miss a field and it doesn't update.
~ It cannot update fields using expressions: no F(), no computed values.
~ And like bulk_create, no post_save signals fire. Anything listening for model changes never knows.

The performance gain is real: 1000 inserts in one query vs 1000 round trips. But the tradeoffs are real too.

Takeaway —
-> bulk_create / bulk_update: no signals, no validation hooks, no per-instance hooks
-> bulk_create: PK population in the Python object depends on the backend and flags; MySQL cannot return them at all
-> ignore_conflicts=True: silent failure; uniqueness violations disappear without an exception
-> bulk_update: explicit fields only, no F() expressions; missed fields are silently skipped

Have you been bitten by missing signals after a bulk operation? How did you handle downstream consistency?

#Python #Django #BackendDevelopment #SoftwareEngineering
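The signal-skipping behaviour described above can be sketched as an analogy in plain Python. This is not Django's internals; the hook list stands in for pre_save/post_save receivers, and the function names are invented.

```python
# Analogy (not Django internals): per-instance save runs hooks for each
# object; a bulk path writes everything at once and skips them entirely.
hooks_fired = []
db_rows = []

def save_one(obj):
    hooks_fired.append(("pre_save", obj))    # signal-like hook
    db_rows.append(obj)
    hooks_fired.append(("post_save", obj))   # listeners see the change

def bulk_create(objs):
    db_rows.extend(objs)                     # one batched write, no hooks

save_one("a")              # hooks fire for this row
bulk_create(["b", "c"])    # rows land, but listeners never hear about them
```

Any downstream logic attached to the hooks (cache invalidation, search indexing, audit logs) silently misses the bulk-created rows, which is exactly the consistency problem the post warns about.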
⚡ Want to become a Python Developer? Here's a clear roadmap.

Most people feel confused about what to learn and where to start, so I simplified everything into 3 phases with the most important keywords.

Basics (Build Your Foundation)

🖥️ Core Functions
print() → Display output
input() → Take user input

🔢 Data Types
int, float, str, bool → Store different types of data
type() → Check data type

⚙️ Operators
+ - * / → Calculations
== > < → Comparisons
and or not → Logic building

🧠 Conditions
if → Check condition
elif → Multiple conditions
else → Default case

🔁 Loops
for → Loop through items
while → Repeat until false
break → Stop loop
continue → Skip step

🧩 Functions
def → Create function
return → Send result

📦 Data Structures
list [] → Collection
tuple () → Fixed data
dict {} → Key-value
set {} → Unique values

⚡ Utilities
len() → Length
range() → Sequence
in → Check existence

🛠️ Error Handling
try / except → Handle errors

Intermediate (Start Building Real Projects)

📁 File Handling
open(), read(), write() → Work with files
with → Auto close file

⚠️ Exception Handling
try → Run code
except → Handle error
else → If no error
finally → Always runs

⚡ Short Functions
lambda → One-line function
map() → Apply function
filter() → Filter data
zip() → Combine data

🧠 Comprehensions
List & dict comprehensions → Short, clean loops

🏗️ OOP
class → Blueprint
object → Instance
__init__ → Constructor
self → Current object

📦 Modules
import → Use code
from ... import → Specific import

🔄 Generators
yield → Efficient data handling

🌐 APIs & Data
requests → Call APIs
json → Handle data

Advanced (Become Industry-Ready)

🏗️ Advanced OOP
Inheritance, Polymorphism, Encapsulation, Abstraction

🎯 Decorators
@decorator → Modify functions

⚡ Concurrency
threading, multiprocessing → Run tasks together

🚀 Async Programming
async / await → Non-blocking code

🗄️ Databases
SQL, ORM → Store & manage data

🌍 Web Development
Django → Full framework
FastAPI → High-performance APIs

🔄 Version Control
git, GitHub → Track & share code

⚡ Performance
Optimization → Make code faster

🔐 Security
Authentication, Hashing → Protect systems

Motivation alone is not enough. Consistency builds skill.

💬 Which phase are you currently in?

#Python #Programming #AI #MachineLearning #Developers #Coding #Learning #XevenSolutions
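The "short functions", comprehension, and generator keywords from the intermediate phase above fit into one runnable snippet:

```python
# lambda, map, filter, zip, a comprehension, and a generator in one place.
nums = [1, 2, 3, 4, 5]

doubled = list(map(lambda x: x * 2, nums))        # apply a function to each item
evens = list(filter(lambda x: x % 2 == 0, nums))  # keep only matching items
pairs = list(zip(["a", "b"], [1, 2]))             # combine two sequences
squares = [x * x for x in nums]                   # list comprehension

def countdown(n):                                 # generator: values on demand
    while n > 0:
        yield n
        n -= 1

first_two = list(countdown(5))[:2]
```

Each line maps one keyword from the roadmap to a concrete result, which makes the list easier to internalise than memorising names alone.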
Python Django Split Settings (Best-Practice Structure)

In production-ready Django projects, a single settings.py quickly becomes messy and risky. A better approach is splitting settings by environment:

settings/
    base.py
    development.py
    production.py

🧱 1. base.py (Shared Configuration)
👉 This is the core of the project (used everywhere)

✅ Keep here:
INSTALLED_APPS
MIDDLEWARE (common only)
ROOT_URLCONF
TEMPLATES
WSGI / ASGI
AUTH_USER_MODEL
LANGUAGE / TIME_ZONE
STATIC_URL, MEDIA_URL
Third-party apps

❌ Do NOT include here:
DEBUG
DATABASES
ALLOWED_HOSTS
Security settings (SSL, HSTS)
Environment-specific configs

👉 Rule: only shared configuration

🧪 2. development.py (Local Environment)
👉 Optimized for speed and debugging

✅ Add here:
from .base import *

DEBUG = True
ALLOWED_HOSTS = []

Database:
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": BASE_DIR / "db.sqlite3",
    }
}

Dev-friendly settings:
SECURE_SSL_REDIRECT = False
SESSION_COOKIE_SECURE = False
CSRF_COOKIE_SECURE = False
EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"

❌ Avoid: PostgreSQL config, SSL/HSTS, production email servers

🚀 3. production.py (Live Server)
👉 Secure, optimized, real deployment settings

✅ Add here:
from .base import *
from decouple import config

DEBUG = False

Allowed hosts:
ALLOWED_HOSTS = ["yourdomain.com", "www.yourdomain.com"]

Database (PostgreSQL):
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": config("DB_NAME"),
        "USER": config("DB_USER"),
        "PASSWORD": config("DB_PASSWORD"),
        "HOST": config("DB_HOST"),
        "PORT": config("DB_PORT"),
    }
}

Security (production only):
SECURE_SSL_REDIRECT = True
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
SECURE_HSTS_SECONDS = 31536000
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_HSTS_PRELOAD = True

Static files:
STATIC_ROOT = BASE_DIR / "staticfiles"

❌ Avoid in production: DEBUG = True, SQLite, console email backend, open ALLOWED_HOSTS, insecure cookies

🧠 Key Idea (Simple Rule)
base.py → Shared foundation
development.py → Fast local development
production.py → Secure live system

⚡ Why this matters
✔ prevents production mistakes
✔ improves security
✔ separates environments cleanly
✔ easier scaling & deployment
✔ industry-standard approach

🚀 Final insight
A professional Django project is not defined by features, but by how cleanly it separates environments, security, and configuration logic.
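One missing piece with a split layout is how the project decides which settings file to load. A common convention (an assumption here, not the only one; the project name "myproject" and the DJANGO_ENV variable are illustrative) is to build the DJANGO_SETTINGS_MODULE path from an environment variable in manage.py or wsgi.py:

```python
import os

# Choose the settings module per environment, defaulting to development
# so a bare local checkout "just works". DJANGO_ENV and "myproject" are
# placeholder names for this sketch.
env = os.environ.get("DJANGO_ENV", "development")
settings_module = f"myproject.settings.{env}"   # e.g. myproject.settings.production
os.environ.setdefault("DJANGO_SETTINGS_MODULE", settings_module)
```

On the server you would export DJANGO_ENV=production, and nothing in the codebase needs to change between environments.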
𝐃𝐣𝐚𝐧𝐠𝐨 𝟏𝟎𝟏 𝐟𝐨𝐫 𝐏𝐲𝐭𝐡𝐨𝐧𝐢𝐬𝐭𝐚𝐬 🐍 | 𝐔𝐧𝐝𝐞𝐫𝐬𝐭𝐚𝐧𝐝𝐢𝐧𝐠 𝐃𝐚𝐭𝐚𝐛𝐚𝐬𝐞 𝐐𝐮𝐞𝐫𝐲 𝐄𝐟𝐟𝐢𝐜𝐢𝐞𝐧𝐜𝐲

As a Django application grows, database performance becomes a central topic. One of the most common bottlenecks is the N+1 query problem.

💡 𝐓𝐡𝐞 𝐅𝐚𝐜𝐭: By default, Django's ORM uses "lazy loading." It only fetches related data at the moment it is accessed. While this saves memory, it can lead to an excessive number of database hits during loops.

The N+1 scenario: if you want to display a list of 50 books and their authors:
- One query fetches the 50 books.
- As you loop through the books to show each author's name, Django performs a new database lookup for each individual author.
👉 This results in 51 database trips for a single list.

Technical solutions:

🚀 select_related()
Used for foreign-key (many-to-one) and one-to-one relationships. It performs an SQL JOIN in the initial query.
Book.objects.select_related('author').all()
Instead of many trips, Django fetches everything in one single query.

🚀 prefetch_related()
Used for many-to-many or reverse relationships. It performs a separate lookup for the related objects and joins the data in Python. This effectively reduces hundreds of queries down to two.

🔍 How to identify it: tools like django-debug-toolbar help visualize how many queries are fired per request. If you see the same SQL pattern repeating multiple times, it's a clear indicator that the ORM needs optimization.

𝐓𝐡𝐞 𝐁𝐨𝐭𝐭𝐨𝐦 𝐋𝐢𝐧𝐞: Database "round-trips" are expensive. Using these tools ensures that your application remains performant and scalable, regardless of how much data you are handling.

#Python #Django #WebDevelopment #Database #SoftwareEngineering
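The "count the queries" detection step above is roughly what django-debug-toolbar and Django's assertNumQueries do. Here is a stdlib-only sketch of that idea: a context manager counting calls to a stand-in query function (the counter and run_query are inventions for illustration, not Django APIs).

```python
from contextlib import contextmanager

# Sketch of query counting: run_query stands in for a real database call.
counter = {"n": 0}

def run_query(sql):
    counter["n"] += 1          # in Django this would be a database round-trip
    return []

@contextmanager
def count_queries():
    """Report how many queries ran inside the with-block."""
    start = counter["n"]
    stats = {}
    yield stats
    stats["queries"] = counter["n"] - start

with count_queries() as stats:
    for author_id in (1, 2, 3):
        # same SQL shape fired once per loop iteration: the N+1 smell
        run_query("SELECT name FROM author WHERE id = %s")
```

Seeing the same statement repeated N times inside one request is the signal to reach for select_related or prefetch_related.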
Day-117 📘 Python Full Stack Journey – Django Models to UI Rendering

Today I worked on a complete flow in Django — from creating database models to displaying dynamic data on a webpage. This felt like a true full-stack experience! 🚀

🎯 What I learned today:

🗄️ Model Creation (Database Table)
Defined a model in models.py:

class Course(models.Model):
    course_name = models.CharField(max_length=100)  # CharField requires a max_length
    course_description = models.TextField()

Learned:
- CharField → for small text data (must be given a max_length)
- TextField → for larger content
- Inheriting from models.Model is what turns the class into a database table

🔄 Migrations & Admin Integration
Applied database changes using:
py manage.py makemigrations
py manage.py migrate

Registered the model in admin.py:
admin.site.register(Course)

Managed data through the Django admin (add, edit, delete).
💡 Also learned that missing migrations can cause errors like "no such table".

🌐 Fetching & Displaying Data
Retrieved data in views.py:
details = {'data': Course.objects.all()}

Passed the data to the template and displayed it with a loop:
{% for i in data %}
    <h1>{{ i.course_name }}</h1>
    <p>{{ i.course_description }}</p>
{% endfor %}

🎨 Styling & Layout
- Used Flexbox/Grid to design responsive course cards
- Created a clean UI layout for displaying multiple records dynamically

This session helped me understand how Django connects database → backend → frontend, making applications truly dynamic and data-driven. Excited to build more real-world features using Django! 💻✨

#Django #Python #FullStackDevelopment #WebDevelopment #Backend #Frontend #Database #CodingJourney #LearningToCode #Upskilling #ContinuousLearning
After working with Django REST Framework (DRF) for several years, one pattern shows up again and again: when an API is slow, it's almost always because of inefficient database access, not DRF itself.

Here are some practical techniques I regularly use to make DRF APIs faster in production:

1. Use select_related for ForeignKey joins
If you're accessing related objects inside a loop, you're probably hitting the database multiple times.
orders = Order.objects.select_related('user')

2. Use prefetch_related for ManyToMany / reverse relations
For relationships like many-to-many or reverse foreign keys, use prefetch_related.
products = Product.objects.prefetch_related('tags')
Django runs a separate query but joins the results in Python efficiently — much faster than repeated DB hits.

3. Always use pagination
Returning 10,000 records in a single API call will kill performance. In DRF:
REST_FRAMEWORK = {
    'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.PageNumberPagination',
    'PAGE_SIZE': 20,
}

4. Add database indexing
If you're filtering or ordering frequently, indexing is critical.
class Order(models.Model):
    created_at = models.DateTimeField(db_index=True)
    status = models.CharField(max_length=20, db_index=True)

Queries like:
Order.objects.filter(status='completed').order_by('-created_at')
become significantly faster with proper indexing.

5. Use caching strategically
If your API response doesn't change frequently, cache it. This avoids hitting the database repeatedly for the same data.

In real-world systems, combining these techniques can reduce response time from seconds to milliseconds.
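The caching idea in point 5 can be illustrated with the standard library. Django's cache framework (cache_page, the low-level cache API) works differently; this is just the underlying memoization principle, with an invented expensive_report function standing in for a heavy database aggregation.

```python
from functools import lru_cache

# Count how many times the "expensive" work actually runs.
calls = {"n": 0}

@lru_cache(maxsize=None)
def expensive_report(day):
    calls["n"] += 1                  # pretend this is a heavy DB aggregation
    return f"report-{day}"

first = expensive_report("2024-01-01")    # computed
second = expensive_report("2024-01-01")   # served from cache, no recomputation
```

The second call returns the same result without redoing the work; in a real API the equivalent saving is a database round-trip per request.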
Five things I learned building a Claude Code plugin. Each one is a decision file the plugin captured while I was building it. Link in comments.

(When I say "I wrote" below, I mean Claude wrote most of it while I supervised and said "no, not like that." Claude Code builds Claude Code plugins now - fun times.)

1. Markdown files are the truth. Everything else is a cache.
Decisions are .md files in .claude/decisions/, plus an auto-generated list at claude/rules/decisions.md that loads at session start. A SQLite search index sits alongside and rebuilds from markdown on demand. Humans and agents both read markdown natively; no parsing layer in between. Karpathy's LLM knowledge-bases post took this mainstream. The decision file here predates it, and Claude Code's memory uses the same principle. Not a trend - just what works.

2. No external libraries - just Python's standard library
Just what comes with Python. No pip install anything. It took more work - a custom YAML parser, FTS5 queries by hand (Python's stdlib got TOML but still no YAML!) - but the plugin can never break someone's Python environment.

3. A tiny bash layer so hooks stay fast
Claude Code runs your plugin on every tool use. Python takes ~100ms to start up, so 50 tool calls adds 5 seconds of lag per session. A bash script catches events that don't need Python, keeping latency under 10ms. The first version ran Python on everything - my own plugin annoyed me.

4. A strict order for when policies fire
The plugin runs a handful of policies on every hook - detecting decisions, injecting context, nudging. They fire independently but their outputs merge. Without a firing order, weird bugs emerge: a nudge fires before its context; a validation runs after the file is already written. Now there's a fixed order: block, lifecycle, context, nudges. Even if a policy says "reject", the result gets forced to "allow" - nudge-don't-block is locked in at the architecture level. A policy can misfire, but none can stop Claude.

5. Skill, hooks, CLI - each does one job
Claude Code plugins have three places for logic: a skill (markdown that tells Claude how to behave), hooks (code that runs on events), and a CLI. My first version stuffed everything into the skill - 200 lines of templates, validation, and search logic, trying to be an entire program written in English. Now each layer has one job. The skill says what to do - about 60 lines, nothing more. Hooks enforce correctness. The CLI does the computation.

The real reason this matters: LLM work costs tokens and is probabilistic. Local code is free and deterministic. Move what you can down to the CLI.

---

Would love to hear from others building Claude Code plugins - or thinking about it. What's working, what's stuck.

#ClaudeCode #PluginDevelopment #Python #OpenSource #DevTools
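Point 2's "custom YAML parser" can be approximated in a few stdlib-only lines for the common case of flat key: value front matter in a decision file. This is a simplified sketch of the approach, not the plugin's actual parser, and the sample document is invented.

```python
# Minimal stdlib-only front-matter parser for flat "key: value" headers
# delimited by "---" lines, as found in simple decision markdown files.
def parse_front_matter(text):
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}, text                     # no front matter: body is everything
    meta, i = {}, 1
    while i < len(lines) and lines[i].strip() != "---":
        key, _, value = lines[i].partition(":")
        meta[key.strip()] = value.strip()
        i += 1
    body = "\n".join(lines[i + 1:])         # everything after the closing "---"
    return meta, body

doc = """---
title: use-sqlite-cache
status: accepted
---
We cache the markdown index in SQLite."""
meta, body = parse_front_matter(doc)
```

It handles only flat string values, which is exactly the trade-off of the no-dependencies rule: more hand-written code, zero risk to anyone's Python environment.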