Python Django Split Settings (Best Practice Structure)

In production-ready Django projects, a single settings.py quickly becomes messy and risky. A better approach is splitting settings by environment:

settings/
    base.py
    development.py
    production.py

🧱 1. base.py (Shared Configuration)

👉 This is the core of the project (used everywhere).

✅ Keep here:
→ INSTALLED_APPS
→ MIDDLEWARE (common only)
→ ROOT_URLCONF
→ TEMPLATES
→ WSGI / ASGI
→ AUTH_USER_MODEL
→ LANGUAGE / TIME_ZONE
→ STATIC_URL, MEDIA_URL
→ Third-party apps

❌ Do NOT include here:
→ DEBUG
→ DATABASES
→ ALLOWED_HOSTS
→ Security settings (SSL, HSTS)
→ Environment-specific configs

👉 Rule: only shared configuration.

🧪 2. development.py (Local Environment)

👉 Optimized for speed and debugging.

✅ Add here:

```python
from .base import *

DEBUG = True
ALLOWED_HOSTS = []

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": BASE_DIR / "db.sqlite3",
    }
}

# Dev-friendly settings
SECURE_SSL_REDIRECT = False
SESSION_COOKIE_SECURE = False
CSRF_COOKIE_SECURE = False
EMAIL_BACKEND = "django.core.mail.backends.console.EmailBackend"
```

❌ Avoid here:
→ PostgreSQL config
→ SSL/HSTS
→ production email servers

🚀 3. production.py (Live Server)

👉 Secure, optimized, real deployment settings.

✅ Add here:

```python
from .base import *
from decouple import config

DEBUG = False

ALLOWED_HOSTS = ["yourdomain.com", "www.yourdomain.com"]

# Database (PostgreSQL)
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": config("DB_NAME"),
        "USER": config("DB_USER"),
        "PASSWORD": config("DB_PASSWORD"),
        "HOST": config("DB_HOST"),
        "PORT": config("DB_PORT"),
    }
}

# Security (production only)
SECURE_SSL_REDIRECT = True
SESSION_COOKIE_SECURE = True
CSRF_COOKIE_SECURE = True
SECURE_HSTS_SECONDS = 31536000
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_HSTS_PRELOAD = True

# Static files
STATIC_ROOT = BASE_DIR / "staticfiles"
```

❌ Avoid in production:
→ DEBUG = True
→ SQLite
→ console email backend
→ open ALLOWED_HOSTS
→ insecure cookies

🧠 Key Idea (Simple Rule)

| File | Purpose |
| --- | --- |
| base.py | Shared foundation |
| development.py | Fast local development |
| production.py | Secure live system |

⚡ Why this matters
✔ prevents production mistakes
✔ improves security
✔ separates environments cleanly
✔ easier scaling & deployment
✔ industry-standard approach

🚀 Final insight
A professional Django project is not defined by features, but by how cleanly it separates environments, security, and configuration logic.
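One detail the split-settings layout leaves implicit: Django picks which module to load from the DJANGO_SETTINGS_MODULE environment variable. A minimal sketch of that wiring, where the project package name `config` is a hypothetical placeholder:

```python
# manage.py (sketch) - "config" is a placeholder project package name
import os
import sys


def main():
    # Default to development for local work; deployments override this,
    # e.g. export DJANGO_SETTINGS_MODULE=config.settings.production
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.development")
    from django.core.management import execute_from_command_line
    execute_from_command_line(sys.argv)


if __name__ == "__main__":
    main()
```

The same environment-variable override applies to wsgi.py / asgi.py on the server, so production never has to touch development.py.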
Five things I learned building a Claude Code plugin. Each one is a decision file the plugin captured while I was building it. Link in comments.

(When I say "I wrote" below, I mean Claude wrote most of it while I supervised and said "no, not like that." Claude Code builds Claude Code plugins now - fun times.)

1. Markdown files are the truth. Everything else is a cache.
Decisions are .md files in .claude/decisions/, plus an auto-generated list at claude/rules/decisions.md that loads at session start. A SQLite search index sits alongside and rebuilds from markdown on demand. Humans and agents both read markdown natively; no parsing layer in between. Karpathy's LLM knowledge-bases post took this mainstream. The decision file here predates it, and Claude Code's memory uses the same principle. Not a trend - just what works.

2. No external libraries - just Python's standard library.
Just what comes with Python. No pip install anything. It took more work - a custom YAML parser, FTS5 queries by hand (Python's stdlib got TOML but still has no YAML!) - but the plugin can never break someone's Python environment.

3. A tiny bash layer so hooks stay fast.
Claude Code runs your plugin on every tool use. Python takes ~100ms to start up, so 50 tool calls adds 5 seconds of lag per session. A bash script catches events that don't need Python, keeping latency under 10ms. The first version ran Python on everything - my own plugin annoyed me.

4. A strict order for when policies fire.
The plugin runs a handful of policies on every hook - detecting decisions, injecting context, nudging. They fire independently but their outputs merge. Without a firing order, weird bugs emerge: a nudge fires before its context, or a validation runs after the file is already written. Now there's a fixed order: block, lifecycle, context, nudges. Even if a policy says "reject", the result gets forced to "allow" - nudge-don't-block is locked in at the architecture level. A policy can misfire, but none can stop Claude.

5. Skill, hooks, CLI - each does one job.
Claude Code plugins have three places for logic: a skill (markdown that tells Claude how to behave), hooks (code that runs on events), and a CLI. My first version stuffed everything into the skill: 200 lines of templates, validation, and search logic - trying to be an entire program written in English. Now each layer has one job. The skill says what to do - about 60 lines, nothing more. Hooks enforce correctness. The CLI does the computation. The real reason this matters: LLM work costs tokens and is probabilistic; local code is free and deterministic. Move what you can down to the CLI.

---

Would love to hear from others building Claude Code plugins - or thinking about it. What's working, what's stuck.

#ClaudeCode #PluginDevelopment #Python #OpenSource #DevTools
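The "markdown is truth, SQLite is a cache" split from point 1 can be sketched with nothing but the standard library. This is a simplified stand-in, not the plugin's actual code: it uses a plain table with LIKE matching where the real index uses FTS5, and the file layout and function names are hypothetical.

```python
import sqlite3
from pathlib import Path


def rebuild_index(decisions_dir: Path, db_path: str = ":memory:") -> sqlite3.Connection:
    """Rebuild a throwaway search index from the markdown decision files.

    The .md files stay the source of truth; this DB can be deleted and
    regenerated from them at any time.
    """
    conn = sqlite3.connect(db_path)
    conn.execute("DROP TABLE IF EXISTS decisions")
    conn.execute("CREATE TABLE decisions (name TEXT, body TEXT)")
    for md in sorted(decisions_dir.glob("*.md")):
        conn.execute("INSERT INTO decisions VALUES (?, ?)", (md.stem, md.read_text()))
    conn.commit()
    return conn


def search(conn: sqlite3.Connection, term: str) -> list:
    # Naive LIKE matching; the actual plugin runs FTS5 MATCH queries.
    rows = conn.execute(
        "SELECT name FROM decisions WHERE body LIKE ?", (f"%{term}%",)
    ).fetchall()
    return [name for (name,) in rows]
```

Because the index is rebuilt from disk on demand, there is no sync problem: if the cache and the markdown ever disagree, the markdown wins by construction.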
I used Django REST Framework for 5 years. Then I discovered these hidden features.

I felt like I had been using a Ferrari in first gear the whole time. 😅

Here are 5 DRF secrets most developers never find 👇

---

🔐 Secret 1 — SerializerMethodField

Most developers hardcode data in serializers. But SerializerMethodField lets you add ANY custom computed field dynamically.

Example: instead of storing full_name in the database, compute it on the fly from first_name + last_name.

```python
class UserSerializer(serializers.ModelSerializer):
    full_name = serializers.SerializerMethodField()

    class Meta:
        model = User
        fields = ['id', 'full_name']

    def get_full_name(self, obj):
        return f"{obj.first_name} {obj.last_name}"
```

No extra DB column. No migration. Just clean data. ✅

---

🚦 Secret 2 — Built-in API Throttling

Most developers build custom rate limiting from scratch. DRF already has it built in — and almost nobody uses it.

3 types:
→ AnonRateThrottle — for unauthenticated users
→ UserRateThrottle — for authenticated users
→ ScopedRateThrottle — per specific endpoint

```python
REST_FRAMEWORK = {
    'DEFAULT_THROTTLE_RATES': {
        'anon': '100/day',
        'user': '1000/day',
    }
}
```

One settings block (once the throttle classes are enabled). Full rate limiting. No custom code. 🔥

---

📄 Secret 3 — 3 Types of Pagination (Nobody Uses Them All)

Most developers only know PageNumberPagination. But DRF has 3:
→ PageNumberPagination — classic page=1, page=2
→ LimitOffsetPagination — ?limit=10&offset=20 — perfect for infinite scroll
→ CursorPagination — uses an opaque cursor, so clients can't jump to arbitrary offsets — perfect for real-time feeds

I switched one client's API from PageNumber to Cursor — and their feed felt 10x smoother instantly. 🚀

---

🎯 Secret 4 — @action Decorator on ViewSets

Most developers create a separate APIView for every custom endpoint. Messy. Repetitive. Unnecessary.

@action lets you add custom endpoints directly inside your ViewSet:

```python
class UserViewSet(viewsets.ModelViewSet):
    @action(detail=True, methods=['post'])
    def send_welcome_email(self, request, pk=None):
        user = self.get_object()
        # send email logic
        return Response({'status': 'email sent'})
```

URL: /users/{id}/send_welcome_email/

Clean. Organized. Professional. ✅

---

⚡ Secret 5 — select_related inside get_queryset

Most developers let the serializer lazily trigger queries for related objects. The result? N+1 queries destroying your API performance.

The fix? Override get_queryset in your ViewSet:

```python
def get_queryset(self):
    return Order.objects.select_related(
        'user', 'product'
    ).prefetch_related('items')
```

One change. Query count drops from 47 to 2. ⚡

This is exactly how I reduced a client's API response time from 8 seconds to 800ms.

---

These 5 features exist in every DRF project. Most developers never find them. The ones who do — write cleaner code, build faster APIs, and deliver better products.

---

Which one did you already know?

#Django #DjangoRestFramework #Python #BackendDevelopment #WebDevelopment #FullStackDeveloper #API #SoftwareEngineering #OpenToWork #RemoteWork #ProgrammingTips
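One caveat on the throttling secret: the rates only take effect once throttle classes are enabled, either globally or per view. A hedged sketch of both forms — the 'uploads' scope name and UploadView are made up for illustration:

```python
# settings.py
REST_FRAMEWORK = {
    'DEFAULT_THROTTLE_CLASSES': [
        'rest_framework.throttling.AnonRateThrottle',
        'rest_framework.throttling.UserRateThrottle',
    ],
    'DEFAULT_THROTTLE_RATES': {
        'anon': '100/day',
        'user': '1000/day',
        'uploads': '20/hour',   # consumed by the scoped view below
    },
}

# views.py - per-endpoint limit via ScopedRateThrottle
from rest_framework.throttling import ScopedRateThrottle
from rest_framework.views import APIView


class UploadView(APIView):
    throttle_classes = [ScopedRateThrottle]
    throttle_scope = 'uploads'
```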
Django ORM Internals and Query Optimization — What Every Backend Developer Should Understand

What is the Django ORM Really Doing?

The Django ORM is an abstraction layer that converts Python code into SQL queries. When you write:

```python
books = Book.objects.all()
```

Django does not immediately hit the database. Instead, it creates a QuerySet — a lazy object that represents the SQL query. The actual database call happens only when the data is evaluated.

Examples of evaluation:
- iterating over the QuerySet
- converting it to a list
- accessing elements

This concept is called lazy loading.

How QuerySets Work Internally

A QuerySet goes through multiple steps:
1. Query construction — Django builds the SQL query internally using a query compiler
2. Optimization — it decides joins, filters, and conditions
3. Execution — the query is sent to the database
4. Result caching — results are stored to avoid repeated queries

This means:
- reusing the same QuerySet can save queries
- creating new QuerySets repeatedly can hurt performance

The Real Problem: N+1 Queries

One of the biggest mistakes developers make:

```python
books = Book.objects.all()
for book in books:
    print(book.author.name)
```

This creates:
- 1 query for books
- N queries for authors

This is inefficient and slows down applications at scale.

Optimization Techniques

1. select_related() — used for ForeignKey and OneToOne relationships:

```python
books = Book.objects.select_related('author')
```

This performs a SQL JOIN and fetches related data in a single query.

2. prefetch_related() — used for ManyToMany or reverse relationships:

```python
authors = Author.objects.prefetch_related('books')
```

This runs separate queries but combines results efficiently in Python.

3. only() and defer() — fetch only required fields:

```python
Book.objects.only('title')
```

Reduces data transfer and speeds up queries.

4. values() and values_list() — return dictionaries or tuples instead of full model objects:

```python
Book.objects.values('title', 'price')
```

Useful for APIs and data-heavy operations.

Why This Matters

Poor ORM usage leads to:
- slow APIs
- high database load
- bad user experience

Optimized queries result in:
- faster response times
- better scalability
- efficient resource usage

#Python #Django #ORM #BackendFramework #BackendDevelopment #SoftwareDevelopment #QuerySets #SQL #Optimization #Scalable #Fast_API_Response
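The lazy-evaluation-plus-caching behaviour described above can be illustrated without Django at all. This is a minimal pure-Python analogy, not Django's actual implementation: building the object is free, the "query" runs only on first evaluation, and the result is cached for reuse.

```python
class LazyQuery:
    """Tiny stand-in for a QuerySet: lazy to build, cached after evaluation."""

    def __init__(self, fetch):
        self._fetch = fetch          # callable standing in for "hit the database"
        self._result_cache = None
        self.executions = 0          # instrumentation for the demo

    def _evaluate(self):
        if self._result_cache is None:
            self.executions += 1
            self._result_cache = list(self._fetch())
        return self._result_cache

    def __iter__(self):
        return iter(self._evaluate())

    def __len__(self):
        return len(self._evaluate())


books = LazyQuery(lambda: ["Dune", "Emma"])   # no "query" has run yet
assert books.executions == 0
list(books)                                   # first evaluation runs the fetch
assert books.executions == 1
len(books)                                    # served from the result cache
assert books.executions == 1
```

This is also why reusing one QuerySet saves queries while building fresh QuerySets in a loop does not: each new object starts with an empty cache.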
Day 130-131 📘 Python Full Stack Journey – Django Query Filtering 🔍

Today I explored how to filter data in Django using QuerySets, which is a powerful way to retrieve specific records from the database. 🚀

🎯 What I learned today:

🔎 Django Filter Queries
Used different filtering techniques on the Employee model:
- startswith → fetch records where a field starts with a value
- endswith → fetch records where a field ends with a value
- icontains → case-insensitive search within a field

Example:

```python
Employee.objects.filter(fullname__startswith='A')
```

📊 Displaying Filtered Data
- Passed multiple filtered datasets from views → template
- Used Django template loops to display results dynamically

⚙️ Multiple Filters in One View
- Combined multiple queries in a single function
- Even filtered data from different models in one page

💡 This makes it easy to build features like search, filtering, and categorization in web applications.

Django User Profile (One-to-One Relationship)

Today I implemented a User Profile system in Django using a One-to-One relationship, taking a big step toward building personalized user experiences.

👤 One-to-One Relationship
- Created a Profile model linked to Django's built-in User model
- Ensured one user → one profile using:

```python
user = models.OneToOneField(User, on_delete=models.CASCADE)
```

- Understood how Django handles relationships like one-to-one, one-to-many, and many-to-many

🗄️ Profile Model Fields
- Added fields like bio, location, birth date, and profile image
- Used blank=True and null=True for optional fields
- Used ImageField for uploading profile pictures

🌐 Profile Display & Editing
- Displayed logged-in user details using {{ request.user.username }}
- Created a profile page and an edit form page
- Used get_or_create() to automatically create a profile if it doesn't exist

🔐 Access Control
- Used the @login_required decorator to restrict access to logged-in users only

📸 Handling File Uploads
- Used enctype="multipart/form-data" for image uploads
- Displayed uploaded images dynamically in templates

This session helped me understand how to build user-specific features and profiles, along with how Django makes data retrieval flexible and efficient — both essential for real-world applications like social platforms, search systems, and dashboards.

Excited to keep building more personalized and dynamic applications while exploring advanced queries next! 💻✨

#Django #Python #FullStackDevelopment #WebDevelopment #Backend #BackendDevelopment #Database #QuerySets #UserProfile #CodingJourney #LearningToCode #Upskilling #ContinuousLearning
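Pulling the profile pieces above together, here is a minimal sketch of how they might fit. The field names, view name, and template name are illustrative, not taken from the original project:

```python
# models.py
from django.contrib.auth.models import User
from django.db import models


class Profile(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    bio = models.TextField(blank=True)
    location = models.CharField(max_length=100, blank=True)
    birth_date = models.DateField(null=True, blank=True)
    image = models.ImageField(upload_to="profiles/", blank=True, null=True)


# views.py
from django.contrib.auth.decorators import login_required
from django.shortcuts import render


@login_required
def profile_view(request):
    # get_or_create() auto-creates the profile on a user's first visit
    profile, _created = Profile.objects.get_or_create(user=request.user)
    return render(request, "profile.html", {"profile": profile})
```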
🔥 My client was losing customers every single day.

The reason? Their Django API took 8–10 seconds to respond. I fixed it in under 800ms. Here's exactly how 👇

---

First, let me paint the picture.

Imagine you open an app. You wait... 3 seconds. You wait... 6 seconds. You wait... 9 seconds. You close it and never come back.

That's what was happening to my client's users — every single day. They came to me desperate. I got to work.

---

Here's the exact 5-step process I used:

⚡ Step 1 — Stop guessing, find the real problem

I see many developers randomly "optimizing" without knowing what's actually slow. I used Django Debug Toolbar first. The result shocked even me.

👉 1 single API call was firing 47 separate database queries. 47. For ONE request. Classic N+1 problem — and it was silently killing performance.

---

⚡ Step 2 — Kill the N+1 with 2 lines of code

select_related() and prefetch_related() — two of the most underused tools in Django.

Before: 47 database hits per request
After: 2 database hits per request
Response time: 8 seconds → 2 seconds

Just from 2 lines of code. 🤯

---

⚡ Step 3 — Database indexes (the most ignored optimization)

The queries were doing full table scans. On a table with 2,000,000+ records. Every. Single. Request.

Added composite indexes on the right columns. Query execution time dropped by 60% overnight.

---

⚡ Step 4 — Redis caching for the win

Some data never changed — but the app was fetching it from the database on every request. That's like googling your own phone number every time you need to call yourself.

Cached it in Redis with a 15-minute TTL. Result → those endpoints now respond in under 50ms. ⚡

---

⚡ Step 5 — The DRF serializer was the hidden villain

The serializer was fetching 40+ fields. The API only needed 8. Used only() and defer() to fetch exactly what was needed. Nothing more. Nothing less.

---

Final result?

❌ Before: 8–10 seconds
✅ After: under 800ms

A client who was losing users every day now has a fast, reliable product.

---

The real lesson here? Most Django performance problems are NOT about Django. They are about developers who never question their own code.

Debug first. Optimize second. Never guess.

---

Are you dealing with slow APIs right now? Drop your biggest Django performance challenge in the comments — I read every single one. 👇

#Python #Django #WebDevelopment #BackendDevelopment #FullStackDeveloper #PerformanceOptimization #OpenToWork #RemoteWork #SoftwareEngineering #AWS
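Step 4's idea — cache rarely-changing data behind a TTL — can be illustrated with a minimal in-process cache using only the standard library. This is a teaching sketch of the pattern, not the production setup, which used Redis (typically via Django's cache framework):

```python
import time


class TTLCache:
    """Minimal time-to-live cache illustrating the Redis pattern in Step 4."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable clock, handy for testing
        self._store = {}            # key -> (value, expires_at)

    def get_or_set(self, key, compute):
        now = self.clock()
        hit = self._store.get(key)
        if hit is not None and hit[1] > now:
            return hit[0]                      # fresh hit: no recompute
        value = compute()                      # the expensive "database" work
        self._store[key] = (value, now + self.ttl)
        return value
```

Usage mirrors the post: wrap the expensive query in `get_or_set("key", fetch_fn)` and the database is only touched once per TTL window instead of once per request.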
Django 6.0 introduces a built-in task framework that, for InboxToKindle (https://inboxtokindle.com), replaces Celery entirely. This approach eliminates the need for Redis, RabbitMQ, or a separate worker process: tasks are managed in PostgreSQL alongside the rest of the data. I have written up how this framework operates in production, covering the pipeline, the self-rescheduling technique for periodic tasks, and when Celery may still be the right choice. In short: most Django projects just need background tasks that work. You can read more about it here: https://lnkd.in/eGWyjQ_U

What is your current setup? Are you still using Celery, or have you explored lighter alternatives?

#django #python #celery #webdevelopment #backend
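The self-rescheduling technique mentioned above can be sketched roughly as follows. This is a hedged sketch assuming a DEP 14-style API (`@task`, `.using(run_after=...)`, `.enqueue()`); verify the exact names against the `django.tasks` documentation for your Django version, and note the task body is purely illustrative:

```python
# tasks.py - hedged sketch of a self-rescheduling periodic task
from datetime import timedelta

from django.tasks import task  # assumed DEP 14-style API
from django.utils import timezone


@task()
def process_inbox():
    # ... do one batch of work (illustrative placeholder) ...

    # Re-enqueue itself to run again in 5 minutes - the "self-rescheduling"
    # trick that replaces an external beat/scheduler process.
    process_inbox.using(
        run_after=timezone.now() + timedelta(minutes=5)
    ).enqueue()
```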
You can be picky about what runs in your GitHub Actions workflow... and I am not talking about using Bash or Python scripts. I mean two keywords built into GitHub Actions: `needs` and `if`.

This, I think, is the best way I can explain what they do:

- `needs`: Jobs in GitHub Actions are independent by default. To chain them together and control the order your workflow runs in, you use `needs`. It creates job dependencies, similar to Terraform's `depends_on`. For example, in my `build` workflow script, `build-backend` depends on `changes` and can only run after `changes` is done. You can specify multiple jobs in `needs`.
- `if`: This takes `needs` a step further. `needs` also acts like a dictionary: it holds metadata from the jobs specified in it, like the result of each job run. You can use results (success, skipped, failure, etc.) to set `if` conditions on another job. With `needs` alone, a job only runs if the jobs it needs succeeded. With `needs` plus `if`, you can not only make a job run regardless of the results of the jobs it needs, you can also build complex (not complicated, unless you overdo it) condition logic from the results of multiple previous jobs to control a particular job's behavior.

With that out of the way, let's jump to today's tips. In the previous post, I mentioned a "subsequent-updates" workflow I use that runs only what is needed, performing operations on only the parts relevant to the changes committed. Here is how I do it:

1. Path-based filtering: A "change detector" job at the start of the workflow records which folders (like my backend/, frontend/, or terraform/ directories) actually have new code, then stores this as an "output" for the rest of the workflow to use.

You can find the action on GitHub's Marketplace: https://lnkd.in/ekT-mbeh
Note: the action I linked is not by a GitHub-verified publisher, but it will at least give you an idea of what path filtering can look like.

2. Conditional execution: Instead of running everything, my jobs now have gatekeepers. Using `if` conditions with result values from `needs`, I can make sure the job that builds, runs, scans, and tests my backend container image (`build-backend`) only runs if the backend/ folder was updated:

```yaml
build-backend:
  needs: changes
  if: ${{ needs.changes.outputs.backend == 'true' }}
```

Path-based filtering, and of course properly separating operations into jobs, is vital for this to work.

Using these tips keeps my workflows from spending more time than they should, but there is more I did to make them even faster. Next, I'll be covering caching and parallel jobs.

To see how I am using these practices, here is a link to the workflows in my most recent project: https://lnkd.in/eYzWMTHp
I have recently been reading Swizec Teller's new book Scaling Fast, and in it he mentions architectural complexity. That reminded me of my long-standing wish for a tool that combines database dependencies between Django apps with import dependencies between Django apps.

To date, I have used other tools: graph_models from django-extensions, import-linter (the most recent one), and pyreverse from Pylint. They all do bits of the job, but they require manual stitching together to get a cohesive graph with everything overlaid the right way.

So, over the last couple of days, I've built a new package that combines all of this into a live view which updates as you build your app, plus a management command and a panel for Debug Toolbar.

Why the Django app level, you ask? Model-level graphs are useful, but they get complicated quickly: too many lines. Import graphs at the module level within an app have the same problem - there is too much noise relative to signal to really understand the logical relationships between different components in the system. Django apps, by contrast, naturally represent the logical parts of a project. A whole project is too coarse (unless you're dealing with multiple projects), but within a single Django project it's a good convention for an app to deal with one thing. I know Django projects & apps can be structured in many ways, so it would be interesting to see this tool used on project structures that aren't one-app-per-logical-component.

So, without further ado, here is Django Dependency Map, which combines output from django-extensions' graph_models and grimp (the library import-linter uses) to dynamically map the dependencies between your apps and third-party apps. Initially it was a management command that outputs an HTML file - that still exists.

I then added a live view, plus an integration with Django Debug Toolbar. The live map page has the following features:
- hide nodes and see how the dependencies change
- force-graph & hierarchical graph representations
- detailed information on a single app and its relationships
- import cycle detection
- import violations from import-linter
- Debug Toolbar panel
- export of the graph to Mermaid & DOT formats

My hope is twofold. First, it might reveal things about your projects that you didn't know in terms of how interlinked things are. Second, it may change the way you build your Django apps. I plan to keep it open in another tab and watch it as I build - and, as agents build things, use it as a sense check that the overall architecture is evolving as I expect, rather than checking only at the code level.

The PyPI package is coming very soon, but you can visit the repo here: https:/
Django is consistently ranked as one of the top web frameworks globally, dominating the Python ecosystem and backend development. As of 2025/2026, its position among frameworks is as follows:

- Leader in Python web development: Django remains the primary "batteries-included" choice for Python developers, frequently ranking alongside Flask and FastAPI as the top three Python frameworks.
- Top 5 worldwide "most wanted": It recently ranked as the 4th most wanted framework for web development in global developer surveys.
- Preferred by 74% of Python web developers: According to the Django Developer Survey 2024, roughly 74% of developers in the Python space still prefer Django for full-stack and API development.
- High performance for enterprise: It is classified as one of the top 7 backend frameworks globally across all languages (competing with Laravel, Spring Boot, and Express) due to its scalability and robust security.
- Widely adopted by tech giants: Django powers major platforms including Instagram, Spotify, YouTube, and Pinterest, maintaining its status as a proven, production-ready framework.

While FastAPI is growing in popularity for high-performance microservices, Django's extensive built-in features (admin panel, ORM, and authentication) keep it at the top for rapid, secure application development.
Building Scalable Web Applications with Python, Django, FastAPI, PostgreSQL, and Amazon Web Services

In today's digital landscape, building scalable and high-performance web applications requires a well-integrated technology stack. A powerful combination widely adopted by modern development teams includes Python as the core programming language, Django and FastAPI for backend development, PostgreSQL for reliable data management, and Amazon Web Services for cloud deployment and scalability. This combination enables organisations to build applications that are not only efficient but also flexible and future-ready.

Python plays a central role in this stack due to its simplicity, readability, and extensive ecosystem. It allows developers to write clean and maintainable code while accelerating development timelines. Its versatility makes it suitable for a wide range of applications, from web development to data processing and automation, making it a preferred choice for modern software development.

For backend development, Django and FastAPI serve complementary purposes. Django is a high-level framework designed for rapid development and comes with built-in features such as authentication, an admin panel, and an object-relational mapper, making it ideal for building structured and secure applications. FastAPI, by contrast, is a lightweight, async-first framework well suited to high-performance APIs and microservices.

Data management is handled efficiently by PostgreSQL, a powerful open-source relational database system known for its reliability and advanced capabilities. It supports complex queries, ensures data integrity through ACID compliance, and integrates seamlessly with both Django and FastAPI. This makes it an excellent choice for applications that require consistent and scalable data handling.

When combined, this technology stack creates a robust architecture: the frontend communicates with backend services powered by Django and FastAPI, data is securely stored and managed in PostgreSQL, and the entire application is deployed and scaled using AWS. This integrated approach ensures high performance, scalability, and reliability across all layers of the application.

In real-world scenarios, such as an e-commerce platform, Django can handle user authentication and administrative operations, FastAPI can manage real-time services like order processing and payment integration, PostgreSQL can store transactional data, and AWS can keep the application available and scalable under varying loads. This synergy allows development teams to deliver high-quality applications that meet modern user expectations.

Overall, combining Python with Django, FastAPI, PostgreSQL, and Amazon Web Services provides a strong foundation for building scalable, secure, and high-performing applications, making it an ideal choice for modern full-stack development.