Django doesn't store what's passed to cache.set(). It stores a transformed version of it. Django serializes everything before storage, and that serialization has consequences most engineers never consider.

Here's what actually happens:
1. Before any value reaches the backend, Django serializes it using pickle.
2. Not JSON. Not a string representation. Python's pickle: a binary serialization of the entire object graph.
3. This is why caching a model instance, a queryset result, or a complex nested object just works.

But pickle is also why a network-exposed cache is a critical security vulnerability. Here is how:
-> Pickle deserialization can execute arbitrary Python code. That's not a bug; it's how pickle reconstructs complex objects.
-> An attacker who can write to the cache can craft a malicious pickle payload.
-> When Django deserializes it, that code runs. On the server. With full application privileges.

Precautions against this attack:
- Redis and Memcached should never be publicly accessible: bind to localhost or a private network only.
- Use Redis AUTH or TLS for any cache traffic that crosses a network. Neither is enabled by Django by default.
- django-redis supports pluggable serializers. Replace pickle with msgpack or a custom JSON encoder: safer, and often faster.

The cache feels invisible until it isn't. Treat it like any other network service that touches application data.

Has cache security ever been part of a security review in your stack, or is it assumed safe by default?

#Python #Django #BackendDevelopment #SoftwareEngineering
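The "deserialization executes code" claim is easy to demonstrate without Django at all: pickle rebuilds objects by calling whatever callable `__reduce__` names, so a planted payload can run any picklable callable at `loads()` time. A deliberately harmless sketch (eval of an arithmetic string stands in for the `os.system` call a real attacker would use):

```python
import pickle


class Malicious:
    """A minimal, harmless stand-in for an attacker's payload."""

    def __reduce__(self):
        # __reduce__ tells pickle how to rebuild the object:
        # "call this callable with these args". pickle obeys blindly,
        # so ANY picklable callable runs during loads().
        return (eval, ("2 + 2",))


payload = pickle.dumps(Malicious())   # what an attacker would plant in the cache
result = pickle.loads(payload)        # what Django does on cache.get()
print(result)                         # eval() ran: prints 4
```

If you use django-redis, swapping `OPTIONS["SERIALIZER"]` to its JSON or msgpack serializer (shipped under `django_redis.serializers`) removes this payload class entirely, at the cost of only caching JSON/msgpack-compatible values.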
Django Cache Security Risks with Pickle Serialization
More Relevant Posts
You don't always need Redis. Here's a rate limiter I built in 40 lines of pure Python.

No django-ratelimit. No external dependencies. Just a sliding-window algorithm, a dictionary, and timestamps.

The core idea:
→ Every request logs a timestamp against a client key
→ On each new request, prune anything older than your window
→ If the remaining count hits your limit — return 429
→ Otherwise, log it and let it through

That's the entire algorithm. I plugged it into Django middleware and a per-view decorator, so you can control limits at both levels.

I've used Redis-backed rate limiters in production. They're great when you need them. But for a personal project, an internal tool, or a lightweight API — this does the job without the infrastructure overhead.

Full implementation + Django integration on my Medium. It also covers the one IP-extraction detail that will silently break your limiter if you're behind a proxy — worth checking even if you're using a library.

#Python #Django #BackendDevelopment #SoftwareEngineering #WebDevelopment
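The full implementation lives on the linked Medium post; as a rough sketch of the algorithm described above (class and method names are my own, not the author's):

```python
import time
from collections import defaultdict, deque


class SlidingWindowLimiter:
    """Sliding-window rate limiter: per-key timestamps, pruned on each call."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.hits = defaultdict(deque)   # client key -> request timestamps

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        while q and now - q[0] >= self.window:
            q.popleft()                  # prune entries older than the window
        if len(q) >= self.limit:
            return False                 # caller should respond with 429
        q.append(now)                    # log this request and let it through
        return True
```

A Django middleware would call `allow()` with a key derived from the client IP (minding X-Forwarded-For behind a proxy, as the post warns) and return a 429 response on `False`. Note the dictionary grows with distinct keys, so a long-running process needs a periodic sweep or an LRU cap.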
You don't need Node.js for real-time applications.

This is the assumption I challenged when I needed WebSocket support in a Django project. The common advice is to add a Node service for the real-time components, which introduces a second language, a second deployment pipeline, and an additional point of failure. Django Channels combined with Redis showed that this is unnecessary.

Here's the setup:
- Daphne replaces Gunicorn as the ASGI server, handling both HTTP and WebSocket in the same process.
- ProtocolTypeRouter splits traffic: HTTP requests go to Django views, while WebSocket connections are handled by Channels consumers.
- Redis serves as the channel layer, acting as a message broker and enabling pub/sub across all connected consumers.
- Consumers are async Python classes with methods like receive(), group_send(), and disconnect().

The outcome: a message sent by one client reaches Redis, propagates to every consumer in the group, and is delivered to all connected clients — without leaving the Python ecosystem. No Node. No socket.io. No separate service to maintain. Everything runs in the same Docker container as the rest of the backend, using the same codebase, deployments, and logs.

Sometimes the seemingly boring choice is actually the smartest one.

#django #python #webdev #backend #software #architecture
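A minimal sketch of the routing half of this setup — an asgi.py, where `myproject` and the `chat` app (with its `websocket_urlpatterns`) are hypothetical placeholders, not code from the post:

```python
# asgi.py -- sketch only; adapt module names to your project.
import os

from django.core.asgi import get_asgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
django_http_app = get_asgi_application()  # initialise Django before importing consumers

from channels.auth import AuthMiddlewareStack
from channels.routing import ProtocolTypeRouter, URLRouter

import chat.routing  # hypothetical app exposing websocket_urlpatterns

application = ProtocolTypeRouter({
    "http": django_http_app,                 # regular Django views
    "websocket": AuthMiddlewareStack(        # puts session/user on the scope
        URLRouter(chat.routing.websocket_urlpatterns)
    ),
})
```

The Redis channel layer itself is configured separately in settings via `CHANNEL_LAYERS`, typically with `channels_redis.core.RedisChannelLayer` as the backend.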
I reduced my API response time from 2.3s to 140ms. No Redis. No CDN. No caching layer. Just 4 changes to my Django REST Framework setup that most tutorials never mention.

1. N+1 queries everywhere. My serializer accessed post.author.name on every row. 100 posts = 101 database queries. One select_related('author') brought it down to 1. Response time: 2.3s to 800ms instantly.

2. Using ModelSerializer for read endpoints. ModelSerializer builds its fields dynamically on every request — up to 377x slower than raw Python dicts. I switched read-only endpoints to serializers.Serializer with explicit fields. Another 40% gone.

3. No pagination on list endpoints. I was returning the entire table: 10,000 rows, every request. Added CursorPagination: constant-time queries regardless of dataset size. OFFSET-based pagination breaks at high page numbers. Cursor doesn't.

4. Fetching fields I never used. The serializer returned 15 fields; the frontend used 6. Added .only() and trimmed the serializer.

2.3s to 140ms. Same server. Same database. Same $12/month VPS. The bottleneck was never my infrastructure. It was my code.

Run queryset.explain(analyze=True) on your slowest endpoint. You'll probably find the same mistakes.

Which of these have you tried? #Django #Python #API #WebPerformance #BuildInPublic
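The CursorPagination point is really about keyset pagination: filtering on an ordered, indexed column instead of using OFFSET. The idea, sketched on plain dicts rather than a real queryset (function and field names are illustrative):

```python
def cursor_page(rows, after_id=None, page_size=3):
    """Keyset ("cursor") pagination: the idea behind DRF's CursorPagination.

    OFFSET n scans and discards n rows; filtering on an indexed, ordered
    column (WHERE id > last_seen_id) stays cheap at any depth.
    """
    if after_id is not None:
        # in a database this is an index seek, not a scan
        rows = [r for r in rows if r["id"] > after_id]
    page = rows[:page_size]
    # hand back a cursor only when another full page may exist
    next_cursor = page[-1]["id"] if len(page) == page_size else None
    return page, next_cursor


rows = [{"id": i} for i in range(1, 8)]            # ids 1..7
page1, cur = cursor_page(rows)                     # ids 1, 2, 3
page2, cur = cursor_page(rows, after_id=cur)       # ids 4, 5, 6
page3, cur = cursor_page(rows, after_id=cur)       # id 7, cursor exhausted
```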
In many Django + DRF projects, the same security and configuration issues show up again and again during PR reviews. To address this, I built django-security-hunter — a lightweight CLI tool that surfaces common security risks and Django/DRF misconfigurations before code reaches production. It's designed for teams that want automated checks in local development and CI, not just during review.

Coverage (high level):
• Settings & DRF: production Django settings and REST framework defaults / API exposure hints (when you pass --settings so Django loads).
• Code & templates: risky patterns — XSS-style footguns, SSRF heuristics, unsafe deserialization, secrets in logs, hardcoded secret-like names, and SQL-injection heuristics.
• Reliability / performance hints: concurrency and ORM-style patterns where the applicable rules fire.
• Optional: pip-audit, Bandit, and Semgrep when enabled in config or environment (external tools may need to be installed and on your PATH).

See docs/rules.md in the repository for details and rule IDs — findings are heuristic, so please triage before changing code or configuration.

Product features:
• CLI-first with CI-friendly exit codes
• SARIF output (GitHub Code Scanning integration)
• GitHub Action available on the Marketplace

Quick start:
pip install django-security-hunter
django_security_hunter scan -p . -s yourproject.settings -y -f console

Use the same --settings value as DJANGO_SETTINGS_MODULE so settings-based rules (Django + DRF) run; many file-based checks still run without it.

Goal: make security checks faster and part of everyday development.

Note: static analysis can produce false positives — always verify findings before taking action.
- Found a bug or potential security issue in the tool? Please open an issue in the repository.
- Contributions are welcome — PRs, issues, and feedback help improve the tool for everyone.
Repo: https://lnkd.in/g3vd_RqU PyPI: https://lnkd.in/gkFDFAKt #Django #DRF #CyberSecurity #Python #OpenSource #DevTools #Backend #100DaysOfCode
bulk_create and bulk_update don't behave like regular Django saves. Most developers find out the hard way, or never realise it!

The assumption: they're just faster versions of calling .save() in a loop. Same behaviour, better performance.

save() on a single instance does several things:
1. runs pre_save and post_save signals
2. handles auto-generated fields (note: even save() doesn't run full_clean() — model validation is always an explicit step)
3. populates the instance in place with its new primary key

bulk_create and bulk_update bypass almost all of it. No signals. No per-instance hooks. Django hands a list of objects directly to the database and walks away.

bulk_create — the PK problem
~ bulk_create populates primary keys on the returned instances only on backends that support RETURNING (PostgreSQL, recent SQLite and MariaDB). On MySQL the Python objects come back without PKs.
~ ignore_conflicts=True silently swallows insert failures: no exception, no log, no signal. A uniqueness violation disappears without a trace.

bulk_update — what it can't do
~ bulk_update requires an explicit list of fields. Miss a field — it doesn't update.
~ It cannot update fields using expressions: no F(), no computed values.
~ And like bulk_create: no post_save signals fire. Anything listening for model changes never knows.

The performance gain is real — 1000 inserts in one query vs 1000 round trips. But the tradeoffs are real too.

Takeaway —
-> bulk_create / bulk_update: no signals, no validation, no per-instance hooks
-> bulk_create: PKs populated in the Python objects only where the backend supports RETURNING; not on MySQL
-> ignore_conflicts=True: silent failure, uniqueness violations disappear without exception
-> bulk_update: explicit fields only, no F() expressions, missed fields silently skip

Have you been bitten by missing signals after a bulk operation? How did you handle downstream consistency?

#Python #Django #BackendDevelopment #SoftwareEngineering
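The ignore_conflicts behaviour is easiest to see at the SQL level. On SQLite, ignore_conflicts=True compiles to (roughly) INSERT OR IGNORE — ON CONFLICT DO NOTHING on PostgreSQL — which is why the duplicate vanishes without an exception. A stdlib-only sketch of that SQL, no Django required:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (name TEXT PRIMARY KEY)")
conn.executemany("INSERT INTO item VALUES (?)", [("a",), ("b",)])

# What ignore_conflicts=True boils down to on SQLite: the duplicate "a"
# violates the PK, but the statement succeeds and the row is dropped
# silently -- no exception, nothing to log, nothing for a signal to see.
conn.executemany("INSERT OR IGNORE INTO item VALUES (?)", [("a",), ("c",)])

count = conn.execute("SELECT count(*) FROM item").fetchone()[0]
print(count)   # 3 -- "a", "b", "c"; the conflicting insert left no trace
```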
Your queryset works. But your database is doing the heavy lifting.

Most Django developers treat QuerySets like simple Python objects: chain filters, loop over results, and ship it. But under the hood, every queryset translates into SQL executed by PostgreSQL (or your database). Inefficient queries, N+1 problems, and unnecessary evaluations don't show up in code — they show up in performance, latency, and load.

The real issue is not writing queries. It's understanding when they execute, how many times they execute, and what SQL they generate. Methods like select_related, prefetch_related, and proper indexing are not optimizations — they are baseline requirements once your data grows. Ignoring them turns a working API into a slow system under real traffic.

If you don't know what SQL your queryset generates, are you really in control of your backend?

#Django #QuerySet #BackendEngineering #Performance
FastAPI vs Django: Which Wins in 2026?

The best framework isn't the one with the most features — it's the one that solves your specific bottleneck. If you're choosing for your next project, here's a breakdown:

Why the industry is shifting to FastAPI
- Performance: Built on Starlette and Pydantic, it's one of the fastest Python frameworks. Ideal for high-concurrency or I/O-bound tasks.
- Developer velocity: Automatic OpenAPI (Swagger) docs and Python type hints reduce time spent on documentation.
- Modern stack: Designed for microservices, AI/ML deployments, and modern frontends like React or Next.js.

Why Django isn't going anywhere
- Batteries included: Comes with an admin panel, authentication, and an ORM out of the box.
- Security: Built-in protection against common web vulnerabilities.
- Stability: Strong structure for large-scale monoliths and enterprise applications.

The verdict
Use FastAPI if you want a high-performance engine for modern APIs or microservices. Use Django if you need a fully equipped framework for complex, data-heavy applications.

Which side are you on? Are we moving toward a "FastAPI-first" world, or does Django's ecosystem still reign supreme?

#Python #WebDevelopment #FastAPI #Django #SoftwareEngineering #Backend #CodingTips
Most Django middleware is written with one method. Django actually offers five hooks.

Here's what Django does:
1. Django doesn't call middleware as a single function. It calls specific hooks at specific moments in the request lifecycle.
2. Each hook has a different purpose, different data available, and different consequences for what gets returned.

a. process_request(request):
- Fires after the request object is built, before URL resolution.
- The view hasn't been identified yet. Return a response here and URL resolution never happens; the view never runs.
- Use for: blanket request rejection, IP blocking, early authentication checks.

b. process_view(request, view_func, view_args, view_kwargs):
- Fires after URL resolution: the view function is now known, but not yet called.
- Full access to the view function itself and its arguments. Return a response here and the view never executes, but process_response still runs on the way out.
- Use for: view-specific logic, caching.

c. process_response(request, response):
- Fires after the view returns — always, regardless of what happened upstream.
- Must always return a response; returning None here raises an exception.
- Use for: modifying headers, injecting content, logging response metadata.

d. process_exception(request, exception):
- Fires only when the view raises an unhandled exception.
- Return a response and the exception is handled; process_response runs normally.
- Return None and the exception propagates to the next middleware's process_exception.

e. process_template_response(request, response):
- Fires only when the response has a render() method, like TemplateResponse — after the view, before rendering.
- Must return a response that implements render().
- Use for: altering the template or context data before rendering.

Knowing which hook to use is the difference between middleware that works and middleware that works until it doesn't.

Have you ever had a middleware silently break something downstream?

#Python #Django #BackendDevelopment #SoftwareEngineering
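The ordering is easier to internalize with a dependency-free sketch. This stand-in uses plain dicts for requests and responses and skips process_view (which needs URL resolution), but it mirrors how Django short-circuits and propagates:

```python
class DemoMiddleware:
    """Plain-Python stand-in for Django's middleware hook order.

    Dicts replace HttpRequest/HttpResponse; get_response stands in for
    "the rest of the stack plus the view".
    """

    def __init__(self, get_response):
        self.get_response = get_response
        self.log = []

    def __call__(self, request):
        early = self.process_request(request)
        if early is not None:
            # short-circuit: the view never runs, but process_response
            # still fires on the way out
            return self.process_response(request, early)
        try:
            response = self.get_response(request)
        except Exception as exc:
            response = self.process_exception(request, exc)
            if response is None:
                raise  # propagate to the next middleware's process_exception
        return self.process_response(request, response)

    def process_request(self, request):
        self.log.append("process_request")
        if request.get("blocked"):
            return {"status": 403}  # blanket rejection
        return None

    def process_exception(self, request, exc):
        self.log.append("process_exception")
        return {"status": 500, "error": str(exc)}  # exception handled here

    def process_response(self, request, response):
        self.log.append("process_response")
        return response  # must always return a response


mw = DemoMiddleware(lambda request: {"status": 200})
print(mw({"path": "/"}))       # normal flow through the view
print(mw({"blocked": True}))   # short-circuited: view never ran
print(mw.log)
```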
Django 6.0 introduces a built-in task framework, and it has replaced Celery entirely for InboxToKindle (https://inboxtokindle.com). This approach eliminates the need for Redis, RabbitMQ, or a separate worker process: tasks are managed within PostgreSQL alongside the rest of the data.

I have written up how this framework operates in production, covering the pipeline, the self-rescheduling technique for periodic tasks, and when it may still be appropriate to use Celery. In short: most Django projects just need background tasks that work.

You can read more about it here: https://lnkd.in/eGWyjQ_U

What is your current setup? Are you still using Celery, or have you explored lighter alternatives?

#django #python #celery #webdevelopment #backend
cache.get('user_123') never touches Redis directly. Something else runs first.

The assumption: Django's cache API is a thin wrapper. Call get, retrieve from storage, done. The reality: every cache call passes through a pipeline before a single byte touches the backend.

Here's what happens:
1. Every cache backend in Django inherits from BaseCache. BaseCache owns the entire caching contract: get, set, delete, incr, get_or_set.
2. The concrete backend, like RedisCache or MemcachedCache, implements only the storage-specific parts. The abstraction layer runs first. Always!
3. The first thing BaseCache does is transform the key.
4. Every key passes through make_key(), which prepends KEY_PREFIX and VERSION from settings. The key 'user_123' becomes ':1:user_123' in storage by default.

Debugging a cache miss on a key you can see in storage? Check the actual, transformed key first!

Django's cache versioning lets the entire cache be invalidated by bumping VERSION in settings. The old keys still exist in storage until eviction clears them.

The cache API feels simple because BaseCache is doing the hard work invisibly.

Have you ever debugged a cache miss only to find the key was there, just under a different name?

#Python #Django #BackendDevelopment #SoftwareEngineering
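Django's default key function is tiny; here it is reproduced standalone (this mirrors django.core.cache.backends.base.default_key_func, which you can replace per-cache via the KEY_FUNCTION setting):

```python
def make_key(key, key_prefix="", version=1):
    """Django's default cache key transform: '<prefix>:<version>:<key>'."""
    return "%s:%s:%s" % (key_prefix, version, key)


print(make_key("user_123"))                      # ':1:user_123' -- the default
print(make_key("user_123", key_prefix="prod"))   # 'prod:1:user_123' -- with KEY_PREFIX
print(make_key("user_123", version=2))           # ':2:user_123' -- after a VERSION bump
```

This is why a VERSION bump "invalidates" everything instantly: the old ':1:' keys are simply never looked up again, and sit in storage until eviction.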