Understanding select_for_update() in Django for Concurrency Control

Database transactions don't prevent two requests from reading the same row simultaneously. A common assumption is that wrapping code in atomic() makes it safe from concurrent modification. That assumption is wrong in a specific and destructive way.

atomic() alone doesn't prevent race conditions. It only guarantees atomicity - all or nothing. It says nothing about what other transactions can read while yours is running. select_for_update() is what actually locks the row.

How select_for_update() works:
1. When Django executes SELECT ... FOR UPDATE, the database places a lock on that row.
2. Any other transaction attempting to read the same row with select_for_update() blocks - it waits until the first transaction commits or rolls back.
3. Only then does the second transaction proceed, with the updated value.

No simultaneous reads. No conflicting writes.

The real catch: select_for_update() must be inside an atomic() block. Always. In autocommit mode each statement is its own transaction, so the lock would vanish the instant the SELECT finishes - and Django enforces this by raising TransactionManagementError if you evaluate a select_for_update() queryset outside a transaction.

nowait and skip_locked control wait behaviour:
- nowait=True -> if the row is already locked, raise an exception immediately instead of waiting.
- skip_locked=True -> if the row is locked, skip it and move on. Useful for task queues where any available row is acceptable.

Takeaway:
-> Transactions tell the database: treat these operations as one.
-> Locks tell the database: treat this row as mine until I'm done.
-> Both are needed. Neither replaces the other.

What concurrency bug has cost you the most debugging time - race condition, deadlock, or something else entirely?

#Python #Django #BackendDevelopment #SoftwareEngineering
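A minimal sketch of both patterns, assuming hypothetical Account and Task models (invented here for illustration):

```python
from django.db import transaction

from myapp.models import Account, Task  # hypothetical models


def withdraw(account_id: int, amount: int) -> None:
    # atomic() opens the transaction; select_for_update() takes the row lock.
    with transaction.atomic():
        # A second transaction calling this blocks here until we commit.
        account = Account.objects.select_for_update().get(pk=account_id)
        account.balance -= amount
        account.save(update_fields=["balance"])
    # Lock released when the atomic() block commits (or rolls back).


def claim_next_task():
    # Task-queue variant: any available row is fine, so skip locked rows.
    with transaction.atomic():
        task = (
            Task.objects.select_for_update(skip_locked=True)
            .filter(status="pending")
            .first()
        )
        if task:
            task.status = "claimed"  # mark it while the lock is still held
            task.save(update_fields=["status"])
        return task
```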
More Relevant Posts
-
🗓️ Release Notes — April 27, 2026

🔎 Span attribute filtering across the stack
Pass attribute filters from the Python client, TypeScript client, REST API, or CLI. Type-aware (int/float/bool/str match their stored types). Filters are ANDed together. https://lnkd.in/ecJ-PQK9

🔐 Secrets settings page
Admins can add, replace, delete, and search encrypted LLM provider credentials (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.) directly in the UI—no REST calls. https://lnkd.in/eynXBPT2

🧪 Claude Opus 4.7 in the Playground
https://lnkd.in/gwuzYJTk

🧬 `trace_id` in experiment evaluators
Add a `trace_id` kwarg to any evaluator and Phoenix passes the originating trace ID for each run. Works sync/async, function- or class-based. Useful for trajectory evals. https://lnkd.in/e6_resgS

☁️ Azure Managed Identity for PostgreSQL
Connect Phoenix to Azure Database for PostgreSQL with Entra managed identity—no static DB password required. https://lnkd.in/euepx9-9

📝 CLI span notes
Add notes via `px span add-note <span-id> --text "..."`, and include notes with `--include-notes` on `px span list` / `px trace get`. https://lnkd.in/euzV22dy

📌 Full release notes
https://lnkd.in/eFvGJ_Cy
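A rough sketch of what a `trace_id`-aware evaluator might look like - the evaluator body and exact signature here are assumptions for illustration, not Phoenix's documented API:

```python
# Hypothetical function-based evaluator. Per the release note, declaring a
# `trace_id` kwarg means Phoenix passes the originating trace ID per run;
# everything else in this function is an invented example.
def contains_citation(output: str, trace_id: str = None) -> float:
    # The trace ID could be logged or used to look up the full trajectory
    # for trajectory-level evals (assumed usage, not confirmed by the note).
    score = 1.0 if "[source]" in output else 0.0
    print(f"trace {trace_id}: score={score}")
    return score
```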
-
🚀 𝗤𝘂𝗶𝘇 𝗔𝗽𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝗕𝗮𝗰𝗸𝗲𝗻𝗱 𝗔𝗣𝗜 – 𝗕𝘂𝗶𝗹𝘁 𝘄𝗶𝘁𝗵 𝗙𝗮𝘀𝘁𝗔𝗣𝗜

I recently built a backend system for a Quiz Application using modern Python backend technologies.

🔧 𝗧𝗲𝗰𝗵 𝗦𝘁𝗮𝗰𝗸:
• FastAPI (high-performance API framework)
• SQLAlchemy (ORM for database management)
• PostgreSQL (relational database)
• Pydantic (data validation & schema handling)

📌 𝗞𝗲𝘆 𝗙𝗲𝗮𝘁𝘂𝗿𝗲𝘀:
• RESTful API endpoints for questions and choices
• One-to-many relationship between Questions and Choices
• Secure database session handling with dependency injection
• Proper request validation using Pydantic models
• Clean and scalable backend architecture

🔗 𝗔𝗣𝗜 𝗘𝗻𝗱𝗽𝗼𝗶𝗻𝘁𝘀:
• GET /questions/{question_id} → Fetch a specific question
• GET /choices/{question_id} → Fetch all choices for a question
• POST /questions → Create a question with multiple choices

🧠 𝗪𝗵𝗮𝘁 𝗜 𝗟𝗲𝗮𝗿𝗻𝗲𝗱:
• How FastAPI handles async backend development efficiently
• Working with SQLAlchemy ORM for relational data modeling
• Designing clean backend architecture with separation of concerns
• Implementing database relationships and migration logic

💻 𝗚𝗶𝘁𝗛𝘂𝗯 𝗥𝗲𝗽𝗼𝘀𝗶𝘁𝗼𝗿𝘆:
👉 https://lnkd.in/dHJczetV

This project helped me strengthen my understanding of backend development, API design, and database integration.

#FastAPI #Python #BackendDevelopment #APIs #SQLAlchemy #PostgreSQL #SoftwareEngineering #LearningByBuilding
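A condensed sketch of how such a stack is typically wired together - the model names mirror the post's description, but this is illustrative, not the repo's actual code:

```python
from fastapi import Depends, FastAPI, HTTPException
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship, sessionmaker

Base = declarative_base()
engine = create_engine("sqlite:///./quiz.db")  # stand-in for PostgreSQL
SessionLocal = sessionmaker(bind=engine)


class Question(Base):
    __tablename__ = "questions"
    id = Column(Integer, primary_key=True)
    question_text = Column(String)
    choices = relationship("Choice", back_populates="question")  # one-to-many


class Choice(Base):
    __tablename__ = "choices"
    id = Column(Integer, primary_key=True)
    choice_text = Column(String)
    question_id = Column(Integer, ForeignKey("questions.id"))
    question = relationship("Question", back_populates="choices")


Base.metadata.create_all(engine)
app = FastAPI()


def get_db():
    # Session-per-request via dependency injection; always closed afterwards.
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()


@app.get("/questions/{question_id}")
def read_question(question_id: int, db: Session = Depends(get_db)):
    question = db.get(Question, question_id)
    if question is None:
        raise HTTPException(status_code=404, detail="Question not found")
    return {"id": question.id, "question_text": question.question_text}
```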
-
# NeoSQLite is now 3x faster than MongoDB on the same hardware

But the real story isn't just the numbers—it's how we got here.

## From Python Fallbacks to SQL-Native: A 12-Month Journey

When we started building NeoSQLite, we took a "get it working first" approach. Complex aggregation operations like `$in`, `$nin`, `$elemMatch`, and `$project` were handled by Python fallbacks—meaning we'd fetch ALL documents from SQLite, then filter them in Python. It worked, but it was slow.

Then we started dogfooding with **Neo-Bloggy** (our blogging platform that runs entirely on NeoSQLite instead of MongoDB). Production usage revealed the pain points real users would face.

## The SQL-Tier Revolution (v1.14.x series)

Over the last 6 releases, we systematically moved operations from Python into native SQL:

**v1.14.0** — Moved `$project` stage to SQL-tier (no more loading full documents just to project 2 fields)

**v1.14.9-10** — Fixed `$elemMatch` and `$in`/`$nin` on array fields. Instead of returning 0 results or unfiltered documents, they now use proper SQL CTE patterns with `json_each()`

**v1.14.11** — Added native regex operators (`$regexMatch`, `$regexFind`) directly in SQL tier using custom SQLite functions. Array operators got **10-100x speedup** with CTE patterns

**v1.14.12** — Fixed the "malformed JSON" edge case (because even SQLite has its quirks with `json_each()` syntax!)

## The NX-27017 Milestone

In v1.13.0, we shipped something unexpected—a **MongoDB Wire Protocol Server** that lets PyMongo connect directly to SQLite. No code changes needed. This isn't just an API clone; it's full wire protocol compatibility with 100% test parity against real MongoDB.

## What This Means

- **3x faster** than MongoDB for typical operations
- **30-300x faster** for index operations (SQLite's B-trees are fast)
- **Zero network overhead** — embedded database, embedded performance
- **Drop-in replacement** — existing PyMongo code works unchanged

## The Lesson

Building a database isn't about getting the API right. It's about getting the execution model right. Every time we pushed logic from Python down to SQL, we got closer to SQLite's raw performance while maintaining MongoDB's developer experience.

The 3x number isn't theoretical—it's measured against a real MongoDB instance in our CI pipeline, running 54 different operation categories across 10 iterations each.

Want to try it?

```bash
pip install neosqlite
```

Or check out the discussion: https://lnkd.in/gAdPAeCc
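To make the `json_each()` CTE idea concrete, here is a standalone sqlite3 sketch of how a Mongo-style `{"tags": {"$in": [...]}}` filter can be pushed into SQL. This illustrates the general pattern, not NeoSQLite's actual generated SQL:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT)")
conn.executemany(
    "INSERT INTO docs (body) VALUES (?)",
    [
        (json.dumps({"title": "a", "tags": ["python", "sql"]}),),
        (json.dumps({"title": "b", "tags": ["rust"]}),),
    ],
)

# Mongo-style filter: {"tags": {"$in": ["python", "go"]}}
# Pushed down to SQL: expand each document's array with json_each()
# inside a CTE and keep documents where any element matches.
rows = conn.execute(
    """
    WITH matches AS (
        SELECT DISTINCT d.id
        FROM docs AS d, json_each(d.body, '$.tags') AS tag
        WHERE tag.value IN ('python', 'go')
    )
    SELECT body FROM docs WHERE id IN (SELECT id FROM matches)
    """
).fetchall()

for (body,) in rows:
    print(json.loads(body)["title"])  # -> a
```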
-
Stop the Race: Solving Data Inconsistency in Concurrent Systems

Building a "working" application is easy. Building a reliable one is hard.

I recently spent time diving into the world of Concurrency and Data Integrity using Python and SQL. One of the most common (and dangerous) bugs in software is the "Race Condition"—where two processes try to update the same data at the same time, leading to "lost updates" and corrupted balances.

I simulated a high-traffic banking system to see how data inconsistency happens and, more importantly, how to stop it.

The Solution: A Two-Pronged Defense

1. Application-Level Locking: Using Python's threading.Lock to create "Mutual Exclusion" (Mutex). This ensures that only one thread can access the critical "Read-Modify-Write" logic at a time.
2. Database-Level Integrity (ACID): Moving the logic into a relational database (PostgreSQL/SQLite) to leverage Atomicity and Isolation. By using BEGIN, FOR UPDATE, and COMMIT statements, the database acts as the ultimate gatekeeper for data truth.

Key Takeaways:
- Transactions are Non-Negotiable: If it's not Atomic (all-or-nothing), it's not safe.
- The "with" Statement is a Lifesaver: Using context managers in Python ensures locks are released even if the code crashes, preventing deadlocks.
- Scalability Matters: While local locks work for one server, ACID-compliant databases are essential for distributed systems.

Check out the snippet of my GitHub Codespaces setup below! https://lnkd.in/eguenR7g

#Python #SoftwareEngineering #SQL #Database #Coding #DataIntegrity #BackendDevelopment #GitHub
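A minimal sketch of the application-level defence described above, with toy balances (illustrative only, not the linked repo's code):

```python
import threading

balance = 100
balance_lock = threading.Lock()


def withdraw(amount: int) -> None:
    global balance
    # The `with` statement guarantees the lock is released even if the
    # body raises, so a crash can't leave the lock held forever.
    with balance_lock:
        # Critical "Read-Modify-Write" section: only one thread at a time.
        current = balance
        if current >= amount:
            balance = current - amount


threads = [threading.Thread(target=withdraw, args=(10,)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # Always 0 with the lock; without it, lost updates can occur.
```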
-
Today I completed a major upgrade to my Inventory Management API — moving from an in-memory CRUD system to a fully database-backed backend using Flask and MySQL.

What started as a simple API using Python dictionaries evolved into a structured backend system with:
- Layered architecture (routes, service, storage)
- MySQL integration using mysql-connector-python
- Dynamic update handling
- Input validation and field control
- Proper HTTP status codes
- Unique constraint handling with conflict responses

The most important learning was not writing SQL or Flask routes — it was understanding how to design systems:
- Why database operations don't behave like in-memory structures
- How to safely execute queries using parameterized statements
- Why constraints (like unique fields) must be handled at both database and application level
- How small design decisions (like structure) make future changes easier

One key takeaway: Refactoring early made a huge difference. Because I separated routes, service, and storage layers, replacing the in-memory storage with MySQL was smooth instead of painful.

What I initially thought would take multiple days, I was able to complete in a single focused session — not because it was easy, but because the foundation was strong.

Next steps:
- Add authentication (user/admin roles)
- Deploy the API
- Build more backend-focused systems

This project marks a shift for me — from learning syntax to building real backend systems.

https://lnkd.in/gkPHWzPB

#BackendDevelopment #Python #Flask #MySQL #APIs #LearningByBuilding
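A small sketch of the parameterized-statement and conflict-handling pattern the post mentions, using mysql-connector-python (connection details and table schema are illustrative assumptions):

```python
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="app", password="...", database="inventory"
)
cursor = conn.cursor()

# Parameterized statement: values travel separately from the SQL text,
# so user input can never be interpreted as SQL (no injection).
try:
    cursor.execute(
        "INSERT INTO items (sku, name, quantity) VALUES (%s, %s, %s)",
        ("SKU-001", "Widget", 10),
    )
    conn.commit()
except mysql.connector.IntegrityError:
    # Unique constraint violated (duplicate sku) -> the service layer can
    # translate this into a 409 Conflict response.
    conn.rollback()
```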
-
🚀 Built my first CRUD API using FastAPI + MySQL and deployed it on Render!

🌐 Live URL: https://lnkd.in/gHKJaCXx

Today I created a REST API with full CRUD operations using Python (FastAPI) and MySQL as the database, and deployed it using Render.

What I built:
✔ GET → Read data from MySQL
✔ POST → Insert data into MySQL
✔ PUT → Update existing records
✔ DELETE → Remove records from MySQL

Deployment:
🌐 Hosted on Render

This project helped me understand how backend systems work in real-world applications—from API design to database integration and deployment.

Key learnings:
- REST API design principles
- CRUD operations with MySQL
- FastAPI backend development
- Deploying applications on Render

It's a simple project, but it reflects real-world backend architecture. Next step: add authentication and improve security.

#Python #FastAPI #MySQL #CRUD #BackendDevelopment #Render #Deployment #RESTAPI
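For the update and delete halves of CRUD, a hedged FastAPI sketch with conventional status codes (an in-memory dict stands in for MySQL; the model and routes are illustrative, not the deployed app's code):

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
items: dict[int, dict] = {}  # stand-in for the MySQL table


class ItemIn(BaseModel):
    name: str
    price: float


@app.put("/items/{item_id}")
def update_item(item_id: int, item: ItemIn):
    if item_id not in items:
        raise HTTPException(status_code=404, detail="Item not found")
    items[item_id] = item.model_dump()
    return {"id": item_id, **items[item_id]}


@app.delete("/items/{item_id}", status_code=204)
def delete_item(item_id: int):
    # 204 No Content on success; 404 if the record never existed.
    if item_id not in items:
        raise HTTPException(status_code=404, detail="Item not found")
    del items[item_id]
```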
-
⁉️ So, Claude Code's "source code" leaked, and it changed the post I was about to make, as I wanted to discuss this literally a few days ago.

First, though this potentially gives competitors a look behind the curtain, the meat of what makes Claude Code what it is are the MODELS powering it. That source code didn't leak, because there is no source code. Only data, lots of compute, and training.

What was actually interesting in this leak were the instructions guiding the model, which were fairly bare-bones all things considered. Nothing that prompt engineers weren't already doing when setting up their own systems to get Claude, or any other model, to do whatever their wrapper's goal was.

The most interesting thing, though, was the Python file containing tools for how to use BASH. This was most likely written by Claude itself, which is almost certainly what Anthropic's CEO means when he says Claude is writing itself. Even that shouldn't have surprised anyone, and I'm genuinely shocked I haven't seen more people discussing it. I actually brought this up at SONODAY on Friday (when I should have posted about this).

If you are a heavy Claude Code user, familiar with command-line tooling, and have a technical understanding of LLMs, what Claude Code was doing was fairly obvious. Every prompt triggered tools like grep to locate words and intents from your prompt. It picked out keywords from your prompt, checked them against its memory of your project (100% a markdown file it creates, as we know now, which is just language), then found the best terms to grep for, grabbed the context, combined it back with your original prompt, and output code that fit both what the project IS and what you WANT.

When I noticed this, I changed how I worked with Claude. My job as the manager was to guide it using language, because language is how it works. Specific phrases, words, and ideas, repeated consistently, because that is exactly what it was grepping for.

A real example: when building https://listen.sonoday.com, I had Claude name a component "The Stage" and add that in comments throughout the code. Whenever I wanted to work on it, I just typed "The Stage" into the prompt and we had a shared language. Other developers attempt to describe what they want from scratch each time, instead of creating a shared language for the project that helps both them and Claude in the future. Once you understood how Claude Code was actually working, you could direct it effectively!

So no, I am completely unsurprised that the leak showed heavy BASH and command-line tooling. That was there if you were observant. What does surprise me is how many people seemingly missed it, or just don't want to talk about it, favoring narratives about how AI will totally replace us.

What do you think of the leak, and will it change how you use Claude, AI models, or anything else? 🤔 I should really write this type of long-form stuff on my personal website, don't you think?
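A toy sketch of the grep-style retrieval loop the post describes - purely illustrative of that inferred flow, not Anthropic's actual code:

```python
import re
import subprocess


def gather_context(prompt: str, repo_dir: str, max_lines: int = 40) -> str:
    """Grep the repo for keywords from the prompt and collect matching lines."""
    # Naive keyword extraction: capitalized phrases (like "The Stage") and
    # longer words stand in for whatever the real system keys on.
    keywords = set(re.findall(r"[A-Z][a-zA-Z]+(?: [A-Z][a-zA-Z]+)*", prompt))
    keywords.update(re.findall(r"[a-z]\w{5,}", prompt))

    lines: list[str] = []
    for kw in keywords:
        # grep -rn: recursive search with file:line prefixes for context.
        result = subprocess.run(
            ["grep", "-rn", "--include=*.py", kw, repo_dir],
            capture_output=True, text=True,
        )
        lines.extend(result.stdout.splitlines()[:5])

    # Combine the gathered context back with the original prompt.
    context = "\n".join(lines[:max_lines])
    return f"{prompt}\n\n# Relevant code:\n{context}"
```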
-
Part 1: Architecture & Real-World System Design

Modern backend systems don't break because of scale alone — they break due to complexity. In a recent redesign, the focus was on simplifying the handling of large, dynamic form data while improving performance, maintainability, and the developer experience.

📊 The shift:
🔹 From rigid column-based schema → flexible JSONB-based storage
🔹 From heavy raw SQL → clean ORM-driven queries
🔹 From scattered APIs → structured, minimal endpoints

⚙️ Architecture Improvements
✔️ Modular design using separate Django applications
✔️ Class-based views for reusable and maintainable logic
✔️ API structuring using Django Ninja Router
✔️ Reduced the number of APIs by consolidating responses
✔️ Strong alignment with frontend for payload and contract design

📦 Data Handling Strategy
Instead of creating hundreds of columns for dynamic forms:
→ Stored complete form responses as JSON objects
→ Handled 300–500+ fields without schema changes
→ Simplified debugging with structured payloads
→ Enabled faster iteration without production risks

🔄 Processing Flow
User Input → API Validation → Store JSON (status = 0) → Async Processing (Celery + Redis) → Update status = 1 → Dashboard reflects real-time updates

🚀 Outcome
✔️ Reduced schema complexity
✔️ Improved API performance
✔️ Avoided production issues caused by raw queries
✔️ Built a scalable and flexible backend system
✔️ Delivered smoother frontend-backend integration

Security handled via JWT-based authentication with proper token flow. Still evolving with improvements in performance, validation, and system design.

#BackendEngineering #Django #Python #SystemDesign #PostgreSQL #APIs #Celery #Redis #JWT
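A compact sketch of the store-then-process flow described above (the model, task, and status values are illustrative assumptions, not the actual project code):

```python
# models.py
from django.db import models


class FormResponse(models.Model):
    # Entire dynamic form payload in one JSON column (JSONB on PostgreSQL),
    # so 300-500+ fields need no schema changes.
    payload = models.JSONField()
    status = models.IntegerField(default=0)  # 0 = received, 1 = processed


# tasks.py
from celery import shared_task


@shared_task
def process_form_response(response_id: int) -> None:
    response = FormResponse.objects.get(pk=response_id)
    # ... validate / transform the payload asynchronously ...
    response.status = 1
    response.save(update_fields=["status"])
```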
-
Most developers equate slow APIs with bad code. However, the issue often lies elsewhere.

Consider this scenario: You have a query that appears perfectly fine:

SELECT o.id, c.name
FROM orders o
JOIN customers c ON o.customer_id = c.id
WHERE o.created_at >= :start_date

Yet, the API is painfully slow. Upon checking the execution plan, you find:

NESTED LOOP
  → TABLE ACCESS FULL ORDERS
  → INDEX SCAN CUSTOMERS

At first glance, this seems acceptable. But here's the reality: for each row in orders, the database is scanning and filtering again. If orders contains 1 million rows, that's 1 million loops.

The real issue wasn't the JOIN; it was the database's execution method.

After adding an index on the filtered column:

CREATE INDEX idx_orders_date ON orders(created_at);

The execution plan changed to:

INDEX RANGE SCAN ORDERS
  → INDEX SCAN CUSTOMERS

As a result, query time dropped significantly.

Key lessons learned include:
• Nested Loop is efficient only when:
  → the outer table is small
  → the inner table is indexed
• Hash Join is preferable when:
  → both tables are large
  → there are no useful indexes
• Common performance issues stem from:
  → full table scans
  → incorrect join order
  → missing indexes
  → outdated statistics

A common mistake is this Java code:

for (Order o : orders) {
    o.getCustomer();
}

This essentially creates a nested loop at the application level (the N+1 query problem).

Final takeaway: Don't just write queries; understand how the database executes them. That's where true performance improvements occur.

If you've resolved a slow query using execution plans, sharing your experience would be valuable.

#BackendDevelopment #DatabaseOptimization #SQLPerformance #QueryOptimization #SystemDesign #SoftwareEngineering #Java #SpringBoot #APIPerformance #TechLearning #Developers #Coding #PerformanceTuning #Scalability #DistributedSystems #DataEngineering #Debugging #TechTips #LearnInPublic #EngineeringLife
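The same before/after shift is easy to reproduce at small scale. A sketch in Python using SQLite's EXPLAIN QUERY PLAN (SQLite's plan wording differs from the Oracle-style output above, but the index effect is the same idea):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        created_at TEXT
    );
    """
)

query = """
    SELECT o.id, c.name
    FROM orders o JOIN customers c ON o.customer_id = c.id
    WHERE o.created_at >= '2024-01-01'
"""


def show_plan(label: str) -> None:
    print(label)
    for row in conn.execute("EXPLAIN QUERY PLAN " + query):
        print("  ", row[-1])  # the plan detail is the last column


show_plan("Before index:")  # expect a full scan of orders
conn.execute("CREATE INDEX idx_orders_date ON orders(created_at)")
show_plan("After index:")   # expect a search of orders using the index
```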
-
✅ #PythonJourney | Day 154 — Test Suite Complete: 14 Tests, 100% Endpoint Coverage

Today: Completed the comprehensive test suite. Every API endpoint now has automated tests validating behavior, error handling, and authentication.

Key accomplishments:

✅ Full test coverage (14 tests):
• Health Check: 1 test
• Create URL: 4 tests (success, invalid format, no auth, invalid auth)
• List URLs: 3 tests (empty, with data, no auth)
• Get URL Details: 2 tests (success, not found)
• Delete URL: 2 tests (success, not found)
• Get Analytics: 2 tests (success, not found)

✅ Testing patterns implemented:
• Fixture-based setup (conftest.py)
• Isolated database per test
• Mock user creation
• Authentication validation
• Error condition testing
• Status code verification

✅ All edge cases covered:
• Valid requests return proper responses
• Invalid inputs rejected with 422
• Missing auth returns 401
• Non-existent resources return 404
• Successful deletes return 204
• Analytics properly calculated

✅ Test execution:
• 14 passed in 2.51s
• Zero flaky tests
• All database operations isolated
• Clean setup and teardown

What I learned today:
→ Comprehensive testing catches edge cases early
→ Fixtures reduce boilerplate and improve maintainability
→ Test isolation prevents hidden dependencies
→ Fast tests enable rapid development cycles
→ Good test names document expected behavior

The test suite now validates:
- ✅ API contract (request/response format)
- ✅ Authentication (API key validation)
- ✅ Authorization (users see only their data)
- ✅ Error handling (proper HTTP status codes)
- ✅ Business logic (URL creation, deletion)
- ✅ Data persistence (database operations)

This is production-grade testing:
- Every endpoint tested
- Every error case covered
- Fast feedback on code changes
- Confidence to refactor safely
- Documentation through tests

Current status:
- ✅ Backend: Production-ready
- ✅ Tests: 14/14 passing (100%)
- ✅ Code coverage: All endpoints
- ✅ API: Fully validated
- ⏳ Deployment: Next (GCP)

From zero to production-grade in 154 days. The backend is ready for real-world use.

Next: Deploy to Google Cloud Platform (GCP).

#Python #Testing #Pytest #Backend #API #Quality #SoftwareDevelopment #TDD #Production
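An illustrative slice of the fixture pattern described, assuming a FastAPI app (the 422/401 codes in the post suggest one); the module layout and the overridden dependency are assumptions, not the actual project code:

```python
# conftest.py — illustrative fixtures for an isolated per-test database
import sqlite3

import pytest
from fastapi.testclient import TestClient

from app.main import app, get_db  # hypothetical module layout


@pytest.fixture
def client(tmp_path):
    # Fresh SQLite file per test => fully isolated database state,
    # no hidden dependencies between tests.
    db = sqlite3.connect(tmp_path / "test.db")

    def override_get_db():
        yield db

    app.dependency_overrides[get_db] = override_get_db
    yield TestClient(app)
    # Clean teardown: remove the override and close the connection.
    app.dependency_overrides.clear()
    db.close()


def test_missing_auth_returns_401(client):
    # The test name documents the expected behavior.
    response = client.get("/urls")
    assert response.status_code == 401
```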