🌊 DO YOU USE A DATABASE CONNECTION POOL? Do you open a new connection on every request? A pool means reusing connections. 🌊

**Connection Pool:**
```python
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

engine = create_engine(
    "postgresql://user:pass@localhost/db",
    poolclass=QueuePool,
    pool_size=10,
    max_overflow=20,
    pool_pre_ping=True
)
```

Benefits:
✅ No connection overhead
✅ Limited connections
✅ Better performance
✅ Resource control

Settings:
```python
pool_size=10       # Always open
max_overflow=20    # Extra when needed
pool_timeout=30    # Wait time
pool_recycle=3600  # Recycle connections
```

Without a pool:
```
Request 1: Connect → Query → Disconnect
Request 2: Connect → Query → Disconnect
# 100ms overhead per request!
```

With a pool:
```
Request 1: Get from pool → Query → Return
Request 2: Get from pool → Query → Return
# < 1ms!
```

And the result?
🎯 10x faster
🎯 Less load on the DB
🎯 Controlled resources

---

Follow me for more tips! And grab the COUPONS to join us:
🔗 https://devopsforlife.io
NINJA - 20% OFF: https://lnkd.in/dchtzbWH
JEDI - 20% OFF: https://lnkd.in/d9G9R-Ew
SUPER SAIYAN - 20% OFF: https://lnkd.in/dtm2Hnj6

---

#devops #database #connectionpool #performance #postgresql #devopsforlife
SQLAlchemy Connection Pooling for PostgreSQL Performance
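One thing the snippets above configure but never show is how the pool is consumed per request. A minimal sketch, assuming SQLAlchemy 1.4+/2.0 (the DSN and query are placeholders):

```python
from sqlalchemy import create_engine, text

# Same pooled engine as in the post; QueuePool is already the
# default pool class for PostgreSQL URLs (DSN is a placeholder)
engine = create_engine(
    "postgresql://user:pass@localhost/db",
    pool_size=10,
    max_overflow=20,
    pool_pre_ping=True,
)

def handle_request():
    # connect() checks a connection out of the pool; leaving the
    # `with` block returns it to the pool instead of closing it
    with engine.connect() as conn:
        return conn.execute(text("SELECT 1")).scalar()
```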
More Relevant Posts
-
Built a production-style backend system from scratch using FastAPI and PostgreSQL.

Core components:
- REST API with FastAPI
- PostgreSQL database with SQLAlchemy ORM
- JWT-based authentication (access + refresh tokens)
- User management (signup, login, update, delete)
- File upload and download system with integrity checks (SHA-256; sketch below)

System design:
- Modular architecture (routers, models, schemas, utils)
- Separation of concerns across layers
- Database schema with relationships and constraints
- Secure password handling with bcrypt

What this phase focused on:
- structuring a backend like a real system, not a script
- handling data flow from request → database → response
- enforcing validation and consistency
- building endpoints that are actually usable and testable

GitHub: https://lnkd.in/dEEsGg3G
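The post links the repository rather than code. Purely as an illustration of the SHA-256 integrity check it mentions, a minimal sketch with FastAPI's UploadFile (the endpoint path and chunk size are assumptions, not the project's actual code):

```python
import hashlib

from fastapi import FastAPI, UploadFile

app = FastAPI()

@app.post("/files")  # hypothetical endpoint path
async def upload_file(file: UploadFile):
    sha256 = hashlib.sha256()
    # Hash in chunks so large uploads never sit fully in memory
    while chunk := await file.read(1024 * 1024):
        sha256.update(chunk)
    # Store the digest with the file; recompute it on download and
    # compare to detect corruption or tampering
    return {"filename": file.filename, "sha256": sha256.hexdigest()}
```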
-
Day 2/60: Production Infrastructure That Actually Scales

What Most Developers Do: Start with SQLite. Hardcode credentials. Skip migrations. Write blocking database calls. Wonder why it breaks at 10K users.

What I Built Today:
✅ Async SQLAlchemy 2.0 with connection pooling
✅ Docker Compose (PostgreSQL + Redis + Backend)
✅ Alembic migration system with rollback
✅ Database health checks and monitoring
✅ Multi-stage Docker builds (40% smaller images)
✅ Development scripts (init, validate, wait-for-db)
✅ 31 tests, 100% coverage on database layer

Technical Decisions:
- Async Everything: Non-blocking I/O handles 100 concurrent users on a single thread
- Connection Pooling: QueuePool (5+10) for PostgreSQL, NullPool for SQLite
- Health Checks: pg_isready with retry logic, services wait for dependencies
- Type Safety: mypy --strict passes, Mapped[T] catches bugs at compile time

Architecture Highlight: DatabaseManager singleton manages the lifecycle. Session context managers handle transactions (sketch below). Automatic rollback on errors. Zero connection leaks.

Why It Matters: Technical debt is a choice. Building for 10K users from day one means adding workers when growth comes, not rewriting the database layer.

What's Working:
```
docker-compose up -d → All services healthy
pytest → 31/31 tests passing
Database connection → ✅ Validated
```

Metrics:
- 11 new files
- 1,800 lines of production code
- 600 lines of documentation (DATABASE.md)
- 100% test coverage on new code
- 0 linting errors

Day 3 Tomorrow: Database models (User, Organization, Channel, Post). First Alembic migration. Schema design for ML features.

Buffer - Building a solid foundation for your API ecosystem. Would love to connect.

Repository: https://lnkd.in/g8pdgJvM
Medium Blog: https://lnkd.in/gRrs6WaR

#BufferIQ #BuildingInPublic #DatabaseEngineering #Docker #Python #PostgreSQL #SQLAlchemy #SoftwareArchitecture #Buffer
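A rough sketch of the session pattern described above, assuming SQLAlchemy 2.0 with asyncpg. The pool numbers mirror the post's "QueuePool (5+10)"; the DSN and names are invented, not the repository's actual code:

```python
from contextlib import asynccontextmanager

from sqlalchemy.ext.asyncio import async_sessionmaker, create_async_engine

# Hypothetical DSN for illustration
engine = create_async_engine(
    "postgresql+asyncpg://user:pass@localhost/db",
    pool_size=5,       # the "5" in QueuePool (5+10)
    max_overflow=10,   # the "+10"
    pool_pre_ping=True,
)
SessionLocal = async_sessionmaker(engine, expire_on_commit=False)

@asynccontextmanager
async def get_session():
    # Commit on success, roll back on any error, and always return
    # the connection to the pool: no leaks
    async with SessionLocal() as session:
        try:
            yield session
            await session.commit()
        except Exception:
            await session.rollback()
            raise
```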
-
Today I completed a major upgrade to my Inventory Management API — moving from an in-memory CRUD system to a fully database-backed backend using Flask and MySQL.

What started as a simple API using Python dictionaries evolved into a structured backend system with:
- Layered architecture (routes, service, storage)
- MySQL integration using mysql-connector-python
- Dynamic update handling
- Input validation and field control
- Proper HTTP status codes
- Unique constraint handling with conflict responses

The most important learning was not writing SQL or Flask routes — it was understanding how to design systems:
- Why database operations don’t behave like in-memory structures
- How to safely execute queries using parameterized statements (sketch below)
- Why constraints (like unique fields) must be handled at both database and application level
- How small design decisions (like structure) make future changes easier

One key takeaway: Refactoring early made a huge difference. Because I separated routes, service, and storage layers, replacing the in-memory storage with MySQL was smooth instead of painful. What I initially thought would take multiple days, I was able to complete in a single focused session — not because it was easy, but because the foundation was strong.

Next steps:
- Add authentication (user/admin roles)
- Deploy the API
- Build more backend-focused systems

This project marks a shift for me — from learning syntax to building real backend systems.

https://lnkd.in/gkPHWzPB

#BackendDevelopment #Python #Flask #MySQL #APIs #LearningByBuilding
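As an illustration of two of the points above (parameterized statements, unique-constraint conflicts), a hedged sketch with mysql-connector-python; the table, columns, and error handling are made up for the example:

```python
import mysql.connector
from mysql.connector import errorcode

def insert_item(conn, name: str, quantity: int):
    cursor = conn.cursor()
    try:
        # Parameterized statement: the driver escapes the values,
        # so user input cannot inject SQL
        cursor.execute(
            "INSERT INTO items (name, quantity) VALUES (%s, %s)",
            (name, quantity),
        )
        conn.commit()
    except mysql.connector.IntegrityError as err:
        conn.rollback()
        if err.errno == errorcode.ER_DUP_ENTRY:
            # Unique constraint hit at the DB level; the service
            # layer can translate this into an HTTP 409 Conflict
            raise ValueError(f"item {name!r} already exists") from err
        raise
    finally:
        cursor.close()
```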
-
LEETCODE

I’m currently deep in the trenches of the SQL 50 challenge, and let’s just say... it’s a mood.

One minute, I’m staring at a "Wrong Answer" screen, questioning if I even know what a JOIN is anymore. The logic seems perfect, the syntax is clean, but the test case just won't budge. It’s frustrating, it’s humbling, and it’s a total brain-burn.

But then... it happens. You tweak one subquery, add a HAVING clause, or fix a DISTINCT count, and you hit that Submit button. Seeing that green "Accepted" text pop up is a massive hit of dopamine. There’s no feeling quite like beating 78% of other users with a query you just built from scratch. It makes every "Wrong Answer" worth it.

What I’m learning through the grind:
- Edge Cases are everything: The real world (and LeetCode) is messy. Learning to account for NULLs and duplicates is where the real skill is.
- Subqueries are powerful: Today’s win involved using a subquery within a HAVING clause to match counts—it’s like a puzzle fitting together (sketch below).
- Consistency > Speed: I’m pushing for all 50 questions, but I’m making sure I actually understand the "why" behind every solution.

Progress isn't a straight line; it’s a series of red errors until you get that one green win. Getting back to the grind now. Only a few more to go!

#LeetCode #SQL #DataAnalysis #100DaysOfCode #CodingLife #DopamineHit #ProblemSolving #TechJourney #MySQL
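The post doesn't show the winning query, but the "subquery inside HAVING to match counts" pattern looks roughly like this self-contained sqlite3 sketch (table and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT);
    INSERT INTO orders (customer)
    VALUES ('ana'), ('ana'), ('bo'), ('ana'), ('bo'), ('cy');
""")

# Customers whose order count equals the maximum count: the HAVING
# clause compares each group's COUNT(*) against a subquery
query = """
    SELECT customer, COUNT(*) AS order_count
    FROM orders
    GROUP BY customer
    HAVING COUNT(*) = (
        SELECT MAX(cnt)
        FROM (SELECT COUNT(*) AS cnt FROM orders GROUP BY customer) AS t
    )
"""
print(conn.execute(query).fetchall())  # [('ana', 3)]
```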
-
Most developers optimize SQL queries by guessing. I used to do the same: tweak an index here, rewrite a join there, and hope for the best. Then I started actually reading what the database was telling me.

EXPLAIN ANALYZE changed how I debug slow queries entirely. Here's what it helps you understand:
• How your query is actually being executed
• Which indexes are (or aren't) being used
• Where the time is really being spent
• Why performance silently drops under load

The workflow I now follow every time:
1️⃣ Run the query — note the response time
2️⃣ Run EXPLAIN — understand the execution plan
3️⃣ Add indexes or adjust joins based on what you see
4️⃣ Run EXPLAIN ANALYZE — confirm the improvement is real (sketch below)

A few things that used to trip me up in MySQL's EXPLAIN output:
→ type: ALL means a full table scan — almost always a red flag
→ key: NULL means no index is being used
→ rows: 500000 means the DB is scanning way more than it should

Database optimization isn't about rewriting everything from scratch. It's about understanding the execution plan and fixing the right thing.

I put together a quick reference guide on how different databases (PostgreSQL, MySQL) support EXPLAIN ANALYZE. Save it for your next debugging session.

#SQL #PostgreSQL #MySQL #BackendDevelopment #DatabaseOptimization #SoftwareEngineering #Python #Django #FastAPI
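A minimal sketch of step 4️⃣ from Python, assuming psycopg2 against PostgreSQL (the DSN, table, and query are placeholders):

```python
import psycopg2

# Hypothetical connection string for illustration
conn = psycopg2.connect("dbname=shop user=app")

with conn.cursor() as cur:
    # EXPLAIN ANALYZE actually executes the query and reports
    # real per-node timings, not just the planner's estimates
    cur.execute(
        "EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = %s",
        (42,),
    )
    for (line,) in cur.fetchall():
        print(line)  # look for "Seq Scan" vs "Index Scan" and actual times
```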
-
Your ORM is lying to you about performance.

Every abstraction layer adds cost. Sequelize and TypeORM generate queries you never wrote and often never inspected. When your API slows down, the ORM is usually the first suspect - but most developers never look past it.

Switch to pg and run raw SQL. Then use EXPLAIN ANALYZE directly from Node.js to see exactly what Postgres is doing. Here is a quick example:

```js
const { rows } = await pool.query(`
  EXPLAIN ANALYZE
  SELECT u.id, u.name, COUNT(o.id) AS order_count
  FROM users u
  JOIN orders o ON o.user_id = u.id
  GROUP BY u.id
`);
rows.forEach(row => console.log(row['QUERY PLAN']));
```

This gives you real execution time, seq scans, index hits - everything your ORM hides from you.

Practical takeaway: run EXPLAIN ANALYZE on your five most-called endpoints this week. You will likely find at least one full table scan that a single index can eliminate.

Have you ever caught a serious performance issue that your ORM was silently causing?

#NodeJS #PostgreSQL #WebDevelopment #BackendEngineering #DatabasePerformance #SQLOptimization
-
Learning SQL separately from Django — and it hits different when you see the raw logic behind the ORM.

Today's practice:
→ INSERT INTO — 9 records into a Books table
→ SELECT * — full table query
→ SELECT Price FROM Books — targeted column fetch

No framework. No abstraction. Just SQL talking directly to the database.

Foundations matter. Building mine.

#SQL #MySQL #BackendDevelopment #LearningInPublic
-
I just shipped something I'm really proud of. 🚀

Semicolon — an open-source SQL formatter that turns messy, unreadable queries into clean, structured code in seconds.

You don’t need to decode a wall of SQL just to find where the JOIN stops and the WHERE starts. You don’t need to spend minutes formatting it perfectly. Semicolon handles it for you instantly. Just install it, point it at your SQL, and you’re done.

→ pip install semicolonfmt
→ semicolon query.sql (format a file)
→ semicolon . (format everything in a directory)

What it does:
✅ Formats messy SQL into clean, consistent, scannable queries
✅ Works on single files or entire directories
✅ CI/CD check mode so unformatted SQL never slips into prod
✅ Pre-commit hook support
✅ Zero config. Just run it.

It's open source, it's free, and it's just getting started.
⭐ If you like it, give it a star on GitHub
🔧 Test it, push it to the limits, and open a PR if you spot something off
🔗 https://lnkd.in/dmYG-t4c

Clean SQL is not a nice-to-have. It's a craft. Let's treat it like one. 💪

#OpenSource #SQL #PostgreSQL #Python #DevTools #BuildingInPublic #CleanCode
-
"That's one small step for a man, one giant leap for my backend." 🚀 Today I migrated my Habitual API from SQLite to PostgreSQL. It worked locally - and immediately broke in CI. The technical blocker 🛠️ The heatmap endpoint generates a date range for the last 30 days using a recursive CTE. In SQLite this worked: func.date(func.now(), "-29 days") PostgreSQL doesn't support this modifier syntax. One line, two hours of debugging. I ended up moving the date generation to Python instead of SQL. Cleaner and more portable. The CI struggle 🏗️ Seven commits. Six failures. All named "Little update for CI" - at that point naming stopped mattering (see screenshot 👇). The root cause: my local environment had pinned versions that weren't reflected in requirements.txt. CI pulled newer packages - and everything fell apart. After a few iterations: ✅ 271 tests passed ✅ PostgreSQL running in CI ✅ migrations applied on every push ✅ clean pipeline Next step: Docker and CD. #python #fastapi #postgresql #githubactions #backend
-
Fundamental database tools to protect your business logic in Django: https://lnkd.in/g8Ru7-SA My first blog post at Lincoln Loop! I'm looking forward to sharing many more.