5 performance settings that actually move the needle.

Most database tuning conversations get lost in dozens of knobs. The reality is that a small handful of settings drive the majority of real-world performance outcomes. Wayne Leutwyler at Percona made this point clearly in February, and what we see in production matches it.

1. Buffer pool size. If your working set does not fit in memory, no amount of query tuning will save you. For MySQL, that is innodb_buffer_pool_size. For PostgreSQL, shared_buffers. Get this wrong, and everything else is noise.

2. Redo log size. Too small, and the database checkpoints constantly under write pressure. Too large, and recovery times grow. Size it properly, and write-heavy workloads get smoother and steadier.

3. Connection limits and pooling. Raising max_connections to 5,000 does not make the database run faster. It gives you a slower one. EDB benchmarks on enterprise hardware showed peak performance between 300 and 500 concurrent connections, with sharp degradation past 700. A pooler like PgBouncer can serve thousands of clients from 30 to 50 real backend connections.

4. Flush method. How the database writes to disk, and whether it bypasses the OS cache, changes I/O behavior more than most teams realize. This is one to test, not assume.

5. Thread and cache sizing. Small numbers that quietly tax every connection. Wrong values turn into latency spikes nobody can explain.

A sketch of what the first few of these look like as actual statements follows below.

At Fortified Data, this is the work that turns a database from a quiet tax on the business into a stable foundation. The wins are rarely glamorous. They are quietly responsible for keeping the business running.

What is one tuning change that delivered a bigger performance gain than your team expected?

#DatabasePerformance #MySQL #PostgreSQL #DBA #FortifiedData
5 key database performance settings for real-world results
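For readers who want to see the first few knobs as concrete statements, here is a minimal sketch. The values are illustrative assumptions, not recommendations; sizing depends on RAM, workload, and recovery targets, and some of these require a restart rather than a reload.

```sql
-- PostgreSQL: ALTER SYSTEM writes postgresql.auto.conf.
-- shared_buffers and max_connections only take effect after a restart.
ALTER SYSTEM SET shared_buffers = '16GB';        -- working set in memory (setting #1)
ALTER SYSTEM SET max_wal_size = '8GB';           -- WAL/redo sizing (setting #2)
ALTER SYSTEM SET max_connections = 300;          -- keep low; a pooler fans clients in (setting #3)

-- MySQL 8.x equivalents. SET PERSIST survives restarts;
-- SET PERSIST_ONLY is for variables that are not dynamic.
SET PERSIST innodb_buffer_pool_size = 17179869184;   -- 16 GiB buffer pool
SET PERSIST innodb_redo_log_capacity = 8589934592;   -- 8 GiB redo (MySQL 8.0.30+)
SET PERSIST_ONLY innodb_flush_method = 'O_DIRECT';   -- setting #4: test, don't assume
```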
More Relevant Posts
What if thousands of UPDATEs and DELETEs are hitting your table every second? Your data looks correct. But your table size keeps growing. 📈

This is called Table Bloat. And it's a direct cost of MVCC.

Here's why it happens:
→ PostgreSQL never overwrites a row on UPDATE or DELETE
→ It always creates a new version of the row instead
→ The old version stays behind as a dead tuple
→ At thousands of writes per second, dead tuples pile up fast

The result?
→ Tables grow even when actual data hasn't changed
→ Queries slow down scanning through dead tuples
→ Indexes keep pointing to rows that no longer exist

So what's the solution? VACUUM. 🧹

What VACUUM does:
→ Scans the table for dead tuples
→ Removes them and marks space as reusable
→ Updates the visibility map so queries stay fast
→ Prevents transaction ID wraparound (ignore this and PostgreSQL will shut itself down) 🚨

One thing VACUUM does NOT do:
→ It does not shrink the file size on disk
→ For that you need VACUUM FULL, but it locks the table, so use it carefully

And the best part? You don't have to run it manually. PostgreSQL's autovacuum does this in the background automatically.

But autovacuum isn't magic. On high-write tables it can fall behind, and tuning it for your workload is where the real DBA work begins (see the sketch below).

MVCC gave PostgreSQL speed and clean isolation. VACUUM is what keeps that trade-off from breaking you. 💡

#PostgreSQL #Database #DBA #VACUUM #TableBloat #DataEngineering
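If you want to check whether autovacuum is keeping up right now, here is a minimal sketch; the orders table and the thresholds are hypothetical examples, not tuning advice.

```sql
-- How far behind is vacuum? Dead tuples per table, worst first.
SELECT relname, n_dead_tup, n_live_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;

-- Make autovacuum fire sooner on one high-write table.
-- (Defaults wait for ~20% of rows to be dead; 1% here is illustrative.)
ALTER TABLE orders SET (
  autovacuum_vacuum_scale_factor = 0.01,
  autovacuum_vacuum_threshold    = 1000
);

-- Manual pass if it has already fallen behind (does not block writes).
VACUUM (VERBOSE, ANALYZE) orders;
```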
The database ran fine until the checkpoint hit.

A team reached out because their PostgreSQL queries would slow down at predictable intervals. Not random spikes. A pattern. Fast, then slow, then fast again. Like a heartbeat they could not explain.

The culprit was something most teams never touch. Checkpoints.

PostgreSQL uses checkpoints to ensure data consistency. After each checkpoint, the first time a page is modified, PostgreSQL writes the full content of that page to the write-ahead log. These are called full-page image writes, and they create massive I/O spikes immediately after every checkpoint cycle.

Under a steady workload, you get a saw-tooth performance pattern. Queries are fast coming out of a checkpoint, then progressively degrade as the next one builds, then spike again when it fires.

Here is what makes this tricky. Default checkpoint settings are designed to be safe and generic. They are not designed for production workloads. Most teams deploy PostgreSQL, confirm it works, and never revisit those settings.

The fix is not complicated. Tuning checkpoint timing and spacing evenly distributes the I/O load, eliminates the saw-tooth pattern, and significantly reduces WAL overhead (the usual knobs are sketched below). The performance gains are immediate and measurable.

Think of it like a water heater that cycles on and off. Every time it kicks on, it draws a surge of energy. A steady, modulated system uses less energy and delivers consistent output.

Here is what our customers tell us. The performance problems they thought were hardware limitations were actually configuration defaults nobody questioned.

Have you ever traced a recurring performance issue back to a setting you assumed was already optimized?

#PostgreSQL #DatabasePerformance #QueryOptimization #DatabaseTuning #FortifiedData
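For reference, these are the usual checkpoint knobs. The values below are illustrative assumptions, not a recommendation for any particular workload.

```sql
-- Spread checkpoint I/O instead of letting it spike.
ALTER SYSTEM SET checkpoint_timeout = '15min';        -- default 5min fires far more often
ALTER SYSTEM SET max_wal_size = '8GB';                -- headroom so WAL pressure doesn't force early checkpoints
ALTER SYSTEM SET checkpoint_completion_target = 0.9;  -- spread writes across 90% of the interval
SELECT pg_reload_conf();                              -- all three are reloadable, no restart needed

-- Sanity check afterwards: WAL-pressure ("requested") checkpoints
-- should trend toward zero, leaving only timed ones.
SELECT * FROM pg_stat_bgwriter;        -- through PostgreSQL 16
-- SELECT * FROM pg_stat_checkpointer; -- PostgreSQL 17+
```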
One index change can cost 1.5 hours of downtime.

It wasn't due to a poor-performing database. It wasn't high traffic.

👉 It was an existing index that was not being used.

Details:
• Soft deletes were added
• A partial index was added
• An old index was deleted

Everything seemed fine. But:

👉 Application query parameters did not match the index

As a result:
• The planner stopped using the new index
• Full table scans began
• Connections skyrocketed
• The system ground to a halt

👉 Same database. Same data. Same queries. But performance suffered immensely.

This is just one of the biggest underappreciated dangers in production:

👉 Disconnection between DB changes and application logic

The terrifying thing about this is:

👉 You won't realize it until it's too late

At scale, millisecond queries turn into seconds, and seconds turn into downtime.

This is something we encounter on a regular basis:
👉 Indexes exist but don't perform well
👉 Queries work but fail at scaling

This can't be caught by monitoring alone. It will not be solved by simply scaling your system.

👉 Database ownership is critical here

At AG Data, we ensure that database changes, code changes, and queries are synchronized before production rollout. A sketch of this failure mode follows below.

#databases #postgresql #mysql #databaseperformance #systemdesign #highload #scalability #devops
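A hypothetical reconstruction of that failure mode (the table, columns, and index names are mine, not from the post): the partial index's predicate and the application's predicate describe "not deleted" in two different ways, so the planner cannot use the index.

```sql
-- Soft deletes arrive; a partial index replaces the old full index.
CREATE INDEX idx_orders_customer_live
    ON orders (customer_id)
    WHERE deleted_at IS NULL;

DROP INDEX idx_orders_customer;   -- the old, "redundant" index

-- The application, however, filters soft deletes like this:
SELECT * FROM orders
WHERE customer_id = 42
  AND is_deleted = false;         -- doesn't match "deleted_at IS NULL"

-- The planner can't prove the partial index covers that predicate,
-- so it falls back to a sequential scan. EXPLAIN would have caught it:
EXPLAIN SELECT * FROM orders
WHERE customer_id = 42 AND is_deleted = false;
```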
Ever used PostgreSQL and just assumed it's... one thing? Install it, connect, write queries, done. That's what I thought too!

This week, while working on a database migration service at Yugabyte, I pulled on a thread that unraveled something I'd taken for granted for years. PostgreSQL isn't a single program. It's a whole client-server system hiding in plain sight.

• There's a server process that holds your data.
• There are standalone client tools (psql, pg_dump, pg_restore) that are entirely separate binaries. They don't even need the server installed on the same machine.
• And here's the part that really got me: PostgreSQL uses its own tables to keep track of all your tables. It's turtles all the way down.

Once you see this clearly, questions you never thought to ask suddenly have fascinating answers:
➡️ Why can pg_dump export a database running on a server across the internet?
➡️ Why do migration tools need client tools at specific versions but never need the PostgreSQL server?
➡️ pg_dump and psql speak the exact same protocol to the same server, so what actually makes them different?
and many more...

I wrote a detailed blog walking through all of this: https://lnkd.in/gca4jwDY

If you've only ever touched PostgreSQL through an ORM or a connection string, this might change how you think about what's actually running behind your queries.
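That "tables that track your tables" line refers to the system catalog, and you can poke at it with plain SQL; nothing here assumes anything beyond a stock PostgreSQL install.

```sql
-- The catalog is itself a set of tables. pg_class has one row for
-- every table, index, and view in the database, catalogs included.
SELECT relname, relkind
FROM pg_catalog.pg_class
WHERE relname IN ('pg_class', 'pg_attribute');

-- Turtles all the way down: pg_class contains a row describing pg_class.
```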
This post is for developers who have an intuitive knowledge of what database indexes are, but don't necessarily know how they work internally, what tradeoffs they carry, what types of indexes Postgres provides, and how you can use some of its more advanced options to optimize them for your use case.

Indexes are special database objects primarily designed to increase the speed of data access by allowing the database to read less data from disk. They can also be used to enforce constraints like primary keys, unique keys, and exclusion constraints.

https://lnkd.in/ed-bJyfS

---

More like this? Subscribe 👉 https://faun.dev/join
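A two-statement illustration of both roles that paragraph mentions, faster reads and constraint enforcement; the table and column names are hypothetical.

```sql
-- Faster reads: a B-tree index so lookups by email touch a few
-- pages instead of scanning the whole table.
CREATE INDEX idx_users_email ON users (email);

-- Constraint enforcement: a UNIQUE constraint is backed by a
-- unique index that the database maintains for you.
ALTER TABLE users ADD CONSTRAINT users_email_unique UNIQUE (email);
```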
Last week I wrote about how more teams are talking about moving from SQL Server to PostgreSQL. Since then, I've had a bunch of messages and sales calls basically saying the same thing: "Licensing is killing us. Should we just switch to Postgres?"

I get it. SQL Server licensing is expensive and Postgres is "free." But "free" isn't free. I cannot say this often enough! It just moves the cost into places people don't budget for, and it's why cross-platform database migrations go sideways more often than expected.

Here's what gets underestimated (almost every time):

1) Compatibility debt
Data types don't map cleanly. Collation/encoding surprises. Edge cases that only show up under real workloads. Stored procedures rewritten. T-SQL doesn't translate 1:1 to Postgres syntax or procedural logic (a small taste of that in the sketch below).

2) Application dependencies
Connection strings are the easy part. Query behavior differences, transaction semantics, error handling, and driver quirks show up later... usually in production.

3) Team expertise
Your team knows SQL Server under pressure. Postgres is a different product with different tooling, monitoring, and failure modes. The learning curve is real.

4) Total cost of ownership
Yes, the license is free. But you still pay for expertise, retraining, rebuilding monitoring, regression testing, and closing feature gaps with tooling or services.

I'm not anti-Postgres. It's a great database. I'm anti "we'll save money" migrations that turn into multi-quarter projects and don't come out clearly ahead on total cost for years.

If licensing is the driver, there's often a better path before you change platforms:
• Right-size your SQL Server estate
• Move Enterprise → Standard where you qualify
• Reduce core count through performance tuning
• Consolidate instances where it makes sense

Sometimes the cheapest migration is the one you don't do.
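To make "compatibility debt" concrete, one tiny hedged example of the kind of rewrite that repeats across hundreds of queries and procedures; the query itself is my illustration, not the author's.

```sql
-- T-SQL (SQL Server)
SELECT TOP 10 id, created_at = GETDATE(), ISNULL(note, '') AS note
FROM dbo.orders
ORDER BY id DESC;

-- PostgreSQL equivalent: different limit syntax, functions, schema habits
SELECT id, now() AS created_at, COALESCE(note, '') AS note
FROM orders
ORDER BY id DESC
LIMIT 10;
```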
⚡ Indexing Strategy I Used After SQL Server → PostgreSQL Migration

After migration, one major improvement area I worked on was index optimization in PostgreSQL. Because even a perfectly migrated database can perform poorly without the right indexing strategy.

🔴 Problem
Post-migration observations:
• Some queries became slower
• Existing indexes from SQL Server were not fully effective
• Different query planner behavior in PostgreSQL

⚡ Approach I Followed
✔ Analyzed frequently used queries: focused on WHERE, JOIN, ORDER BY columns
✔ Validated existing indexes: removed unused / redundant indexes (see the sketch after this post)
✔ Created optimized indexes: single-column indexes for filters, multi-column indexes for joins
✔ Used PostgreSQL-specific features: partial indexes (for filtered data), index-only scans (where possible)
✔ Balanced indexing: avoided over-indexing (write performance impact)

🔄 Index Optimization Flow
Identify Slow Queries → Analyze Execution Plan → Check Existing Indexes → Create / Modify Index → Test Performance → Deploy

📊 Result
✅ Faster query execution
✅ Reduced load on database
✅ Improved overall performance

💡 Key Learning: Indexes should be designed based on query patterns, not blindly migrated from source systems.

#PostgreSQL #Indexing #PerformanceTuning #SQLServer #DataEngineering #SQL
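A minimal sketch of the "validate existing indexes" step described above; the catalog views are standard PostgreSQL, while the table and index in the second half are hypothetical.

```sql
-- Unused-index candidates: never scanned since stats were last reset.
SELECT s.indexrelname, s.relname,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS size
FROM pg_stat_user_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0
  AND NOT i.indisunique           -- keep constraint-backing indexes
ORDER BY pg_relation_size(s.indexrelid) DESC;

-- PostgreSQL-specific replacement: a partial index covering only the
-- rows the hot query actually filters on (hypothetical table/columns).
CREATE INDEX idx_invoices_open ON invoices (customer_id, due_date)
WHERE status = 'open';
```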
Postgres turned 30 last year, and it seems Postgres is having a moment. And I don't think it's because Postgres got better. I believe the biggest tailwind is that everything else got more complicated.

At some point, teams started counting the real cost of specialized databases:
- sync jobs
- operational overhead
- seven things that can break

Then they realized Postgres was already doing most of it just fine. And with the ecosystem built around it, it's now turning into the obvious answer again.

https://lnkd.in/gHbakyPh
Came across a really insightful Medium post on handling locked records in SQL and using techniques like SKIP LOCKED to manage concurrency more efficiently. It’s a great read if you’re working with databases and want to better understand how to avoid blocking issues and improve performance in high-traffic systems. https://lnkd.in/ezjZaypp #SQL #BackendDevelopment #Database #TechRead
Master Concurrent Queues with SKIP LOCKED: Boost Your System's Performance (gautam-shubham.medium.com)
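For anyone who wants the gist before clicking through, the core pattern looks roughly like this; a minimal sketch, and the jobs table with its columns is my assumption, not taken from the article.

```sql
-- Each worker claims one pending job. SKIP LOCKED makes competing
-- workers skip rows another transaction has locked instead of blocking.
BEGIN;

SELECT id, payload
FROM jobs
WHERE status = 'pending'
ORDER BY id
LIMIT 1
FOR UPDATE SKIP LOCKED;

-- ...process the job, then mark it done using the id returned above...
UPDATE jobs SET status = 'done' WHERE id = 1;  -- illustrative id

COMMIT;
```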
MySQL does NOT support downgrading from 8.4 back to 8.0 using the data directory.

Let that sink in. If your upgrade goes wrong and you didn't plan your rollback BEFORE starting, you'll have to restore from backup. On a 10TB database, that's 10-20+ hours of downtime.

Part 4 of our MySQL 8.0 → 8.4 LTS Upgrade Guide is out: Rollback Strategy & Post-Upgrade Validation.

Here's what most teams get wrong:
❌ They assume rollback = downgrade
❌ They try reverse replication (8.4 → 8.0), which is not officially supported and risks silent data corruption
❌ They skip post-upgrade ANALYZE TABLE, then wonder why queries are suddenly slow

Here's what actually works:
✅ Replication-based rollback: redirect traffic back to 8.0, zero data loss, instant
✅ Logical backup restore: your safety net for in-place upgrades
✅ A 30-minute post-upgrade validation checklist: version check, replication health, auth, optimizer stats (sketched below)

One thing I always stress: any writes on MySQL 8.4 after cutover are LOST if you roll back to 8.0. Validate fast. Rollback faster if needed.

👇 Read Part 4 here: https://lnkd.in/exJ6-mVY

#MySQL #DBA #Database #MySQLUpgrade #DatabaseReliability #ReliaDB
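A hedged sketch of the first few minutes of that post-upgrade validation; the statements are standard MySQL, but the full checklist belongs to the linked guide, and the table names at the end are illustrative.

```sql
-- 1. Confirm the server really is the version you expect.
SELECT VERSION();

-- 2. Replication health on each replica.
SHOW REPLICA STATUS\G
-- Expect Replica_IO_Running = Yes, Replica_SQL_Running = Yes, low lag.

-- 3. Authentication: mysql_native_password is disabled by default in 8.4.
--    Find accounts that could suddenly fail to connect.
SELECT user, host, plugin
FROM mysql.user
WHERE plugin = 'mysql_native_password';

-- 4. Refresh optimizer statistics so plans don't regress.
ANALYZE TABLE orders, customers;  -- per important table
```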