PostgreSQL micro-blog: Check your Cache Hit Ratio
Ever wondered what your cache hit ratio is for your PostgreSQL database? Run this quick check:
SELECT datname, blks_hit, blks_read,
       ROUND(100.0 * blks_hit / NULLIF(blks_hit + blks_read, 0), 2) AS hit_ratio
FROM pg_stat_database
WHERE datname = 'bluebox';
As a general rule, this value should be greater than 99% for well-performing systems. If it's significantly lower, it may indicate:
• insufficient shared_buffers
• queries scanning large portions of tables
• missing or inefficient indexes
• a workload larger than available memory
Memory is orders of magnitude faster than disk, so keeping frequently accessed data in cache is critical for performance.
#PostgreSQL #DatabasePerformance #DBA #PerformanceTuning #DatabaseSpa
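If the database-wide ratio looks low, the next step is to narrow it down per table. A minimal sketch using the standard pg_statio_user_tables view (the ordering and LIMIT are illustrative choices, not part of the original post):
-- Ten tables doing the most physical reads, with their individual hit ratios.
SELECT relname,
       heap_blks_hit,
       heap_blks_read,
       ROUND(100.0 * heap_blks_hit / NULLIF(heap_blks_hit + heap_blks_read, 0), 2) AS hit_ratio
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC
LIMIT 10;
A table that shows up here with a low ratio is a good candidate for the index and workload checks listed above.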
More Relevant Posts
Your PostgreSQL database is probably 10x slower than it needs to be. Not because PostgreSQL is slow, but because of 3 query patterns almost every team writes:
1. SELECT * when you need 3 columns
2. Missing indexes on foreign keys (yes, PostgreSQL doesn't auto-create them)
3. N+1 queries hiding behind your ORM
The fix for each takes under 15 minutes. The performance gain is usually 5-20x.
We put together the 80/20 guide: the 20% of tuning knowledge that solves 80% of real-world performance issues for small teams. No theory. Just patterns you can apply today: https://lnkd.in/eXYSMk9v
#PostgreSQL #DatabasePerformance #BackendEngineering #DevTips
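Pattern 2 is the easiest to detect mechanically. A minimal sketch of a catalog query that lists foreign keys with no index leading on the referencing column (assumption: single-column foreign keys; composite keys need a full conkey/indkey array comparison):
-- Foreign-key columns whose table has no index starting with that column.
SELECT c.conrelid::regclass AS table_name,
       a.attname            AS fk_column
FROM pg_constraint c
JOIN pg_attribute a
  ON a.attrelid = c.conrelid
 AND a.attnum   = c.conkey[1]
WHERE c.contype = 'f'
  AND NOT EXISTS (
        SELECT 1
        FROM pg_index i
        WHERE i.indrelid = c.conrelid
          AND i.indkey[0] = c.conkey[1]   -- first index column matches the FK column
      );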
Not every database that calls itself multi-model means the same thing by it. This post compares PostgreSQL + extensions, SurrealDB, ArcadeDB, Couchbase, ArangoDB, and OrientDB, and draws a stricter line around what counts as a true multi-model database. https://lnkd.in/geD8cbSd
Writing solid database logic in PostgreSQL doesn't have to be a headache. Here's a foundation guide on PL/pgSQL from DatabaseGyaan. It cleanly breaks down essentials like block structure, dollar quoting, and function volatility. If you're building robust backend functions or migrating databases, mastering these core basics is an absolute must!
🔗 Full module & more: https://lnkd.in/dNZPHmej
#PostgreSQL #Database #PLpgSQL
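To see all three essentials in one place, here is a minimal sketch (the function and the order_items table are hypothetical, not from the guide): a dollar-quoted PL/pgSQL body with a DECLARE section, marked STABLE because it only reads data.
-- Hypothetical example of block structure, dollar quoting, and volatility.
CREATE OR REPLACE FUNCTION order_total(p_order_id bigint)
RETURNS numeric
LANGUAGE plpgsql
STABLE                        -- reads data but never modifies it
AS $$
DECLARE
    v_total numeric := 0;     -- declaration section of the block
BEGIN
    SELECT COALESCE(SUM(quantity * unit_price), 0)
      INTO v_total
      FROM order_items        -- hypothetical table
     WHERE order_id = p_order_id;
    RETURN v_total;
END;
$$;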
💡 PostgreSQL Insight from Production: Checkpoints & Hidden I/O Spikes
One of the most underrated causes of performance issues in PostgreSQL is checkpoint behavior. In one of our production environments, we observed sudden query slowdowns and disk I/O spikes. On deeper analysis, the root cause was frequent, aggressive checkpoints forcing large amounts of dirty data to flush to disk at once.
🔍 What helped:
• Increasing max_wal_size to reduce checkpoint frequency
• Tuning checkpoint_completion_target (0.8–0.9) to spread writes more evenly
• Adjusting checkpoint_timeout to avoid unnecessary checkpoints
📈 Result: smoother I/O, reduced latency spikes, and improved overall database stability.
Small parameter tuning, but a big impact in production. This is something you don't usually notice until you work on real systems under load.
#PostgreSQL #PostgresDBA #PerformanceTuning #DatabaseAdministration #HighAvailability #DBA #TechLearning #ProductionSupport
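For reference, a minimal sketch of applying these knobs with ALTER SYSTEM (the values are illustrative, not recommendations; size max_wal_size against your own write volume and check the checkpoints_timed vs. checkpoints_req counters in pg_stat_bgwriter, or pg_stat_checkpointer on PostgreSQL 17+):
-- Illustrative values only; all three settings are reloadable without a restart.
ALTER SYSTEM SET max_wal_size = '8GB';                -- fewer forced checkpoints
ALTER SYSTEM SET checkpoint_timeout = '15min';        -- spacing between timed checkpoints
ALTER SYSTEM SET checkpoint_completion_target = 0.9;  -- spread writes across the interval
SELECT pg_reload_conf();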
I handled a PostgreSQL production issue where the primary server's storage became full due to an inactive replication slot.
In PostgreSQL, replication slots retain WAL files to ensure replicas don't miss data. However, if a replica goes offline and the slot remains, WAL files continue accumulating in the pg_wal directory, eventually exhausting disk space.
I identified the issue using:
SELECT slot_name, active, restart_lsn FROM pg_replication_slots;
The slot was inactive but still retaining WAL. After confirming the replica was no longer needed, I safely removed it:
SELECT pg_drop_replication_slot('slot_name');
PostgreSQL automatically cleaned up the old WAL files, and disk space was immediately recovered.
Key lesson: always monitor replication slots. An inactive slot can silently fill your disk and impact production stability.
#PostgreSQL #DBA #Replication #WAL #ProductionSupport
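To catch this before the disk fills, it helps to watch how much WAL each slot is holding back. A minimal monitoring sketch using the built-in pg_wal_lsn_diff function:
-- WAL retained by each slot: distance from its restart point to the current position.
SELECT slot_name,
       active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots
ORDER BY pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn) DESC;
An inactive slot whose retained_wal keeps growing is exactly the failure mode described above.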
PostgreSQL WAL (Write-Ahead Log) is essentially the database's diary. It records every change before the database applies it, which is why your data is safe even if the server crashes unexpectedly. I put together this quick breakdown to explain how WAL works, how it cleans up after itself, and why it sometimes causes disk space issues. Check it out.
#PostgreSQL #Database #Postgres #PgSQL #TechTips #DBA
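If you want to see the diary's current footprint on your own server, a minimal sketch (pg_ls_waldir is available from PostgreSQL 10 and restricted to superusers and pg_monitor members by default):
-- Count and total size of WAL segment files currently in pg_wal.
SELECT count(*)                  AS wal_files,
       pg_size_pretty(sum(size)) AS total_size
FROM pg_ls_waldir();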
500 million rows. Queries crawling. What's your move?
Most engineers reach for a new database. The better answer is often already in PostgreSQL. I wrote a practical guide to table partitioning on HackerNoon: range, list, hash, pruning, and what to avoid in production. https://lnkd.in/d-gV_hUz
#PostgreSQL #BackendEngineering #Databases
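As a flavor of the range case, a minimal sketch (the events table and monthly boundaries are hypothetical). With partition pruning, a query filtered on the partition key touches only the matching partition instead of all 500 million rows:
-- Hypothetical declarative range partitioning by month.
CREATE TABLE events (
    id         bigint      NOT NULL,
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2024_01 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
CREATE TABLE events_2024_02 PARTITION OF events
    FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');

-- Pruning: this scans only events_2024_01.
SELECT count(*) FROM events
WHERE created_at >= '2024-01-15' AND created_at < '2024-01-20';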
Choosing between MySQL and PostgreSQL isn't a feature comparison. It's a workload decision.
We've seen this in production:
→ Systems scale… then break under concurrency
→ Reads work fine… until analytics kicks in
→ Replication holds… until failure hits
The pattern is simple: MySQL works for predictable workloads; PostgreSQL takes over when systems get complex.
The real difference?
👉 Not syntax
👉 Not popularity
👉 But behavior under pressure
If you're designing or scaling, this decision matters more than you think.
#PostgreSQL #MySQL #DatabaseArchitecture #SystemDesign #DBA #TechLeadership
Most people think PostgreSQL performance issues are complex. In reality, many of them come down to one simple mistake 👇
👉 Not checking execution plans.
Before trying to optimize any slow query, always run:
EXPLAIN ANALYZE SELECT * FROM transactions WHERE user_id = 123;
💡 This tells you:
- Whether indexes are being used
- If a sequential scan is happening
- Actual execution time vs. the planner's estimates
I've seen cases where a query taking 5+ seconds was reduced to milliseconds just by adding the right index after checking the plan.
📌 Lesson: don't guess. Don't assume. Let PostgreSQL tell you what's wrong.
#PostgreSQL #SQL #DBA #Performance #DataEngineering
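In the scenario described here, the fix is usually a single statement. A minimal sketch reusing the post's transactions/user_id example (the index name is my own; CONCURRENTLY avoids blocking writes on a live table):
-- Before: EXPLAIN ANALYZE shows "Seq Scan on transactions".
CREATE INDEX CONCURRENTLY idx_transactions_user_id
    ON transactions (user_id);
-- After: re-running the EXPLAIN ANALYZE should show an index scan instead.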
PostgreSQL continues to close a long-standing operational gap with REPACK CONCURRENTLY, a major step forward for managing table bloat in always-on environments.
Table bloat is a real challenge in high-write PostgreSQL systems. Until now, options included:
• VACUUM: safe but limited
• VACUUM FULL: blocking
• pg_repack: external tool
• pg_squeeze: extension-based concurrent rewrite
Now, REPACK CONCURRENTLY brings this capability closer to PostgreSQL core. Interestingly, this work is heavily inspired by pg_squeeze and designed by the same author, using logical decoding and background workers to rewrite tables without blocking workloads.
Why this matters:
• Always-on maintenance
• Reduced operational risk
• Better performance predictability
• Enterprise-grade PostgreSQL operations
This is another step in PostgreSQL's evolution from a powerful database to a self-managing enterprise data platform. Small feature. Big operational impact.
Commit: https://lnkd.in/eN_cFTpm
#PostgreSQL #OpenSource #Database #Postgres #DBA #CloudNative #DataPlatform #DatabaseEngineering