🔍 Case Study: Why PostgreSQL Slowed Down Overnight

Everything looked fine—until nightly operations began. Queries slowed, storage kept growing, and nothing seemed obviously broken.

The root cause? A hidden mix of large data cleanups and backups that quietly created table “bloat”—extra space PostgreSQL couldn’t reuse efficiently.

We tackled it with a combination of online reorganization and smarter data cleanup, restoring performance and freeing up disk space—without downtime.

👉 Check the comments for the full case study.

Hossted – Beyond Support.

#PostgreSQL #OpenSource #DatabasePerformance #CaseStudy #Hossted #DBA #DevOps #SRE #PerformanceTuning #CloudInfra #SQL #PostgresTips
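The post doesn’t name the reorganization tool it used. One widely used option for online table rewrites is pg_repack, which takes only brief locks at the start and end of the rewrite. A minimal sketch — the database and table names are hypothetical, and the extension must already be installed in the target database:

```shell
# Rewrite a bloated table online with pg_repack.
# Only brief ACCESS EXCLUSIVE locks are taken at the start and end;
# normal reads and writes continue during the copy.
pg_repack --dbname=appdb --table=public.events --jobs=2
```

The `--jobs` flag parallelizes index rebuilds; the rest of the rewrite is driven through triggers and a log table so the application keeps running.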
PostgreSQL continues to close a long-standing operational gap with REPACK CONCURRENTLY — a major step forward for managing table bloat in always-on environments.

Table bloat is a real challenge in high-write PostgreSQL systems. Until now, options included:
• VACUUM — safe but limited
• VACUUM FULL — blocking
• pg_repack — external tool
• pg_squeeze — extension-based concurrent rewrite

Now, REPACK CONCURRENTLY brings this capability closer to PostgreSQL core. Interestingly, this work is heavily inspired by pg_squeeze — designed by the same author, using logical decoding and background workers to rewrite tables without blocking workloads.

Why this matters:
• Always-on maintenance
• Reduced operational risk
• Better performance predictability
• Enterprise-grade PostgreSQL operations

This is another step in PostgreSQL’s evolution — from a powerful database to a self-managing enterprise data platform.

Small feature. Big operational impact.

Commit: https://lnkd.in/eN_cFTpm

#PostgreSQL #OpenSource #Database #Postgres #DBA #CloudNative #DataPlatform #DatabaseEngineering
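For a feel of what this looks like in practice, a sketch of the command follows. The syntax is taken from the in-development commit and may well change before it ships in a released PostgreSQL version; the table name is hypothetical:

```sql
-- Illustrative only: syntax from the in-progress patch, subject to change.
-- Rewrites the table to reclaim bloat while concurrent reads and writes
-- continue, using logical decoding to replay changes made during the copy.
REPACK (CONCURRENTLY) public.orders;
```

Until this lands in a release, the same effect is available via the pg_squeeze extension that inspired it.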
The cool part, IMO, is the extension / tool to core-feature path. Adding major changes to core is challenging, so proving out the concept in an extension / tool first and providing this value in the ecosystem is a great path forward. A lot of folks who suggest new Postgres contributions are disappointed when they get “this sounds like an extension” as a response, but it’s actually good advice.
What I find most valuable in comparisons like Postgres vs MySQL is that they push us to think beyond syntax and features. A database is not just where data lives. It is a core part of system architecture, performance, reliability, and scalability. The real question is not simply which database is more popular, but which design fits the needs, workload, and long-term goals of the system being built.

#PostgreSQL #MySQL #Databases #SystemDesign #BackendDevelopment #SoftwareEngineering
Monitor your WAL generation rate before it becomes a problem.

Every write in PostgreSQL goes through the Write-Ahead Log. Every byte of WAL must be written to disk, sent to replicas, and archived for backups. High WAL generation means high I/O, replication lag, and backup bloat.

On PostgreSQL 14+:

SELECT wal_records,
       wal_fpi,
       pg_size_pretty(wal_bytes) AS total_wal_generated,
       stats_reset
FROM pg_stat_wal;

`wal_fpi` (full page images) is the interesting one. After every checkpoint, the first modification to each page writes the entire 8 KB page to WAL — not just the change. This is a safety mechanism, but it means frequent checkpoints generate much more WAL.

Check your checkpoint frequency:

SELECT checkpoints_timed,
       checkpoints_req,
       pg_size_pretty(buffers_checkpoint * 8192::bigint) AS checkpoint_data
FROM pg_stat_bgwriter;

If `checkpoints_req` (forced checkpoints) is high relative to `checkpoints_timed` (scheduled checkpoints), your `max_wal_size` is too low. The default of 1 GB is often insufficient for production workloads. A good starting point:

max_wal_size = 4GB
min_wal_size = 1GB

Check this weekly. Sudden spikes in WAL generation usually mean something changed — a bulk operation, a new feature with heavy writes, or a configuration regression.

#PostgreSQL #Database #WAL #Performance #DevOps
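The `pg_stat_wal` counters above are cumulative since `stats_reset`, so turning them into a rate means sampling twice. One way to do that in a psql session, using the WAL position directly (the 60-second window is arbitrary):

```sql
-- psql sketch: capture the current WAL insert position, wait, then diff.
-- \gset stores the result in a psql variable; this only works in psql.
SELECT pg_current_wal_lsn() AS lsn_start \gset
SELECT pg_sleep(60);
SELECT pg_size_pretty(
         (pg_wal_lsn_diff(pg_current_wal_lsn(), :'lsn_start') / 60)::bigint
       ) || '/s' AS wal_rate;
```

Running this during peak hours versus a quiet window gives a quick sense of how bursty WAL generation is.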
I am starting a deep dive series on PostgreSQL internals.

Most developers use PostgreSQL every day, but very few understand what actually happens inside the database. So I am breaking it down step by step in a simple, practical way.

Part 1: PostgreSQL Architecture Explained

In this first post, I cover:
• Why PostgreSQL uses processes instead of threads
• What happens when you connect to a database
• How a SQL query is actually executed
• What shared memory means in simple terms
• Why PostgreSQL is so reliable in production systems

Read full article here...
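The process-per-connection model is easy to observe for yourself: every client connection and every background worker appears as its own row (and its own operating-system process) in the statistics views. A quick illustration:

```sql
-- Each row corresponds to a separate OS process: client backends,
-- the checkpointer, the background writer, WAL writer, autovacuum, etc.
SELECT pid, backend_type, state
FROM pg_stat_activity
ORDER BY backend_type;
```

Comparing the `pid` values here with `ps` output on the server makes the architecture concrete.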
PostgreSQL is often treated as “just” a relational database. But the more interesting question is usually not SQL vs NoSQL. It is this: what consistency model and scaling model does the system actually need?

By understanding the tradeoffs:
🎯 what ACID really gives you
🎯 what BASE really means in practice
🎯 why read replicas are often the first compromise
🎯 why sharding is not replication
🎯 how internet-scale systems changed database architecture
🎯 why many teams eventually still wanted SQL, transactions, and mature tooling
🎯 and why PostgreSQL became such an interesting hybrid answer

If you work with PostgreSQL beyond basic CRUD, this presentation should give you a cleaner way to think about consistency, scaling, and architecture decisions.

#PostgreSQL #DatabaseArchitecture #SQL #NoSQL #ACID #BASE #DistributedSystems #Backend
Coming up soon at #PostgresConf (both in-person and online)... 🍿

"Past, Present, and Future: Logical Decoding and Replication in PostgreSQL", presented by Hari Kiran, will cover topics like:
📖 the journey of logical decoding and replication in PostgreSQL, from its early adoption through extensions like pglogical, to the robust native features introduced in recent PostgreSQL releases
⚙️ innovations in the ecosystem, particularly the work on multi-master replication, that are shaping the future of distributed PostgreSQL

Learn more about the session coming up - watch his announcement, or check out the program ➡️ https://hubs.la/Q04b0xdd0

#postgresql #postgres #technews #data #sql #dataengineering #devops #opensource #techevent #postgresconf #postgresworld
How Databricks Lakebase Is Challenging Traditional Databases

We have all used databases, particularly SQL ones (Postgres, MySQL, and more), which act as the backbone for most of the applications that are live today. But something interesting is here: Lakebase from Databricks, and it’s not just another managed Postgres database. It’s a shift in database architecture itself.

Features that stood out:
1️⃣ Decoupled architecture (efficient reads and writes)
2️⃣ Instant branching (spin up a secondary database in milliseconds)
3️⃣ Built-in lakehouse integration
4️⃣ Cost-efficient scaling

And if you have been using Postgres for a long time and want to understand how it works under the hood, this will help you grasp the deeper mechanism. I’ve spent time exploring how standard Postgres behaves and the architecture of Lakebase, and shared my observations here.

Read more: https://lnkd.in/gVwZAYpF

#Lakebase #Database #Databricks #Postgres #DataEngineering
Most people think PostgreSQL performance issues are complex. In reality, many of them come down to one simple mistake 👇

👉 Not checking execution plans.

Before trying to optimize any slow query, always run:

EXPLAIN ANALYZE
SELECT * FROM transactions WHERE user_id = 123;

💡 This tells you:
- Whether indexes are being used
- If a sequential scan is happening
- Actual execution time vs. expected

I’ve seen cases where a query taking 5+ seconds was reduced to milliseconds just by adding the right index after checking the plan.

📌 Lesson: Don’t guess. Don’t assume. Let PostgreSQL tell you what’s wrong.

#PostgreSQL #SQL #DBA #Performance #DataEngineering
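If the plan shows a sequential scan on a selective filter, the usual fix is an index on the filtered column. A sketch using the post's `transactions` example (the index name is invented; `CONCURRENTLY` avoids blocking writes while the index builds):

```sql
-- Build the index without taking a write-blocking lock on the table.
CREATE INDEX CONCURRENTLY idx_transactions_user_id
    ON transactions (user_id);

-- Re-check the plan; a selective filter should now use an
-- Index Scan or Bitmap Heap Scan instead of a Seq Scan.
EXPLAIN ANALYZE
SELECT * FROM transactions WHERE user_id = 123;
```

Note that `CREATE INDEX CONCURRENTLY` cannot run inside a transaction block and takes longer than a plain `CREATE INDEX`; the trade-off is that normal writes continue throughout.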
How VACUUM can impact queries (and create vacuum lag)

PostgreSQL doesn’t delete rows immediately; it marks them as dead. VACUUM later cleans these dead rows and frees space.

When VACUUM can’t keep up (due to high write load, long-running transactions, or misconfigured autovacuum), dead tuples accumulate. This leads to vacuum lag, slower queries, bloated tables, and inefficient index scans.

VACUUM isn’t just maintenance. It’s a critical part of query performance and database health. Ignoring it doesn’t fail fast; it fails quietly.

#PostgreSQL #DatabaseInternals #VACUUM #SystemDesign #BackendEngineering #PerformanceEngineering #SoftwareArchitecture #Databases
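A quick way to see whether dead tuples are accumulating, using the standard statistics views:

```sql
-- Tables with the most dead tuples, and when autovacuum last cleaned them.
-- A large n_dead_tup relative to n_live_tup, with a stale last_autovacuum,
-- is a sign that vacuum is lagging behind the write load.
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```

If a hot table keeps topping this list, lowering its `autovacuum_vacuum_scale_factor` via `ALTER TABLE ... SET (...)` is a common per-table remedy.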
Read the full case study here: https://bit.ly/4tidqJj