🚨 A subtle PostgreSQL timezone bug that can break your data

While working with PostgreSQL, I ran into a common but tricky issue with timezone handling — and it's easy to miss.

🔍 Key Insight
➡️ If your column is TIMESTAMP WITH TIME ZONE (timestamptz) → values are already stored in UTC
➡️ If it's TIMESTAMP WITHOUT TIME ZONE → you must explicitly decide how to interpret it (typically as UTC)

⚠️ The Common Mistake
Applying a timezone conversion twice — once to interpret the value and again to convert it — shifts your data by the offset a second time, producing wrong results without any obvious error.

✅ Best Practice
➡️ Use a single timezone conversion in most cases
➡️ Only apply the double conversion when the data is stored as timestamp without time zone but represents UTC
➡️ Prefer range-based filtering over casting to date — it's more accurate and lets an index on the column be used

🎯 Takeaway
If you're unsure about your column type, check it — it's most likely timestamptz — so keep it simple and avoid over-conversion.

Small detail, big impact.

#PostgreSQL #SQL #BackendDevelopment #Azure #SoftwareEngineering #Debugging
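To make the difference concrete, here is a minimal sketch. The table and column names (events, occurred_at) and the target timezone are hypothetical; adjust them for your schema.

    -- Case 1: column is timestamptz — one conversion is enough.
    -- AT TIME ZONE turns a timestamptz into a local wall-clock timestamp.
    SELECT occurred_at AT TIME ZONE 'Asia/Kolkata' AS local_time
    FROM events;

    -- Case 2: column is timestamp WITHOUT time zone, but the values represent UTC.
    -- The first conversion declares "this is UTC"; the second converts to the target zone.
    SELECT (occurred_at AT TIME ZONE 'UTC') AT TIME ZONE 'Asia/Kolkata' AS local_time
    FROM events;

    -- Filtering: prefer a half-open range over casting the column to date,
    -- so an index on occurred_at can still be used.
    SELECT count(*)
    FROM events
    WHERE occurred_at >= timestamptz '2024-06-01 00:00:00+00'
      AND occurred_at <  timestamptz '2024-06-02 00:00:00+00';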
PostgreSQL Timestamptz Handling Best Practices
More Relevant Posts
🚀 Medium Article Alert: PostgreSQL Write Path Explained

What really happens when you run an INSERT in PostgreSQL?

You type: INSERT INTO users VALUES (...) → hit Enter → INSERT 0 1 ✅

Simple, right? But under the hood, PostgreSQL is doing some serious heavy lifting to ensure your data is durable, consistent, and crash-safe — even if the power goes out right after execution.

From WAL (Write-Ahead Logging) to buffer management and checkpoints, this article breaks down the complete write path in a simple, intuitive way.

📖 If you're working with databases, distributed systems, or building data pipelines — this is a must-read.

👉 Read the full article here: https://lnkd.in/gqMN5fnQ

#PostgreSQL #Databases #DataEngineering #SystemDesign #Backend #WAL #TechDeepTive
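If you want to see the write-ahead log at work yourself, a quick psql experiment shows how much WAL a single INSERT generates. This is only a sketch: the users(id, name) table is hypothetical, and the \gset / :'variable' pieces are psql features, not server SQL.

    -- Capture the WAL position, insert one row, then measure the bytes of WAL
    -- written — the durability work hiding behind "INSERT 0 1".
    SELECT pg_current_wal_lsn() AS wal_before \gset
    INSERT INTO users (id, name) VALUES (42, 'Alice');
    SELECT pg_wal_lsn_diff(pg_current_wal_lsn(), :'wal_before') AS wal_bytes_written;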
How VACUUM can impact queries (and create vacuum lag)

PostgreSQL doesn't delete rows immediately. It marks them as dead. VACUUM later cleans up these dead rows and frees the space.

When VACUUM can't keep up (due to high write load, long-running transactions, or a misconfigured autovacuum), dead tuples accumulate. This leads to vacuum lag, slower queries, bloated tables, and inefficient index scans.

VACUUM isn't just maintenance. It's a critical part of query performance and database health. Ignoring it doesn't fail fast — it fails quietly.

#PostgreSQL #DatabaseInternals #VACUUM #SystemDesign #BackendEngineering #PerformanceEngineering #SoftwareArchitecture #Databases
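A quick way to check whether vacuum is keeping up is the cumulative statistics views — a sketch (object names are whatever exists in your database):

    -- Tables with the most dead tuples, plus when (auto)vacuum last ran on them.
    SELECT relname,
           n_live_tup,
           n_dead_tup,
           last_vacuum,
           last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10;

    -- Long-running transactions hold back cleanup, so look for those too.
    SELECT pid, state, xact_start, query
    FROM pg_stat_activity
    WHERE xact_start < now() - interval '15 minutes';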
🚨 New Medium Article Alert!

Ever wondered what actually happens behind the scenes when you run a simple SELECT query in PostgreSQL?

We write queries every day…

SELECT * FROM users WHERE id = 42;

But under the hood, PostgreSQL is doing a LOT more than just fetching rows.

In my latest article, I break it down in a simple, visual, and easy-to-understand way:
🔍 What happens after you hit enter
⚙️ How the query parser, planner, and executor work
📊 Why indexes matter
🚀 How PostgreSQL optimizes your queries for performance

If you're working with databases, backend systems, or building data pipelines — this is something you must understand.

💡 Whether you're a beginner or an experienced engineer, this will change how you think about query performance.

👉 Read the full article here: https://lnkd.in/gRkF_gZk

#PostgreSQL #Databases #BackendEngineering #SystemDesign #PerformanceOptimization #SQL #TechExplained
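You can peek at the planner's and executor's work yourself with EXPLAIN — a minimal sketch against the hypothetical users table from the post:

    -- Show the plan the planner chose, without executing the query.
    EXPLAIN
    SELECT * FROM users WHERE id = 42;

    -- Execute it and report actual row counts, timings, and buffer hits —
    -- the way to verify an index is really being used.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM users WHERE id = 42;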
PostgreSQL continues to close a long-standing operational gap with REPACK CONCURRENTLY — a major step forward for managing table bloat in always-on environments.

Table bloat is a real challenge in high-write PostgreSQL systems. Until now, options included:
• VACUUM — safe but limited
• VACUUM FULL — blocking
• pg_repack — external tool
• pg_squeeze — extension-based concurrent rewrite

Now, REPACK CONCURRENTLY brings this capability closer to PostgreSQL core.

Interestingly, this work is heavily inspired by pg_squeeze — designed by the same author, using logical decoding and background workers to rewrite tables without blocking workloads.

Why this matters:
• Always-on maintenance
• Reduced operational risk
• Better performance predictability
• Enterprise-grade PostgreSQL operations

This is another step in PostgreSQL's evolution — from a powerful database to a self-managing enterprise data platform.

Small feature. Big operational impact.

Commit: https://lnkd.in/eN_cFTpm

#PostgreSQL #OpenSource #Database #Postgres #DBA #CloudNative #DataPlatform #DatabaseEngineering
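For context on the gap being closed, here is a sketch of the in-core options available today. Table and index names (orders, orders_pkey) are hypothetical; the point is that both rewrites take an ACCESS EXCLUSIVE lock, which is exactly what a concurrent repack is meant to avoid.

    -- How big is the bloated table (including its indexes) right now?
    SELECT pg_size_pretty(pg_total_relation_size('orders')) AS total_size;

    -- In-core rewrites today: both reclaim space on disk,
    -- but both block reads and writes for the duration.
    VACUUM FULL orders;
    CLUSTER orders USING orders_pkey;   -- also rewrites, ordered by the given index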
The cool part, IMO, is the extension / tool to core-feature path. Adding major changes to core is challenging, so proving out the concept in an extension / tool first and providing this value in the ecosystem is a great path forward. A lot of folks who suggest new Postgres contributions are disappointed when they get “this sounds like an extension” as a response, but it’s actually good advice.
PostgreSQL is having a moment. And it deserves it.

Here's why I think Postgres is quietly becoming the most dangerous database in the room:

1. pgvector — native vector search without spinning up a separate service. RAG pipelines in pure SQL.
2. Extensibility — it's not a database, it's a database platform. TimescaleDB, PostGIS — all built on top.
3. Cloud-native — Supabase, Neon, and Aurora are proof that Postgres scales to enterprise workloads.
4. Trust — 35+ years of reliability. No VC is pulling the plug on this one.

The data stack has exploded with tools. But more often than not, the answer was Postgres all along.

Are you sleeping on Postgres? Or already all-in? Share your views. 😊

#PostgreSQL #DataEngineering #DataStack #pgvector
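Point 1 is easier to appreciate with a concrete snippet. A minimal pgvector sketch — the table name, tiny dimension, and query vector are all made up; real embeddings come from your model:

    -- Enable the extension (ships with the pgvector package).
    CREATE EXTENSION IF NOT EXISTS vector;

    -- A tiny documents table with 3-dimensional embeddings for illustration;
    -- real models produce hundreds or thousands of dimensions.
    CREATE TABLE documents (
        id        bigserial PRIMARY KEY,
        body      text,
        embedding vector(3)
    );

    INSERT INTO documents (body, embedding) VALUES
        ('postgres write path', '[0.1, 0.9, 0.2]'),
        ('vacuum and bloat',    '[0.8, 0.1, 0.3]');

    -- Nearest-neighbour search by L2 distance: the core of a RAG retrieval step.
    SELECT id, body
    FROM documents
    ORDER BY embedding <-> '[0.1, 0.8, 0.25]'
    LIMIT 1;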
Why are your queries slow (even with good indexes)? 🔎🤔

I see a lot of talk about indexing, but sometimes the bottleneck isn't the index — it's how the PostgreSQL query planner interprets your data statistics. If your statistics are stale, even a well-indexed query can end up with a sub-optimal sequential scan.

Quick tip for the week: always verify that autovacuum is actually keeping up with your write-heavy tables. If you ignore table bloat, your storage costs climb while your performance dips. 📉

It's small maintenance steps like these that keep our systems lean and reliable.

What's the one Postgres configuration parameter you always check first?

#PostgreSQL #DatabaseAdministration #TechTips #DBA #PerformanceOptimization
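A quick sketch of how to check for — and fix — stale statistics. Table and column names here are hypothetical:

    -- When were statistics last refreshed for this table, and how much has changed since?
    SELECT relname, last_analyze, last_autoanalyze, n_mod_since_analyze
    FROM pg_stat_user_tables
    WHERE relname = 'orders';

    -- Refresh statistics manually, then re-check the plan the planner picks.
    ANALYZE orders;
    EXPLAIN SELECT * FROM orders WHERE customer_id = 123;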
What if thousands of UPDATEs and DELETEs are hitting your table every second?

Your data looks correct. But your table size keeps growing. 📈

This is called Table Bloat. And it's a direct cost of MVCC.

Here's why it happens:
→ PostgreSQL never overwrites or removes a row in place on UPDATE or DELETE
→ An UPDATE writes a new version of the row; a DELETE only marks the old one
→ Either way, the old version stays behind as a dead tuple
→ At thousands of writes per second, dead tuples pile up fast

The result?
→ Tables grow even when the actual data hasn't changed
→ Queries slow down scanning through dead tuples
→ Indexes keep pointing to rows that no longer exist

So what's the solution? VACUUM. 🧹

What VACUUM does:
→ Scans the table for dead tuples
→ Removes them and marks the space as reusable
→ Updates the visibility map so queries stay fast
→ Prevents transaction ID wraparound — ignore this and PostgreSQL will shut itself down 🚨

One thing VACUUM does NOT do:
→ It does not shrink the file size on disk
→ For that you need VACUUM FULL — but it locks the table, so use it carefully

And the best part? You don't have to run it manually. PostgreSQL's autovacuum does this in the background automatically.

But autovacuum isn't magic. On high-write tables it can fall behind — tuning it for your workload is where the real DBA work begins (a small sketch of that follows below).

MVCC gave PostgreSQL speed and clean isolation. VACUUM is what keeps that trade-off from breaking you. 💡

#PostgreSQL #Database #DBA #VACUUM #TableBloat #DataEngineering
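The tuning usually starts with per-table storage parameters. A sketch with purely illustrative values and a hypothetical table name — the right numbers depend on your write rate:

    -- By default autovacuum reacts after roughly 20% of a table has changed,
    -- which is far too late for a huge, high-churn table. React much earlier here.
    ALTER TABLE events SET (
        autovacuum_vacuum_scale_factor = 0.02,   -- vacuum after ~2% dead tuples
        autovacuum_vacuum_cost_limit   = 2000    -- allow more cleanup work per cycle
    );

    -- Confirm the per-table settings stuck.
    SELECT relname, reloptions
    FROM pg_class
    WHERE relname = 'events';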
Most teams believe ClickHouse can't handle their load. It can — but only if you use it correctly.

A recent case study looked at a system issuing INSERTs at a rate of 3,500/sec… and killing itself in the process.

The issue? 👉 Kafka was triggering an INSERT for every single message — a pattern ClickHouse handles very poorly.

After changing just two settings:
• batching data correctly (Kafka fetch settings)
• removing the insert split caused by partitioning

The outcome?
👉 INSERT rate drops from ~400/min to 1/min
👉 Batch size grows from ~20 records to 20,000+
👉 Throughput increases 5–10×
👉 SELECT performance improves up to 5×

Same data. Same infrastructure. Completely different system behavior.

That is what many teams miss: databases rarely fail because of scale. They fail because of improper usage. And no amount of monitoring will change that.

At AG Data, we don't just optimize queries. We take responsibility for your database and eliminate risks before they cost you millions.

If your ClickHouse / PostgreSQL / MySQL:
• slows down under load
• struggles with inserts or replication
• or you are considering a migration

Let's talk. Because these issues won't stay minor.

#ClickHouse #BigData #PostgreSQL #Engineering #MySQL
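The core idea — many tiny INSERTs versus a few large ones — is easy to sketch in plain SQL. The table below is illustrative, and the exact Kafka consumer/fetch parameters depend on how you ingest:

    -- ClickHouse MergeTree creates a new data part for every INSERT statement.
    CREATE TABLE events
    (
        ts      DateTime,
        user_id UInt64,
        payload String
    )
    ENGINE = MergeTree
    ORDER BY (user_id, ts);

    -- Anti-pattern: one INSERT per Kafka message → thousands of tiny parts,
    -- constant background merges, and eventually "too many parts" errors.
    INSERT INTO events VALUES (now(), 1, 'a');
    INSERT INTO events VALUES (now(), 2, 'b');

    -- Better: accumulate messages and flush one large batch per interval.
    INSERT INTO events VALUES
        (now(), 1, 'a'),
        (now(), 2, 'b'),
        (now(), 3, 'c');   -- ...thousands of rows per statement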
The database ran fine until the checkpoint hit.

A team reached out because their PostgreSQL queries would slow down at predictable intervals. Not random spikes. A pattern. Fast, then slow, then fast again. Like a heartbeat they could not explain.

The culprit was something most teams never touch: checkpoints.

PostgreSQL uses checkpoints to ensure data consistency. After every checkpoint, the first change to each modified page is written to the write-ahead log as a full-page image. These Full-Page Image Writes create massive I/O spikes immediately after every checkpoint cycle.

Under a steady workload, you get a saw-tooth performance pattern: queries slow down when a checkpoint fires and the full-page writes pile up, gradually recover as the cycle progresses, and then degrade again when the next checkpoint hits.

Here is what makes this tricky. Default checkpoint settings are designed to be safe and generic. They are not designed for production workloads. Most teams deploy PostgreSQL, confirm it works, and never revisit those settings.

The fix is not complicated. Tuning checkpoint timing and spacing evenly distributes the I/O load, flattens the sawtooth pattern, and significantly reduces WAL overhead. The performance gains are immediate and measurable.

Think of it like a water heater that cycles on and off. Every time it kicks on, it draws a surge of energy. A steady, modulated system uses less energy and delivers consistent output.

Here is what our customers tell us: the performance problems they thought were hardware limitations were actually configuration defaults nobody questioned.

Have you ever traced a recurring performance issue back to a setting you assumed was already optimized?

#PostgreSQL #DatabasePerformance #QueryOptimization #DatabaseTuning #FortifiedData
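A sketch of the knobs involved — the values below are purely illustrative and should be sized to your WAL volume and recovery-time requirements:

    -- Spread checkpoints out and smooth their I/O instead of letting them fire
    -- in bursts. ALTER SYSTEM writes to postgresql.auto.conf; reload to apply.
    ALTER SYSTEM SET checkpoint_timeout = '15min';          -- default is 5min
    ALTER SYSTEM SET max_wal_size = '8GB';                  -- default is 1GB
    ALTER SYSTEM SET checkpoint_completion_target = 0.9;    -- spread writes across the interval
    SELECT pg_reload_conf();

    -- Watch whether checkpoints are timed or requested (WAL-size driven);
    -- frequent requested checkpoints usually mean max_wal_size is too small.
    -- (On PostgreSQL 17+ these counters moved to pg_stat_checkpointer.)
    SELECT checkpoints_timed, checkpoints_req
    FROM pg_stat_bgwriter;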