A question for every dev who's ever designed a database table: did you design it for 10,000 rows or 10 million?

Because most schemas are designed for the demo. For the MVP. For "let's just ship it and optimize later." And "later" arrives as a 3 AM P1 page, 18 months down the road, when the table that "works fine" has grown 1000x and suddenly nothing works fine.

We call these the haunting patterns: schema decisions that feel harmless at small scale and become structural nightmares at large scale. Data Drop #6 covers the big three, each with a quick SQL sketch after the list:

→ UUIDs as primary keys. Random values fragment your B-tree indexes. At 500M rows, your index is bloated, your writes scatter across random pages, and your cache hit ratio craters. Sequential IDs exist for a reason.

→ "Just make it nullable." The path of least resistance at design time; the source of a thousand bugs at query time. NULL doesn't equal NULL, your aggregations silently skip rows, and your joins produce unexpected results. Nullable should be a conscious choice, not a default.

→ The EAV trap. Entity-Attribute-Value: the schema pattern that says "I don't want to commit to a data model." Congratulations: you now have a key-value store built on a relational engine, with the performance of neither and the integrity guarantees of neither.
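First, the primary-key choice. A minimal sketch with a hypothetical orders table (the names are illustrative, not from any real schema):

```sql
-- Random UUIDv4 keys: each insert lands on a random B-tree page,
-- so index pages split unevenly, bloat, and fall out of cache as the table grows.
CREATE TABLE orders_uuid (
    id         uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    created_at timestamptz NOT NULL DEFAULT now()
);

-- Sequential keys: every insert appends to the rightmost index page,
-- keeping the B-tree compact and the hot pages cacheable.
CREATE TABLE orders_seq (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    created_at timestamptz NOT NULL DEFAULT now()
);
```

If you genuinely need globally unique IDs, a time-ordered UUID variant (such as UUIDv7, generated application-side or via an extension) keeps most of the insert locality of a sequence.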
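Next, the NULL traps, sketched with a hypothetical payments table (again, illustrative names):

```sql
-- NULL never equals NULL: the comparison yields NULL, not TRUE,
-- so predicates and join conditions on nullable columns silently drop rows.
SELECT NULL = NULL;                     -- NULL
SELECT NULL IS NOT DISTINCT FROM NULL;  -- TRUE (the null-aware comparison)

CREATE TABLE payments (
    id     bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    amount numeric                      -- nullable "just for now"
);
INSERT INTO payments (amount) VALUES (100), (NULL), (300);

SELECT count(*)      AS all_rows,       -- 3
       count(amount) AS non_null_rows,  -- 2: the NULL row silently vanishes
       avg(amount)   AS avg_amount      -- 200, not 133.33: NULL is not zero
FROM payments;

-- Making nullability a conscious choice: this errors until the NULLs
-- are backfilled, which is exactly the safety net you want.
ALTER TABLE payments ALTER COLUMN amount SET NOT NULL;
```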
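Finally, the EAV trap, sketched with a hypothetical product catalog. One self-join per attribute just to read a row back is where the performance goes:

```sql
-- The EAV shape: one row per attribute, every value stored as text.
CREATE TABLE product_attributes (
    product_id bigint NOT NULL,
    attr_name  text   NOT NULL,
    attr_value text   NOT NULL,  -- no types, no constraints, no useful stats
    PRIMARY KEY (product_id, attr_name)
);

-- Reconstructing one logical row costs a self-join per attribute, and the
-- cast can fail at runtime because nothing enforced the type on write.
SELECT p.product_id,
       colour.attr_value          AS colour,
       weight.attr_value::numeric AS weight_kg
FROM (SELECT DISTINCT product_id FROM product_attributes) p
LEFT JOIN product_attributes colour
       ON colour.product_id = p.product_id AND colour.attr_name = 'colour'
LEFT JOIN product_attributes weight
       ON weight.product_id = p.product_id AND weight.attr_name = 'weight_kg';

-- If some attributes are genuinely dynamic, a typed core plus one jsonb
-- column keeps constraints on what matters and flexibility for the long tail.
CREATE TABLE products (
    product_id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name       text    NOT NULL,
    price      numeric NOT NULL CHECK (price >= 0),
    extras     jsonb   NOT NULL DEFAULT '{}'::jsonb
);
```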
Design for the table size you're going to have. Not the one you have today.

Data Drop #6. Day 6 of 23.

#AprilDataDrops #PostgreSQL #DataDrop6 #SchemaDesign #Database #Performance #OpenSourceDB

OpenSource DB | Lahari Giddi