I thought switching databases was easy. Production disagreed.

I recently migrated my backend from MySQL to PostgreSQL (Supabase + Render).

On paper: switch the database, update the connection string, run migrations.

In reality? Deployment failed instantly. Three times.

Here's what actually broke 👇

🔌 Port confusion
Direct connections (port 5432) weren't reachable from Render. Fix: switch to Supabase's connection pooler (port 6543).

🌐 IPv6 walls
Supabase's direct endpoint resolved to IPv6. Render couldn't reach it. Silent "Network unreachable" errors — until I dug deep into DNS resolution.

🔑 Stale environment variables
My app kept connecting to the old database even after I updated everything. A local .env file was silently overriding Render's environment variables. The whole time.

⚙️ Alembic picking the wrong URL
Migrations failed because the database URL was being loaded from the wrong config layer at runtime. Traced it across three files before I found it.

What this really taught me:
→ "Works locally" is meaningless in production
→ Always log your actual database URL at startup — not what you think you configured
→ Managed services have hidden constraints. Learn them before you deploy
→ Debugging deployment is a skill. Treat it like one.

If you're planning a similar migration:
✅ Use the connection pooler
✅ Enforce SSL
✅ Verify what your app is actually connecting to at runtime
✅ Never trust your local .env in production

Three failed deployments. A lot of logs. One stable production app. Worth it. 🚀

What's the most painful deployment bug you've ever chased? Drop it below 👇

#Python #PostgreSQL #BackendDevelopment #SoftwareEngineering #DevOps #BuildingInPublic
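A minimal sketch of the "log your actual database URL at startup" advice, assuming SQLAlchemy (the post mentions Alembic, so a Python stack). The host, port, and variable names are illustrative, not the author's code:

```python
# Minimal sketch, assuming SQLAlchemy. Host, port, and env var names
# are illustrative.
import os

from sqlalchemy import create_engine
from sqlalchemy.engine import make_url

# Pooler endpoint on port 6543, with SSL enforced via sslmode=require, e.g.
# postgresql://user:pass@<pooler-host>:6543/postgres?sslmode=require
db_url = os.environ["DATABASE_URL"]

# Log the resolved URL (password masked) at startup, so a stale .env
# override is visible immediately instead of three deployments later.
print("Connecting to:", make_url(db_url).render_as_string(hide_password=True))

# Note: python-dotenv's load_dotenv() defaults to override=False, so real
# environment variables stay authoritative over a local .env file.
engine = create_engine(db_url, pool_pre_ping=True)
```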
PostgreSQL migration woes: lessons learned from a painful deployment
More Relevant Posts
Built a production-grade backend from scratch — here's what I learned.

TaskAlloc is an employee and task allocation REST API I built with FastAPI and PostgreSQL. Not a tutorial follow-along — I designed the architecture, made the decisions, and figured out why things break.

What's under the hood:
→ 3-tier role system (Admin / Manager / Employee) with access enforced at the query layer — not just filtered in the response
→ JWT auth with refresh token rotation. Raw tokens never touch the database; only SHA-256 hashes are stored. If the DB leaks, the tokens are useless.
→ Task state machine — PENDING → IN_PROGRESS → UNDER_REVIEW → COMPLETED. Invalid transitions are rejected before any database write (sketched below).
→ Middleware that auto-logs every mutating request with who did it, what resource they touched, and the HTTP status code
→ 67 passing tests against in-memory SQLite. No external database needed to run the suite.

35+ endpoints. Soft delete. UUID primary keys. Docker + Docker Compose. Full Swagger docs.

The thing that surprised me most was how much I learned from just trying to do things the right way — not "make it work" but "make it work correctly." Things like why audit logs shouldn't have a foreign key to users, or why you write the activity log before the status update commits.

GitHub in the comments.

#FastAPI #Python #BackendDevelopment #PostgreSQL #SoftwareEngineering #BuildingInPublic #OpenToOpportunities #Development
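For illustration, a minimal sketch of the state machine described in the post. The enum values match the post, but the allowed edges and function names are assumptions, not TaskAlloc's actual code:

```python
# Illustrative sketch of a task state machine. The UNDER_REVIEW ->
# IN_PROGRESS back-edge is an assumption, not confirmed by the post.
from enum import Enum

class TaskStatus(str, Enum):
    PENDING = "PENDING"
    IN_PROGRESS = "IN_PROGRESS"
    UNDER_REVIEW = "UNDER_REVIEW"
    COMPLETED = "COMPLETED"

# Each state may only move to the states listed here.
ALLOWED_TRANSITIONS = {
    TaskStatus.PENDING: {TaskStatus.IN_PROGRESS},
    TaskStatus.IN_PROGRESS: {TaskStatus.UNDER_REVIEW},
    TaskStatus.UNDER_REVIEW: {TaskStatus.COMPLETED, TaskStatus.IN_PROGRESS},
    TaskStatus.COMPLETED: set(),
}

def validate_transition(current: TaskStatus, new: TaskStatus) -> None:
    # Reject invalid transitions before any database write happens.
    if new not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"Cannot move task from {current} to {new}")
```

Running the check before the UPDATE is what keeps invalid states out of the database entirely.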
Timeouts (The Small Setting That Saves Your System)

Built: a service calling multiple downstream APIs to fetch and aggregate data.

Problem I faced: everything worked fine… until one dependency slowed down. Then suddenly:
- Requests started hanging
- The thread pool got exhausted
- API response time shot up
- The entire service became slow

All because one service was taking too long.

How I fixed it: the issue was missing timeouts. Requests were waiting indefinitely.

Fixes applied:
- Added strict timeouts for all external calls (sketched below)
- Used fallback responses where possible
- Combined with a circuit breaker for failing services
- Monitored slow calls with proper logging

Now:
- Slow services don't block everything
- The system fails fast instead of hanging
- Overall stability improved

What I learned: a slow dependency is sometimes worse than a failed one. At least failures are quick. Slow calls quietly kill your system.

Question: do your API calls have proper timeouts… or are they waiting forever without you noticing?

#Java #SpringBoot #Programming #SoftwareDevelopment #Cloud #AI #Coding #Learning #Tech #Technology #WebDevelopment #Microservices #API #Database #SpringFramework #Hibernate #MySQL #BackendDevelopment #CareerGrowth #ProfessionalDevelopment #RDBMS #PostgreSQL #backend
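The post is about Java/Spring Boot, but the principle is language-agnostic. A hedged sketch in Python with the requests library, where the URL and the fallback shape are illustrative:

```python
# Sketch of strict timeouts plus a fallback response; the fallback
# payload is a hypothetical example, not the post's actual service.
import requests

FALLBACK = {"items": []}  # hypothetical degraded response

def fetch_items(url: str) -> dict:
    try:
        # (connect timeout, read timeout) in seconds; without this,
        # a slow dependency can hold the caller open indefinitely.
        resp = requests.get(url, timeout=(3.05, 10))
        resp.raise_for_status()
        return resp.json()
    except requests.exceptions.Timeout:
        # Fail fast and degrade instead of hanging the thread pool.
        return FALLBACK
```

The tuple form separates the connect timeout from the read timeout, so a dependency that accepts connections but responds slowly still fails fast.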
🚀 Just shipped my second backend project — a production-grade Task Management API!

🔗 Live: https://lnkd.in/g_MYFbxs
🐙 GitHub: https://lnkd.in/gppGbTyC

⚙️ What I built:
→ JWT authentication (signup + login)
→ Full CRUD on tasks
→ Role-based access control (user / admin)
→ Paginated task listing
→ Each user sees only their own data
→ Dockerized for local + production
→ Deployed on Render with Supabase PostgreSQL

🛠️ Tech Stack: FastAPI · PostgreSQL · SQLAlchemy · Pydantic v2 · JWT · Bcrypt · Docker · Render · Supabase

This project taught me how real backend systems are structured — not just "make it work" but make it secure, scalable, and deployable.

#FastAPI #Python #Backend #Docker #PostgreSQL #JWT #OpenToWork #BuildInPublic #100DaysOfCode
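For illustration, a minimal sketch of JWT issuance with a role claim using PyJWT. The secret handling and claim names are assumptions, not this project's code:

```python
# Hedged sketch of issuing a JWT with a role claim, using PyJWT.
# Claim names and the secret source are illustrative assumptions.
import datetime

import jwt  # the PyJWT package

SECRET = "change-me"  # in production, load from an environment variable

def create_access_token(user_id: str, role: str) -> str:
    payload = {
        "sub": user_id,
        "role": role,  # consumed by role-based access control checks
        "exp": datetime.datetime.now(datetime.timezone.utc)
               + datetime.timedelta(minutes=30),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")
```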
Building scalable systems is challenging, but testing those limits is the best way to learn.

Over the past couple of weeks during my SIWES, I decided to dive deep into systems architecture, security, and high-concurrency environments. To bridge the gap between my current Python/Django expertise and my company's C#/ASP.NET stack, I built a high-concurrency event ticketing backend — essentially a mini-Ticketmaster! 🎟️

Here is what I engineered:
🔹 Concurrency Safety: Solved the dreaded "double-booking" race condition using PostgreSQL pessimistic locking via select_for_update() (sketched below).
🔹 Read-Heavy Optimization: Implemented Redis caching to shield the DB from traffic spikes, paired with strict cache invalidation to keep data accurate.
🔹 Asynchronous Processing: Decoupled slow processes using Celery and message brokers so the API stays lightning-fast while emails queue in the background.
🔹 API Defense: Built strict throttling/rate limiting to block scalper bots from spamming the purchase endpoints.
🔹 Containerization: Orchestrated the entire multi-server architecture with Docker and docker-compose.

I also spent time deploying a CRUD blog app to Azure and building a new portfolio using HTML/CSS/Python.

Next up: taking these exact same architectural concepts — caching, locking, rate limiting, and containerization — and translating them into ASP.NET.

#SoftwareEngineering #BackendDevelopment #Django #Redis #Docker #PostgreSQL #ASPNET #SIWES #TechJourney
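A hedged sketch of the select_for_update() fix, assuming a Django app; the Ticket model and its fields are illustrative, not the author's code:

```python
# Sketch of pessimistic locking against double-booking, assuming Django
# (belongs in an installed app's models/services). Model is illustrative.
from django.db import models, transaction

class Ticket(models.Model):  # illustrative model
    is_sold = models.BooleanField(default=False)

def reserve_ticket(ticket_id: int) -> None:
    # select_for_update() must run inside a transaction.
    with transaction.atomic():
        # SELECT ... FOR UPDATE locks the row until the transaction
        # commits, so two concurrent purchases cannot both see the
        # ticket as available.
        ticket = Ticket.objects.select_for_update().get(pk=ticket_id)
        if ticket.is_sold:
            raise ValueError("Ticket already sold")
        ticket.is_sold = True
        ticket.save()
```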
Sunday ship: @perryts/postgres is out. A pure-TypeScript Postgres driver that speaks the wire protocol directly. No libpq. No native addons. No FFI.

Why another one?
→ Every Node Postgres driver worth using wraps libpq or ships a platform-specific .node addon. That's a dead end for AOT: Perry compiles TypeScript to a statically linked native binary via LLVM, and there's no V8 at execution time to host a C addon.
→ The Perry-native Postgres GUI this drives (Tusk) needed things most drivers throw away for ergonomics: exact numeric (not float), full column metadata (attnum, tableOid, typmod), structured errors with every documented ErrorResponse field, and raw row bytes on demand.

So: one TypeScript source, three runtime targets.
→ Node.js 22+
→ Bun 1.3+
→ Perry AOT: a 4.6 MB static binary, 1.8 MB RSS, single-digit-ms cold start

Honest performance story: V8's JIT beats Perry-native on per-query wall time in a warm, long-running process. Perry wins everywhere else — cold start, memory footprint, deploy size, and the platforms Node and Bun can't reach at all (CLIs, serverless cold paths, mobile, embedded Linux).

What's in the box: SCRAM-SHA-256 / MD5 / cleartext auth, TLS with mid-stream upgrade, simple + extended query, 20 type codecs, exact numeric via a Decimal wrapper, structured PgError, cancel protocol, LISTEN/NOTIFY, connection pool, transactions, libpq URLs, PG* env vars.

MIT. Feedback welcome. https://lnkd.in/dyeDTJG7
A developer just benchmarked file storage vs. SQLite, and the results should make you question every default you've ever set.

For 1M records, a Rust in-memory map hit ~169k requests per second. Go hit ~98k. Bun hit ~105k. SQLite? ~25k. A 6x read performance gap. The benchmark is up on HN with 272 points and 287 comments, and the thread is ablaze.

Here's the argument that matters. Every database is just files and a process in front of those files. SQLite is a single file with a process on top. PostgreSQL is a directory of files with a process in front. Your code reads and writes files just like databases do. The question is whether you need the process layer, or whether flat files with an in-memory index would do the job.

The benchmark does not lie. For reads, flat files with an in-memory map crush SQLite. If you're building an early-stage app and your primary operation is reading data, you might be paying for infrastructure you don't need.

But here is the catch that the hype misses. The simplicity only holds if you run single-process. The moment you add concurrent writes from multiple workers, which is how most real apps work, flat files create architectural complexity that kills the simplicity argument. Multiple processes reading and writing the same files without a process layer managing consistency? That's a problem you will solve with bugs and race conditions.

For agency operators and solo developers, the answer is probably SQLite plus an in-memory cache (sketched below). You get database reliability and consistency guarantees, with read performance that rivals any custom file-based solution. You don't have to choose between simplicity and correctness.

The practical takeaway: before you default to PostgreSQL for your next side project, ask what you actually need. Most apps are smaller than developers assume. The default database is not a law of physics. It is a convention. And conventions are meant to be questioned when the evidence says otherwise.

What are you defaulting to that you might not need?

#SQLite #Database #PostgreSQL #Backend #DeveloperTools #StartupLife #AgencyLife #SmallBusiness #SoftwareEngineering #TechStrategy #BuildInPublic #WebDev #Programming #Coding #Architecture #AppDevelopment #MVP #EarlyStage #Engineering #Performance
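A minimal standard-library sketch of that "SQLite plus in-memory cache" takeaway; the schema and cache policy are illustrative:

```python
# Sketch: SQLite as the source of truth, a process-local dict as the
# read cache. Table name and key choice are illustrative.
import sqlite3

conn = sqlite3.connect("app.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS records (id INTEGER PRIMARY KEY, payload TEXT)"
)
conn.commit()

cache: dict[int, str] = {}

def get_record(record_id: int) -> str | None:
    # Serve hot reads from memory; fall back to SQLite on a miss.
    if record_id in cache:
        return cache[record_id]
    row = conn.execute(
        "SELECT payload FROM records WHERE id = ?", (record_id,)
    ).fetchone()
    if row is not None:
        cache[record_id] = row[0]
        return row[0]
    return None

def put_record(record_id: int, payload: str) -> None:
    conn.execute(
        "INSERT INTO records (id, payload) VALUES (?, ?) "
        "ON CONFLICT(id) DO UPDATE SET payload = excluded.payload",
        (record_id, payload),
    )
    conn.commit()
    cache[record_id] = payload  # keep the cache coherent with the write
```

Writes go through SQLite first, so the consistency guarantees stay intact; the dict only accelerates reads.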
Thundering Herd Problem (When Everything Breaks at Once)

Built: a caching layer to reduce database load for frequently accessed data.

Problem I faced: everything worked well… until the cache expired. Suddenly:
- Huge spike in database queries
- CPU usage shot up
- API latency increased
- The system became unstable

All at the same moment.

How I fixed it: this was the Thundering Herd Problem. When the cache expired, multiple requests tried to fetch fresh data simultaneously.

Fixes applied:
- Added cache locking (single-flight) so only one request refreshes the data (sketched below)
- Introduced randomized cache expiry (TTL jitter) to avoid simultaneous expiration
- Used a stale-while-revalidate approach for smoother refreshes

Now:
- Only one request hits the DB
- Others wait or get the cached response
- The system stays stable

What I learned: caching reduces load… but poorly managed caching can create bigger spikes than no cache at all.

Question: have you ever seen your system fail not because of traffic… but because many requests did the same thing at the same time?

#Java #SpringBoot #Programming #SoftwareDevelopment #Cloud #AI #Coding #Learning #Tech #Technology #WebDevelopment #Microservices #API #Database #SpringFramework #Hibernate #MySQL #BackendDevelopment #CareerGrowth #ProfessionalDevelopment #RDBMS #PostgreSQL #backend
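The post is Java/Spring, but the pattern ports anywhere. A hedged Python sketch of single-flight locking plus TTL jitter; the lock granularity and TTL values are illustrative:

```python
# Sketch of single-flight refresh with TTL jitter. One global lock keeps
# the example short; TTLs are illustrative.
import random
import threading
import time

_lock = threading.Lock()
_cache: dict[str, tuple[float, object]] = {}  # key -> (expires_at, value)

def get_with_single_flight(key: str, load_from_db):
    entry = _cache.get(key)
    if entry and entry[0] > time.monotonic():
        return entry[1]  # fresh cache hit
    # Only one request refreshes; the rest re-check after the lock.
    with _lock:
        entry = _cache.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]  # someone else already refreshed it
        value = load_from_db()
        ttl = 60 + random.uniform(0, 15)  # jitter avoids simultaneous expiry
        _cache[key] = (time.monotonic() + ttl, value)
        return value
```

A per-key lock would avoid serializing refreshes of unrelated keys; the single lock is a simplification for the sketch.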
Most of my backend learning while building this API monitoring tool didn't come from Go — it came from PostgreSQL.

Another update from building in public. While working on latency tracking and incident systems, I ended up using some surprisingly powerful SQL features.

DATE_TRUNC helped me group timestamps by hour so graphs don't turn into noise. FILTER made it easy to count only successful checks without extra queries. COALESCE handled missing data cleanly, and BOOL_OR helped detect whether any incident is still ongoing. I also used make_interval to avoid string hacks, EXTRACT(EPOCH FROM ...) for durations in seconds, and proper indexing to keep queries fast.

Didn't expect SQL to carry this much weight in a backend system.

Follow along — more coming this week.

#golang #buildinpublic #backend #postgresql
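For illustration, an aggregate query combining several of those features, shown as it might be embedded in application code. The checks table and its columns are assumptions, not the author's schema:

```python
# Illustrative hourly-stats query using DATE_TRUNC, FILTER, COALESCE,
# BOOL_OR, and make_interval. Table and column names are assumptions.
HOURLY_STATS = """
SELECT
    DATE_TRUNC('hour', checked_at)                          AS bucket,
    COUNT(*) FILTER (WHERE status_code BETWEEN 200 AND 299) AS ok_checks,
    COALESCE(AVG(latency_ms), 0)                            AS avg_latency_ms,
    BOOL_OR(incident_open)                                  AS any_incident_open
FROM checks
WHERE checked_at >= NOW() - make_interval(hours => 24)
GROUP BY bucket
ORDER BY bucket
"""
```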
𝗛𝗼𝘄 𝗧𝗼 𝗖𝗮𝗰𝗵𝗲 𝗗𝗿𝗶𝘇𝘇𝗹𝗲 𝗢𝗥𝗠 𝗤𝘂𝗲𝗿𝗶𝗲𝘀 𝗪𝗶𝘁𝗵 𝗥𝗲𝗱𝗶𝘀 𝗜𝗻 𝗡𝗲𝘅𝘁.𝗷𝘀 𝟭𝟲

You use Drizzle ORM for type-safe SQL, but it does not come with a query-level cache. That means every db.select() call hits your PostgreSQL database. To fix this, you can use Redis to cache your queries.

Steps:
- Create a Redis client singleton to reuse connections.
- Make a generic helper to handle JSON serialization and cache misses.
- Replace db.query.* calls with your cached() function.
- Delete the matching cache key after any write to prevent stale data.

You need:
- Node.js 22+
- TypeScript 5 strict
- A Next.js 16 project with App Router and Server Actions
- Drizzle ORM with drizzle-orm/pg-core
- A Redis 7 instance
- The ioredis package

Benefits:
- Fewer database connections
- Faster page loads
- No stale data after writes

Source: https://lnkd.in/d8ypSt2r
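The post's steps target Drizzle and Next.js in TypeScript, but the underlying cache-aside pattern is portable. A hedged sketch of the same helper in Python with redis-py, where key naming and the TTL are illustrative:

```python
# Sketch of the cache-aside helper plus invalidation, using redis-py.
# Key names and TTL are illustrative assumptions.
import json

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def cached(key: str, ttl: int, load):
    hit = r.get(key)
    if hit is not None:
        return json.loads(hit)  # cache hit: skip the database
    value = load()              # cache miss: run the real query
    r.set(key, json.dumps(value), ex=ttl)
    return value

def invalidate(key: str) -> None:
    # Call after any write so readers never see stale data.
    r.delete(key)
```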
𝗠𝗮𝘀𝘁𝗲𝗿𝗶𝗻𝗴 𝗠𝘆𝗦𝗤𝗟 𝗖𝗼𝗻𝗻𝗲𝗰𝘁𝗶𝗼𝗻𝘀 𝗶𝗻 𝗚𝗼 🚀

Efficiency is everything in backend development. If you're using sqlx for your Go projects, here is a clean way to handle database connections like a pro.

𝗪𝗵𝘆 𝘁𝗵𝗶𝘀 𝗮𝗽𝗽𝗿𝗼𝗮𝗰𝗵 𝘄𝗼𝗿𝗸𝘀:
🔹 𝗖𝗹𝗲𝗮𝗻 𝗖𝗼𝗻𝗳𝗶𝗴: Uses a dedicated struct to build the connection string safely.
🔹 𝗩𝗮𝗹𝗶𝗱𝗮𝘁𝗶𝗼𝗻: Uses .Ping() immediately to ensure the database is actually alive.
🔹 𝗦𝘁𝗮𝗯𝗶𝗹𝗶𝘁𝘆: Implements connection pooling (SetMaxOpenConns, SetMaxIdleConns) to prevent crashes under high traffic.

Don't just connect—connect with performance in mind! 🚀

#Golang #Backend #Coding #MySQL #SoftwareEngineering #GoLangTips #CleanCode
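The post shows Go with sqlx; the same pooling and liveness knobs exist in most drivers. A hedged SQLAlchemy sketch (URL and pool sizes are illustrative; assumes the PyMySQL driver is installed):

```python
# Sketch of the same ideas with SQLAlchemy against MySQL. Credentials
# and pool sizes are illustrative assumptions.
from sqlalchemy import create_engine, text

engine = create_engine(
    "mysql+pymysql://user:password@localhost:3306/appdb",
    pool_size=10,        # roughly SetMaxIdleConns: connections kept ready
    max_overflow=15,     # extra connections allowed under load
    pool_recycle=1800,   # recycle before the server's wait_timeout hits
    pool_pre_ping=True,  # like .Ping(): verify liveness before each use
)

# Fail fast at startup if the database is unreachable.
with engine.connect() as conn:
    conn.execute(text("SELECT 1"))
```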
Insightful, Msughter Apera. Thanks for sharing. I have a question: was there data in the previous database (MySQL)? If so, did you run into any data loss or corruption during the migration, and how did you handle moving the data across?