Zero-downtime schema migrations in PostgreSQL: our approach and pitfalls
Zara Patel
We've developed a fairly robust process for zero-downtime schema migrations in PostgreSQL, which is critical for our always-on services. Our typical flow is a multi-deploy approach: first add a new nullable column; then deploy code that dual-writes to both the old and new columns; backfill historical data; switch reads to the new column; and finally drop the old column in a later deploy. This works, but it's slow, requires multiple deploys, and can cause lock contention on very large tables during the backfill or column-drop phases.
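To make the flow concrete, here's a sketch of the SQL each deploy runs, plus the batching helper we use so the backfill touches rows in small chunks instead of one table-wide UPDATE. Table and column names (`users`, `full_name`, `display_name`) are made up for illustration:

```python
def backfill_batches(table, old_col, new_col, pk, max_id, batch_size=10_000):
    """Generate keyset-paginated UPDATE statements so each batch only
    briefly locks the rows it touches, instead of one long-running
    table-wide UPDATE holding row locks for the whole backfill."""
    statements = []
    start = 0
    while start < max_id:
        end = start + batch_size
        statements.append(
            f"UPDATE {table} SET {new_col} = {old_col} "
            f"WHERE {pk} > {start} AND {pk} <= {end} "
            f"AND {new_col} IS NULL;"
        )
        start = end
    return statements

# Deploy 1: add the new column as nullable. In Postgres this is a
# metadata-only change, so the ACCESS EXCLUSIVE lock is brief.
deploy_1 = "ALTER TABLE users ADD COLUMN display_name text;"

# Deploy 2: application code dual-writes to both columns (no DDL here).

# Between deploys: backfill in small transactions, one batch each.
backfill = backfill_batches("users", "full_name", "display_name",
                            pk="id", max_id=30_000)

# Final deploy: drop the old column once all reads use the new one.
deploy_final = "ALTER TABLE users DROP COLUMN full_name;"
```

Running each batch in its own transaction also lets autovacuum keep up and makes the backfill resumable if it's interrupted partway through.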
I'm wondering if there are better or faster approaches people are using for complex schema changes on large tables without downtime. Specifically, are there tools or techniques for non-blocking column additions or removals that minimize lock times? Or are there backfill strategies that are more resilient to production load?
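For context on the lock-contention part of the question: one mitigation we've been experimenting with is running DDL under a short `lock_timeout` with retries, so an ALTER that can't acquire its lock quickly fails fast instead of queueing behind a long-running query and blocking every statement queued after it. A minimal sketch (the `execute` callable stands in for something like a psycopg cursor's `execute`; names and backoff policy are hypothetical):

```python
import time

def run_ddl_with_lock_timeout(execute, ddl, lock_timeout="2s",
                              retries=5, backoff=1.0):
    """Attempt DDL under a short lock_timeout, retrying on failure.

    `execute` is any callable that runs a SQL string and raises on
    error; it's kept abstract here so the sketch stays database-free.
    Returns True once the DDL succeeds, False if all retries fail.
    """
    for attempt in range(retries):
        try:
            # lock_timeout makes Postgres abort the statement if it
            # waits longer than this for its lock, instead of queueing.
            execute(f"SET lock_timeout = '{lock_timeout}';")
            execute(ddl)
            return True
        except Exception:
            # Back off before retrying so we don't hammer a busy table.
            time.sleep(backoff * (attempt + 1))
    return False
```

The key property is that a failed attempt releases the lock queue immediately, so normal traffic proceeds while we wait to retry.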