Zero-downtime schema migrations in PostgreSQL: our approach and pitfalls
Chen Chowdhury
470 views
We're currently doing zero-downtime schema migrations in Postgres using a multi-deploy strategy:

1. Add a nullable column.
2. Deploy code that dual-writes to both the old and new columns.
3. Backfill the new column.
4. Switch reads to the new column.
5. In a later deploy, drop the old column.

This works, but it's slow and prone to lock contention, especially on our largest tables with millions of rows. A recent migration took nearly a week to complete across all stages, between backfill performance and making sure we didn't cause deadlocks.

I'm curious whether there are better, more efficient ways to do this. Has anyone adopted tools like `gh-ost` or `pt-online-schema-change` (or Postgres equivalents, since those two are MySQL-specific), or built custom tooling that makes this less painful? I'm particularly interested in approaches that minimize lock times and simplify backfills on large datasets.
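For concreteness, here's roughly what one of these migrations looks like on our side, stripped down to SQL. Table and column names (`orders`, `customer_ref`, `legacy_customer_id`) are made up for illustration, and the dual-write step lives in application code rather than in the database:

```sql
-- Deploy 1: add the new column as nullable with no default.
-- This is metadata-only in Postgres, so it doesn't rewrite the table.
ALTER TABLE orders ADD COLUMN customer_ref bigint;

-- Deploy 2: application code starts dual-writing old and new columns.

-- Between deploys: backfill in small batches so each UPDATE holds row
-- locks only briefly. Run this in a loop until it reports 0 rows updated.
UPDATE orders
SET customer_ref = legacy_customer_id
WHERE id IN (
    SELECT id
    FROM orders
    WHERE customer_ref IS NULL
      AND legacy_customer_id IS NOT NULL
    ORDER BY id
    LIMIT 5000
    FOR UPDATE SKIP LOCKED   -- don't block on rows the app is writing
);

-- Deploy 3: application reads switch to customer_ref.

-- Deploy 4: drop the old column. lock_timeout makes the ALTER give up
-- instead of queueing behind long-running queries and stalling all
-- other access to the table.
SET lock_timeout = '2s';
ALTER TABLE orders DROP COLUMN legacy_customer_id;
```

The `SKIP LOCKED` batching is what keeps the backfill from deadlocking against live traffic, but it's also what makes the whole thing take days on big tables, which is the part I'd most like to improve.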
5 comments