
Zero-downtime schema migrations in PostgreSQL: our approach and pitfalls

Aisha Sato
445 views
Our team has a well-defined multi-stage deployment process for zero-downtime PostgreSQL schema migrations on critical tables. It usually involves adding a nullable column, deploying code to dual-write to both the old and new columns, backfilling data, deploying code to switch reads to the new column, and finally dropping the old column.

While this works, it's slow and complex, and we sometimes run into lock contention on very large tables during the backfill or the read switch. We're always looking for ways to improve. What are some of the more advanced or less painful strategies people use for zero-downtime schema migrations on high-traffic PostgreSQL databases, especially with millions or billions of rows?
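For context on the lock-contention point: one common mitigation during the backfill stage is to update in small primary-key-range batches, each in its own short transaction, so row locks are held only briefly. Here's a minimal sketch of that idea; the table and column names (`users`, `old_email`, `new_email`) are purely illustrative, not from our actual schema:

```python
def backfill_batches(table, old_col, new_col, max_id, batch_size=10_000):
    """Yield UPDATE statements that copy old_col into new_col in small
    ID-range batches. Each statement touches at most batch_size rows,
    so a driver running one statement per transaction holds row locks
    only briefly and leaves gaps for concurrent writers."""
    start = 1
    while start <= max_id:
        end = min(start + batch_size - 1, max_id)
        # The "IS NULL" guard makes the backfill idempotent and skips
        # rows the dual-write path has already populated.
        yield (
            f"UPDATE {table} SET {new_col} = {old_col} "
            f"WHERE id BETWEEN {start} AND {end} AND {new_col} IS NULL;"
        )
        start = end + 1

# A driver would execute each statement in its own transaction via
# psycopg or similar, optionally sleeping briefly between batches:
for stmt in backfill_batches("users", "old_email", "new_email", max_id=25_000):
    print(stmt)
```

In practice you'd also want the driver to pause between batches and watch replication lag, but the core trick is just keeping each transaction small.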
7 comments

Comments

