Drop indices concurrently on background updates (#18091)
Otherwise these drops can race with other long-running queries and lock out
all other queries.

This caused problems in v1.22.0: we added an index to the `events` table
in #17948, but the index creation got interrupted, so the next time the
background update ran we needed to delete the half-finished index. However,
that delete got blocked behind some long-running queries and then locked all
other queries out (stopping workers from even starting).
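The fix amounts to issuing `DROP INDEX CONCURRENTLY` instead of a plain `DROP INDEX`, so the drop waits for in-flight queries without taking an `ACCESS EXCLUSIVE` lock that would queue behind them and block everyone else. A minimal sketch of the kind of statement the change produces; `drop_index_sql` is a hypothetical helper for illustration, not Synapse's actual code:

```python
import re


def drop_index_sql(index_name: str, concurrently: bool = True) -> str:
    """Build a DROP INDEX statement for PostgreSQL.

    CONCURRENTLY makes the drop wait for conflicting queries to finish
    instead of taking an ACCESS EXCLUSIVE lock that would queue behind
    them and block all other access to the table.
    """
    # Index names here come from our own schema definitions, but
    # validate them anyway rather than interpolating blindly.
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", index_name):
        raise ValueError(f"invalid index name: {index_name!r}")
    keyword = "CONCURRENTLY " if concurrently else ""
    return f"DROP INDEX {keyword}IF EXISTS {index_name}"
```

Note that PostgreSQL refuses to run `DROP INDEX CONCURRENTLY` inside a transaction block, so the connection must be in autocommit mode when the statement executes.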
erikjohnston authored Jan 20, 2025
1 parent 24c4d82 commit 48db0c2
Showing 2 changed files with 3 additions and 2 deletions.
1 change: 1 addition & 0 deletions changelog.d/18091.bugfix
@@ -0,0 +1 @@
Fix rare race where on upgrade to v1.22.0 a long running database upgrade could lock out new events from being received or sent.
4 changes: 2 additions & 2 deletions synapse/storage/background_updates.py
@@ -789,7 +789,7 @@ def create_index_psql(conn: "LoggingDatabaseConnection") -> None:
# we may already have a half-built index. Let's just drop it
# before trying to create it again.

-                sql = "DROP INDEX IF EXISTS %s" % (index_name,)
+                sql = "DROP INDEX CONCURRENTLY IF EXISTS %s" % (index_name,)
logger.debug("[SQL] %s", sql)
c.execute(sql)

@@ -814,7 +814,7 @@ def create_index_psql(conn: "LoggingDatabaseConnection") -> None:

if replaces_index is not None:
# We drop the old index as the new index has now been created.
-                sql = f"DROP INDEX IF EXISTS {replaces_index}"
+                sql = f"DROP INDEX CONCURRENTLY IF EXISTS {replaces_index}"
logger.debug("[SQL] %s", sql)
c.execute(sql)
finally:
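PostgreSQL will not run `DROP INDEX CONCURRENTLY` (or `CREATE INDEX CONCURRENTLY`) inside a transaction block, which is presumably why `create_index_psql` wraps its work in the `try`/`finally` visible above: autocommit is enabled for the duration and restored afterwards. A minimal sketch of that pattern, using a hypothetical `FakeConnection` stand-in rather than a real psycopg2 connection:

```python
class FakeConnection:
    """Stand-in for a psycopg2-style connection, for illustration only."""

    def __init__(self) -> None:
        self.autocommit = False
        self.executed: list[str] = []

    def set_session(self, autocommit: bool) -> None:
        self.autocommit = autocommit

    def execute(self, sql: str) -> None:
        # Real PostgreSQL raises an error if DROP/CREATE INDEX
        # CONCURRENTLY runs inside a transaction block; mimic that.
        if "CONCURRENTLY" in sql and not self.autocommit:
            raise RuntimeError("CONCURRENTLY cannot run in a transaction block")
        self.executed.append(sql)


def drop_index_concurrently(conn: FakeConnection, index_name: str) -> None:
    # Enable autocommit so the statement runs outside a transaction,
    # then restore transactional mode even if the drop fails.
    conn.set_session(autocommit=True)
    try:
        conn.execute(f"DROP INDEX CONCURRENTLY IF EXISTS {index_name}")
    finally:
        conn.set_session(autocommit=False)
```

The trade-off is that a concurrent drop is slower and cannot be rolled back, but it never blocks readers and writers of the table, which is exactly what this fix needs.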
