When a client hits `/messages`, we try to get earlier messages from the DB. If we can't get enough, we backfill from other servers. Whenever we hit other servers, we need to make sure we protect ourselves from their bugs/problems.
If a remote server sends a duplicate event (same event ID/depth) for whatever reason, we insert the event into the `syncapi_output_room_events_topology` table, which correctly de-dupes based on `ON CONFLICT (topological_position, room_id) DO UPDATE SET event_id = $1`, but the request that initiated the backfill still receives the duplicate event in its response.
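For illustration, here's a rough sketch of what that upsert looks like (Go with an embedded SQL query, Dendrite-style). The column list and function name are assumptions; only the `ON CONFLICT` clause is taken from the actual query:

```go
package sketch

import (
	"context"
	"database/sql"
)

// Assumed column list; the real schema may differ. Only the ON CONFLICT
// clause below is copied from the existing query.
const insertEventInTopologySQL = `
INSERT INTO syncapi_output_room_events_topology (event_id, topological_position, room_id)
VALUES ($1, $2, $3)
ON CONFLICT (topological_position, room_id) DO UPDATE SET event_id = $1
`

// insertEventInTopology upserts an event at a topological position. If some
// event already occupies that (position, room) slot, the row is overwritten
// rather than duplicated, so the DB only ever holds one event per position
// per room.
func insertEventInTopology(ctx context.Context, db *sql.DB, eventID string, pos int64, roomID string) error {
	_, err := db.ExecContext(ctx, insertEventInTopologySQL, eventID, pos, roomID)
	return err
}
```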
When we know we need to backfill, we should do so and then re-run the `/messages` request to pull from the DB, rather than return whatever events the remote server gave us, as the DB has extra checks in place like the aforementioned de-dupe logic.
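Sketching the proposed flow (all of the helper names and types here are made up, not the actual sync API functions):

```go
package sketch

import "context"

// Event is a hypothetical stand-in for the sync API's event type.
type Event struct{ ID string }

// loadFromDB and backfillFromRemotes are hypothetical stand-ins for the
// existing DB query and federation backfill paths.
func loadFromDB(ctx context.Context, roomID, from string, limit int) ([]Event, error) { return nil, nil }
func backfillFromRemotes(ctx context.Context, roomID, from string, missing int) error { return nil }

// onMessagesRequest answers /messages from the DB only: if the DB can't
// satisfy the request, it backfills from remote servers and then re-runs the
// same DB query, so remote events always pass through our de-dupe/ordering
// logic before being returned to the client.
func onMessagesRequest(ctx context.Context, roomID, from string, limit int) ([]Event, error) {
	events, err := loadFromDB(ctx, roomID, from, limit)
	if err != nil {
		return nil, err
	}
	if len(events) >= limit {
		return events, nil
	}
	// Backfill only populates the DB; never return the remote response directly.
	if err := backfillFromRemotes(ctx, roomID, from, limit-len(events)); err != nil {
		return nil, err
	}
	return loadFromDB(ctx, roomID, from, limit)
}
```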
This pattern of "hit remote servers to get data then re-run our internal logic which we trust" is probably something we should do in more places too!