Client has reported that recovery of the WDB after an intraday restart is much slower with writedownmode:`partbyenum than with writedownmode:`default.
We should verify this (comparing against writedownmode:`partbyattr as well) and see if there's anything we can do to improve recovery speed.
Initial thoughts on this (based on the logs only; I haven't looked at the code yet, so I'll update again tomorrow once I've had a look):
I would expect `partbyenum and `partbyattr to be significantly slower than `default.
On startup the WDB replays the messages for the day. During this replay there is logic to save the data down to the wdbhdb directory whenever the row count surpasses a set amount (mine was the default FSP of 100,000 rows).
writedownmode:`default writes to a single partition every 100,000 rows.
writedownmode:`partbyenum (and writedownmode:`partbyattr) write to several partitions every 100,000 rows: in my case I had 10 distinct syms, so each 100,000-row batch was written to 10 partitions (sketched below).
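To make the difference concrete, here is a minimal sketch of one 100,000-row batch under each mode. This is not the TorQ code: the wdbhdb path, the trade table and the per-sym directory suffix are illustrative assumptions (the real `partbyenum presumably keys the suffix on the sym enumeration; the sym name is used here for readability).

```q
/ Minimal sketch, NOT the TorQ implementation - names and layout assumed
dir:`:wdbhdb;                                 / hypothetical writedown dir
n:100000;
syms:neg[10]?`4;                              / 10 distinct dummy syms
t:([]sym:n?syms;time:.z.p+til n;price:n?100f);

/ writedownmode:`default - one splayed write for the whole batch
(hsym `$"wdbhdb/",string[.z.d],"/trade/") set .Q.en[dir] t;

/ writedownmode:`partbyenum style - one splayed write per distinct sym,
/ so 10 distinct syms means 10 directories touched per batch
{[s](hsym `$"wdbhdb/",string[.z.d],"_",string[s],"/trade/") set
  .Q.en[dir] select from t where sym=s} each distinct t`sym;
```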
I had two tables with 47 million rows each, and two tables with 9 million rows each.
- default -> wrote to 1,120 partitions on recovery
- partbyenum -> wrote to 11,200 partitions on recovery
- partbyattr -> the same as partbyenum in my setup, as my parted column is my sym column
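Those partition counts line up with the row counts: 112 million rows replayed in 100,000-row batches gives 1,120 flushes, and with 10 distinct syms each flush under `partbyenum touches 10 partitions:

```q
q)sum (47000000 47000000 9000000 9000000) div 100000  / flushes during replay
1120
q)10*1120                        / one write per distinct sym per flush
11200
```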
According to the logs, writing a single batch of 100,000 rows took:
- default: 0.0055 seconds
- partbyenum: 0.0099 seconds
This was with only 10 syms. In a more realistic setting, with many more distinct syms per batch, I can see this being significantly slower.
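As a rough extrapolation from those per-batch timings (assuming the per-batch cost stays flat across the replay, which it may not), the writedown portion of recovery alone comes to roughly:

```q
q)1120*0.0055        / approx total replay write time, default (seconds)
6.16
q)1120*0.0099        / approx total replay write time, partbyenum (seconds)
11.088
```

And the gap should widen as the number of distinct syms, and therefore the number of partitions touched per flush, grows.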