Working hypothesis: Chantelle's dataset has many more features than previous datasets, so it takes up significantly more memory when initially loaded. This causes the out-of-memory error and may also be causing the strange behavior where the URL does not update.
I think this is what's causing it... there's an Array allocated that has 4x the elements of the other arrays but is taking up ~90x the space? Note that the bounds data has 4x the elements of a feature array, and there are 91 features in the dataset...
I'm not entirely sure how to read this, so I'll need to do some more digging.
Update:
Shallow size is the size of the object itself in memory.
Retained size is the size of the object plus all of its dependent objects (i.e., everything that would be freed if the object itself were garbage collected).
Currently, JSArrayBufferData is taking up 187 MB, likely due to all of the Parquet files being loaded into memory.
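For context, here's a minimal sketch (hypothetical names, not the actual loader code) of the kind of object graph the snapshot is describing: the dataset wrapper itself has a tiny shallow size, but its retained size includes the backing ArrayBuffer of every typed array it holds, and those backing buffers are what show up under JSArrayBufferData.

```typescript
// Hypothetical sketch of a dataset's in-memory layout (not the real loader code).
// Each feature column is a Float32Array; the backing ArrayBuffers are what appear
// as JSArrayBufferData in the heap snapshot.
type FeatureColumn = Float32Array;

interface DatasetBuffers {
  features: Map<string, FeatureColumn>; // e.g. 91 columns x 206,705 objects
  bounds: Float32Array;                 // 4 values per object
  centroids: Float32Array;              // 2 values per object
  times: Uint32Array;
  tracks: Uint32Array;
  outliers: Uint8Array;
}

// The DatasetBuffers object itself is tiny (shallow size: a handful of pointers),
// but its retained size includes every typed array's backing buffer, because
// freeing it would free all of them.
function retainedBytes(data: DatasetBuffers): number {
  let total =
    data.bounds.byteLength +
    data.centroids.byteLength +
    data.times.byteLength +
    data.tracks.byteLength +
    data.outliers.byteLength;
  for (const column of data.features.values()) {
    total += column.byteLength;
  }
  return total;
}
```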
Napkin math:
206,705 objects in the dataset x 91 features x 4 bytes per float32 = 75,240,620 bytes
tracks + times + centroids + bounds + outliers add roughly 21 more bytes of data per object = 4,340,805 bytes
79,581,425 bytes (~80 MB) total -> likely there's some array duplication that I can make more efficient? (Or this is from the images...)
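As a sanity check, here's the same estimate as a quick sketch (the per-object byte counts are the assumptions from above, not measured values):

```typescript
// Rough memory estimate for the feature data alone (assumed sizes, not measured).
const numObjects = 206_705;
const numFeatures = 91;
const bytesPerF32 = 4;

// Feature columns: one float32 value per object per feature.
const featureBytes = numObjects * numFeatures * bytesPerF32; // 75,240,620

// tracks + times + centroids + bounds + outliers: ~21 extra bytes per object.
const metadataBytes = numObjects * 21; // 4,340,805

console.log(`Estimated total: ${featureBytes + metadataBytes} bytes`); // ~79.6 MB
```

That estimate is still well under the 187 MB held by JSArrayBufferData, which is consistent with some arrays being duplicated (or with the images accounting for the difference).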
ShrimpCryptid changed the title from "Loaded datasets do not update URL" to "Very large datasets encounter memory issues - loaded datasets do not update URL" on Dec 3, 2024.