-
Good questions. As currently designed, the memory mapping is a required step. The blow-up in size happens partially because of conversion to float: Caiman does lots of floating point operations on the images. We have had some discussion of whether we could get away with a lower bit-depth representation but haven't pushed hard on this (in particular see #1037). If your data become too large to handle, I recommend switching to the online algorithms. For a related older discussion also see #750
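To make the float-conversion part of the size blow-up concrete, here is a small sketch in plain NumPy (not part of Caiman itself): converting 16-bit camera frames to float32 alone doubles the bytes per pixel, before any additional copies the pipeline makes.

```python
import numpy as np

# A stack of "raw" frames as they might come off a 16-bit camera.
frames = np.zeros((100, 512, 512), dtype=np.uint16)

# Floating-point conversion for pixel-wise arithmetic:
# float32 uses 4 bytes per pixel instead of 2.
as_float = frames.astype(np.float32)

print(frames.nbytes)    # bytes at uint16
print(as_float.nbytes)  # twice as many at float32
```

Any further increase beyond 2x (as in the 17.6 GB to 70.3 GB case below) comes from additional on-disk copies made during processing, not from the dtype alone.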
-
Hi. I've been using the Matlab version of CaImAn, but I've finally migrated to the Python version.
I'm confused by some of the minor differences, but my main question is about memory mapping.
Is this necessary (or even preferable) if all the data to be processed can be loaded into RAM?
I've been experimenting with source extraction by rewriting demo_caiman_basic.py, but so far the memory-mapping step is costing a lot of time, storage space (17.6 GB of raw tiffs becomes 70.3 GB of memmapped files), and network overhead (since my data are on network storage servers).
Since my datasets occasionally contain only a small number of neurons, I think source extraction on the full FOV is preferable. The data are already motion-corrected.
If the extraction can be performed without memory mapping, how do I do it? If not, is there a better approach?
I wondered whether memory mapping would be skipped when I set the cnmf_stride and rf parameters to None, but the program still created memory-mapped files.
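For context on why the pipeline insists on this step even when the data fit in RAM: a memory-mapped file lets downstream code (and parallel workers) read arbitrary slices on demand without holding the whole movie in memory. A minimal sketch with plain `numpy.memmap` (not Caiman's wrapper, and the file name is just illustrative):

```python
import os
import tempfile
import numpy as np

# Write a small fake "movie" (T x X x Y) to disk as raw float32.
path = os.path.join(tempfile.mkdtemp(), 'movie.mmap')
movie = np.arange(10 * 4 * 4, dtype=np.float32).reshape(10, 4, 4)
movie.tofile(path)

# Re-open as a memmap: pages are read from disk only when touched,
# so accessing one frame does not load the whole file.
mm = np.memmap(path, dtype=np.float32, mode='r', shape=(10, 4, 4))
frame5 = np.array(mm[5])  # materializes just this one frame
print(frame5.sum())
```

Setting rf/stride to None changes the patch decomposition, not the storage format, which would explain why the memmap files were still produced.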
Thanks for developing a great tool and its community.
Best.