Hi,
Kilosort4 seems to change very quickly, and I can't seem to find a combination of spikeinterface/kilosort4 versions that works without problems (the problems just change depending on the version combination). It would be really nice if the docs stated which kilosort version each spikeinterface release has been tested with.
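In the meantime I've started recording the exact version pair for every run, so I can at least reproduce whichever combination ends up working. This is just a small helper using importlib.metadata from the standard library, nothing spikeinterface-specific:

```python
# Log the installed spikeinterface / kilosort versions before sorting,
# so each result can be traced back to an exact version pair.
from importlib.metadata import version

print("spikeinterface:", version("spikeinterface"))
print("kilosort:", version("kilosort"))
```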
As for my current compatibility issue, I was trying spikeinterface 0.102.2 with kilosort 4.0.32 (released April 16th, but there have already been 4 new PyPI releases since then...). I have 2 problems:
- I had to change the spikeinterface code because the API of `cluster_spikes` has changed
- The code tries to allocate ~150 GB of memory at the saving step (so close to working...)
The relevant code is below and the spikeinterface log is attached. As I see it, my options are the following:
- Use an older version of kilosort (that should at least solve problem 1) and just hope that it also solves problem 2). But which version should I use? (I had already tried another version pair and ran into other problems.)
- Just wait for a new version of spikeinterface and hope for the best
- Attempt to solve problem 2) by modifying the code myself, but this is not maintainable long term.
```python
# MY MODIFICATION
clusters = cluster_spikes(**cluster_spikes_kwargs)
try:
    clu, Wall = clusters
except Exception:
    clu, Wall, _, _ = clusters  # Kilosort now returns a 4-element tuple...
# End of my modification

if params["skip_kilosort_preprocessing"]:
    ops["preprocessing"] = dict(
        hp_filter=torch.as_tensor(np.zeros(1)),
        whiten_mat=torch.as_tensor(np.eye(recording.get_num_channels())),
    )

print("Before save sorting", st.shape, clu.shape, Wall.shape)  # Just to see

# This fails (trying to allocate ~150 GB): we have ~15e6 spikes and nearly 400 channels
_ = save_sorting(
    ops=ops,
    results_dir=results_dir,
    st=st,
    clu=clu,
    tF=tF,
    Wall=Wall,
    imin=bfile.imin,
    tic0=tic0,
    save_extra_vars=save_extra_vars,
    save_preprocessed_copy=save_preprocessed_copy,
)
```
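For what it's worth, a slightly more explicit variant of the same workaround would check the tuple length instead of catching the unpacking error. This is only a sketch; I haven't looked into what the extra values returned by the newer `cluster_spikes` actually are, so they are simply discarded here as well:

```python
clusters = cluster_spikes(**cluster_spikes_kwargs)
if len(clusters) == 2:
    # older Kilosort 4 API: (clu, Wall)
    clu, Wall = clusters
else:
    # newer Kilosort 4 returns extra values; keep only the first two
    clu, Wall = clusters[:2]
```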
```python
#### This happens in kilosort save_to_phy, line 420 of io.py
pc_features, pc_feature_ind = make_pc_features(
    ops, spike_templates, spike_clusters, tF
)

#### Which (indirectly) calls get_data_cpu in clustering_qr.py
dd = torch.zeros((nspikes, nchan, nfeatures))  # which is too big for memory
```
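Back-of-the-envelope, that dense allocation is consistent with the ~150 GB error. Assuming nfeatures is 6 PCs per channel (an assumption on my part, I haven't checked the value used here) and float32 (the torch.zeros default):

```python
# Rough size of the dd tensor for my recording; nfeatures=6 is assumed.
nspikes = 15_000_000   # ~15e6 spikes
nchan = 400            # ~400 channels
nfeatures = 6          # assumed number of PC features per channel
bytes_per_float32 = 4  # torch.zeros allocates float32 by default

size_gb = nspikes * nchan * nfeatures * bytes_per_float32 / 1e9
print(f"dd alone would need ~{size_gb:.0f} GB")  # ~144 GB
```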