Description
I noticed that the methods in `spikeinterface/preprocessing/normalize_scale.py` don't appear to keep track of changes to the gain or offset.
I understand that for most use cases it might not be useful to track a change in offset (e.g., demeaning traces), but for scaling the traces, I think it would be helpful to update the gain so that `return_scaled=True` remains consistent.
For now, the following code works for me:

```python
rec_scaled = si.scale(recording, gain=scale_factor, offset=0.0)
new_gain = recording.get_channel_gains() / scale_factor
rec_scaled.set_channel_gains(
    gains=new_gain, channel_ids=recording.get_channel_ids())
```
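To sanity-check the arithmetic behind this workaround, here is a small numpy sketch (the 0.195 µV/bit gain and the 64x scale factor are illustrative assumptions, not values from the library):

```python
import numpy as np

original_gain = 0.195   # hypothetical uV per ADC step
scale_factor = 64.0     # hypothetical scaling applied to the traces

raw = np.array([-512.0, 0.0, 511.0], dtype=np.float32)

# Physical values before scaling: raw * original_gain
before = raw * original_gain

# After multiplying the traces by scale_factor and dividing the
# stored gain by the same factor, the recovered physical values
# should be unchanged -- this is what keeps return_scaled consistent.
scaled_traces = raw * scale_factor
new_gain = original_gain / scale_factor
after = scaled_traces * new_gain

assert np.allclose(before, after)
```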
Interestingly, `scale_to_uV()` does update the gain and offset, so `return_scaled=True` is consistent:

```python
def scale_to_uV(recording: BasePreprocessor) -> BasePreprocessor:
    [...]
    scaled_to_uV_recording.set_channel_gains(gains=1.0)
    scaled_to_uV_recording.set_channel_offsets(offsets=0.0)
    return scaled_to_uV_recording
```
Nevertheless, I think it would be helpful if `BaseRecording` were able to keep track of how the data was scaled by any of the methods in `normalize_scale.py`.
Where I think scaling is useful, and where I plan to use it, is to make better use of the int16 bit range.
The ADC in Neuropixels 1.0 is 10-bit and in Neuropixels 2.0 is 12-bit, so you can scale NP1 data by 64x (2^(16-10)) and NP2 data by 16x (2^(16-12)) with no risk of clipping when casting back to int16.
Given that most preprocessing is done after casting to float32, but it makes sense to cast back to int16 before writing to disk, I think scaling after preprocessing but before casting back to int16 makes a lot of sense.
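As a quick numerical check of the headroom claim, this sketch simulates signed 10-bit samples (the NP1 case), scales them by the available headroom, and verifies that nothing clips when casting back to int16 (the data here is random and purely illustrative):

```python
import numpy as np

bits_adc = 10                       # Neuropixels 1.0 ADC resolution
headroom = 2 ** (16 - bits_adc)     # 64x fits int16 without clipping

# Simulated signed 10-bit samples in [-512, 511]
rng = np.random.default_rng(0)
traces = rng.integers(-2 ** (bits_adc - 1), 2 ** (bits_adc - 1),
                      size=10_000).astype(np.int16)

# Preprocess in float32, scale up, then cast back to int16
scaled = (traces.astype(np.float32) * headroom).astype(np.int16)

# Every scaled value stays within the int16 range:
# 511 * 64 = 32704 <= 32767 and -512 * 64 = -32768
assert scaled.max() <= np.iinfo(np.int16).max
assert scaled.min() >= np.iinfo(np.int16).min
```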
What are your thoughts?