
[ENH, MRG] Add intracranial electrode localization GUI #9586

Merged: 2 commits merged into mne-tools:main on Sep 17, 2021

Conversation

@alexrockhill (Contributor) commented Jul 19, 2021

Add a GUI that allows users to find their intracranial electrode montage from a CT, an MRI, and the Freesurfer recon-all of that MRI, plus an info object that supplies the channel names to be located and is returned with a modified montage.

Closes #9559

Very rough version, no review yet.

@larsoner (Member)

a raw object that supplies the names to be found and is returned with a modified montage.

It should be slightly more general to make this an instance of info that gets modified in-place.

Very rough version, no review yet.

Let me know when to look and test! Ideally you would not need to tell me what to do and I can figure it out from the docs/tutorial etc.

@alexrockhill (Contributor, Author)

It should be slightly more general to make this an instance of info that gets modified in-place.

Yeah, that's how it is in the code; the comments were a bit out of date.



def test_ieeg_elec_locate_gui_display():
    mne.gui.locate_ieeg(subject, subjects_dir=subjects_dir)
Review comment (Member):

Can you mimic clicks? I suspect this test leads to limited coverage, no?

@alexrockhill (Author) replied:

Yes, hopefully that will be done soon. I've been sorting out the API (e.g., the other PR), but this is next.

@alexrockhill (Contributor, Author)

Still a work in progress, just for a demo, no need to review

@alexrockhill (Contributor, Author) commented Jul 27, 2021

Here's a script for use on the misc data

import os.path as op
import numpy as np

import nibabel as nib

import mne

misc_path = mne.datasets.misc.data_path()
subject = 'sample_seeg'
subjects_dir = op.join(misc_path, 'seeg')

# load the CT and the intracranial recording from the misc dataset
CT_orig = nib.load(op.join(misc_path, 'seeg', 'sample_seeg_CT.mgz'))
raw = mne.io.read_raw(op.join(misc_path, 'seeg', 'sample_seeg_ieeg.fif'))

# align the CT to the Freesurfer T1 using a pre-computed registration affine
T1 = nib.load(op.join(misc_path, 'seeg', 'sample_seeg', 'mri', 'T1.mgz'))
reg_affine = np.array([
    [0.99270756, -0.03243313, 0.11610254, -133.094156],
    [0.04374389, 0.99439665, -0.09623816, -97.58320673],
    [-0.11233068, 0.10061512, 0.98856381, -84.45551601],
    [0., 0., 0., 1.]])
CT_aligned = mne.transforms.apply_volume_registration(CT_orig, T1, reg_affine)

# get the head -> MRI transform and launch the localization GUI
trans = mne.coreg.estimate_head_mri_t(subject, subjects_dir)
gui = mne.gui.locate_ieeg(raw.info, trans, CT_aligned, subject=subject,
                          subjects_dir=subjects_dir)
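
As a rough follow-up (not part of the original script; it assumes contacts were actually marked in the GUI and that raw.info is modified in place, as discussed above):

# pull the marked positions back out of raw.info as a montage
montage = raw.get_montage()
print(montage)  # should list the sEEG contact positions that were marked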

@alexrockhill (Contributor, Author)

I didn't find anything that helpful in mne.viz.plot_bem but I figured out the transforms by following this https://github.com/moloney/dcmstack/blob/master/src/dcmstack/dcmstack.py#L246:L252

@larsoner (Member) commented Jul 28, 2021

I didn't find anything that helpful in mne.viz.plot_bem

It produces outputs like this:

https://mne.tools/stable/auto_tutorials/forward/30_forward.html#sphx-glr-auto-tutorials-forward-30-forward-py#compute-and-visualize-bem-surfaces

The slicing order and labeling of each axis has already been done in there somewhere. Following the trail:

  1. return _plot_mri_contours(mri_fname, surfaces, src, orientation, slices,

  2. mne-python/mne/viz/misc.py, lines 322 to 323 (at 474da96):

     (x, y, z), (flip_x, flip_y, flip_z), order = _mri_orientation(
         nim, orientation)

  3. The body of _mri_orientation:

     import nibabel as nib
     _validate_type(img, nib.spatialimages.SpatialImage)
     _check_option('orientation', orientation, ('coronal', 'axial', 'sagittal'))
     axcodes = ''.join(nib.orientations.aff2axcodes(img.affine))
     flips = {o: (1 if o in axcodes else -1) for o in 'RAS'}
     axcodes = axcodes.replace('L', 'R').replace('P', 'A').replace('I', 'S')
     order = dict(
         coronal=('R', 'S', 'A'),
         axial=('R', 'A', 'S'),
         sagittal=('A', 'S', 'R'),
     )[orientation]
     xyz = tuple(axcodes.index(c) for c in order)
     flips = tuple(flips[c] for c in order)
     return xyz, flips, order

And to see how data should be manipulated to plot properly, you want to look in _plot_mri_contours, though it gets a bit complicated:

for ai, (ax, sl, lower, upper) in enumerate(zip(

Ideally you could just use _plot_mri_contours directly, for example, and improve it to have any additional functionality you need, probably via refactoring.

@larsoner (Member)

... and note that doing it this way, the affine / vox2ras / vox2ras_tkr are those of the original volume, no messing around needed. You just plot such that you're in canonical orientation, and if someone clicks, you figure out which voxel that was in the volume (based on which slice you're plotting and whether or not it's flipped) and apply the affine/vox2ras/vox2ras_tkr from the original volume.

@alexrockhill (Contributor, Author) commented Jul 28, 2021

for ai, (ax, sl, lower, upper) in enumerate(zip(

Ideally you could just use _plot_mri_contours directly, for example, and improve it to have any additional functionality you need, probably via refactoring.

So I followed it to that point too, but what I really need is the affine transformation, as you were saying. Those are all really convoluted ways of getting what nibabel.orientations.aff2axcodes and nibabel.orientations.axcodes2ornt give you in one or two lines, and they should be refactored. I'm trying to focus on the GUI right now, and refactoring plot_bem when it doesn't contribute to the GUI doesn't seem like a good use of time until after the main things are finished, at which point I would be happy to go back.

This part is the key part that is kind of in plot_bem, but it's spread over many lines of code in a very confusing way:

# (assuming img is a nibabel image whose data and current orientation are)
orig_data = np.asarray(img.dataobj)
ornt = nib.orientations.io_orientation(img.affine)
# reorient the data to RAS and get the reoriented-voxel -> surface RAS affine
ras_ornt = nib.orientations.axcodes2ornt('RAS')
ornt_trans = nib.orientations.ornt_transform(ornt, ras_ornt)
img_data = nib.orientations.apply_orientation(orig_data, ornt_trans)
orig_mgh = nib.MGHImage(orig_data, img.affine)
aff_trans = nib.orientations.inv_ornt_aff(ornt_trans, img.shape)
vox_ras_t = np.dot(orig_mgh.header.get_vox2ras_tkr(), aff_trans)

Again, I'd be happy to go back and refactor it in plot_bem, but it's not going to help the GUI.

Just to make the point clear: flip_x, which is an output of _mri_orientation in _freesurfer.py, is used in four separate places. The code I need is the above, and refactoring plot_bem is going to take a lot of time and unrelated testing of the BEM plotting, without any crossover to the GUI.

@alexrockhill (Contributor, Author)

I'll try a separate PR to refactor the BEM code so that it can share the code above; maybe that will make things easier.

@larsoner (Member)

So I followed it to that point too, but what I really need is the affine transformation, as you were saying... This part is the key part that is kind of in plot_bem but it's spread over many lines of code in a very confusing way

What I don't follow is what affine do you need? Why aren't you just using img.affine? What is your procedure for plotting data so that you need something else?

In the BEM code what we do is use the image's affine as is (if we need it), and then figure out how to slice into the volume to show standard orientations (sagittal, axial, coronal). To me this is a simple enough procedure and does what you should need for the GUI.

Perhaps if you describe conceptually what you do in the GUI and why you need some other affine it would help.

refactoring plot_bem when it doesn't contribute to the GUI doesn't seem like a good use of time until after the main things are finished at which point I would be happy to go back

The BEM code 1) resolves orientations so that they can be plotted sagittal/axial/coronal, 2) shows the slice number, 3) displays L/R/A/P/I/S to the sides, 4) has contours-from-surfaces built in, 5) has been unit-tested and user-tested to make sure that non-standard orientations are shown properly. These all seem like useful features for the GUI, no?

@larsoner (Member)

... maybe you're talking about the mapping from GUI click->RAS? For this I would just figure out the click->volume index i,j,k, which is easy enough to get from knowing the sag/cor/ax slice that was clicked, where it was clicked, and the flip/slicing that was used in the current plot. Once you have this volume i,j,k, you just dot it with img.affine / img.header.get_vox2ras_tkr.
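
A rough sketch of that mapping (names like axis_idx, slice_idx, and click_xy are hypothetical placeholders for what the GUI knows about the clicked view; this is not code from this PR):

import numpy as np
import nibabel as nib

img = nib.load('T1.mgz')  # the original, un-resampled volume
# say the axial view (axis_idx=2) is showing slice_idx, and the click landed
# at in-plane pixel coordinates click_xy (already un-flipped)
axis_idx, slice_idx, click_xy = 2, 128, (100., 120.)
ijk = np.insert(np.array(click_xy), axis_idx, slice_idx)
ras = nib.affines.apply_affine(img.affine, ijk)  # scanner RAS (mm)
ras_tkr = nib.affines.apply_affine(img.header.get_vox2ras_tkr(), ijk)  # surface RAS (mm)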

@alexrockhill (Contributor, Author)

What I don't follow is what affine do you need? Why aren't you just using img.affine? What is your procedure for plotting data so that you need something else?

I need an affine that goes from surface RAS to the RAS voxels of the reoriented image.

In the BEM code what we do is use the image's affine as is (if we need it), and then figure out how to slice into the volume to show standard orientations (sagittal, axial, coronal). To me this is a simple enough procedure and does what you should need for the GUI.

Perhaps if you describe conceptually what you do in the GUI and why you need some other affine it would help.

The BEM code 1) resolves orientations so that they can be plotted sagittal/axial/coronal, 2) shows the slice number, 3) displays L/R/A/P/I/S to the sides, 4) has contours-from-surfaces built in, 5) has been unit-tested and user-tested to make sure that non-standard orientations are shown properly. These all seem like useful features for the GUI, no?

Yes, that is what is needed for the GUI, but the flips and coordinate frames are a mess, and all of that functionality is already in the GUI.

I'll work on a PR to reorient the MRI surface first so that it's not reoriented four separate times.

@larsoner (Member)

I need an affine that goes from surface RAS to the RAS voxels of the reoriented image.

Ahh I think this is the sticking point. In the BEM code no explicit reorientation is done. The image is just sliced (and maybe flipped) as necessary to show properly oriented views. To me this is simple enough, and easy (enough) to know which voxel i/j/k is clicked, then use the standard img.affine to get RAS.

But if there is some way to reorient that doesn't actually involve resampling the image (the current code doesn't need to) and makes things simpler, that's great! (And the BEM code should benefit from this -- DRY, one right way to do a given thing, etc.!)

@alexrockhill (Contributor, Author)

Judging from the tests, I think something is incompatible on Windows with the way I've implemented PyQt; it looks like there are segfaults. Any idea what's going on, @larsoner or anyone else?

@alexrockhill (Contributor, Author)

OK, I pushed this because I needed it for some analysis now that the coordinate frames are all worked out, but it is almost feature complete.

Things still left to do:

  • Fix extra empty figure popping up from the renderer
  • Speed up initialization
  • Fix too many local maxima found (almost done)
  • Add tube for each automatically detected device with a sphere for which contact is currently selected
  • Add tests

I hope to be about a week away from being fully finished. We'll see though; some things can be quite finicky with this approach, so we should test it on Liberty's data too and make sure everything works. In general it would be nice to avoid parameters for contact size, spacing, etc., as those make things more confusing, but they can limit the number of false positives, so it takes some finagling to find a good balance.

@agramfort (Member)

@alexrockhill do you need feedback from me here?

@alexrockhill (Contributor, Author)

@alexrockhill do you need feedback from me here?

Now is fine. Or, since I'm adding tests, I will change a few more things and maybe find my own errors first over the next day or two, but it is very close to being done.

@larsoner (Member) commented Sep 8, 2021

upgrading pyvistaqt didn't seem to help ...

I would try in a fresh MNE environment created according to our current installation instructions. Also, if you're on an ARM macOS machine, then you probably need to use the i686 version of Anaconda rather than the native ARM one; I doubt all packages are available and working for ARM.

@jasmainak (Member) commented Sep 13, 2021

Okay finally got this to work. Thanks @larsoner !

[screenshot of the GUI]

Disclaimer, I have never done this myself. So, this is mostly a naive user perspective.

  1. Why is the CT/MRI image inside two rectangles?
  2. I can't understand the bottom right plot. When I click on a marker, is it supposed to center and mark the corresponding channel in bottom right? That doesn't seem to be happening
  3. Is there a way to turn off the overlay of the CT image over the MRI? Or does it only show CT?
  4. I tried changing the color of one channel. Didn't seem to work ...
  5. Can there be a button for help instead of just a keyboard shortcut? Ideally, folks should be able to run the GUI without reading too many instructions
  6. Wasn't clear to me what "snap to center" does
  7. If I try to "jump to a location" by editing the RAS coordinates, it does not work when I put a crazy location like 150. I guess it would be more intuitive if it had up/down arrows for each coordinate and these don't go past a certain number

@alexrockhill (Contributor, Author) commented Sep 15, 2021

Okay finally got this to work. Thanks @larsoner !

Disclaimer, I have never done this myself. So, this is mostly a naive user perspective.

  1. Why is the CT/MRI image inside two rectangles?

That's because the background is non-zero and the padding in matplotlib is zeros (black). It doesn't seem too distracting to me (you're zoomed in very close most of the time). Is it an issue for you?

  2. I can't understand the bottom right plot. When I click on a marker, is it supposed to center and mark the corresponding channel in bottom right? That doesn't seem to be happening

The bottom-right plot is view only (clicking in 3D doesn't make sense, you are clicking based on what is in the other three plots, the 3D one is mostly for identifying devices). Clicking and scrolling on the plot only moves the camera.

  3. Is there a way to turn off the overlay of the CT image over the MRI? Or does it only show CT?

The screenshot you sent is just of the CT. You can press 'b' to toggle the view of the brain. I should add back the help button though come to think of it.

  4. I tried changing the color of one channel. Didn't seem to work ...

To change the "device" of a contact, you have to change the color and then mark the contact. I guess you could change the group of unmarked contacts as well.

  5. Can there be a button for help instead of just a keyboard shortcut? Ideally, folks should be able to run the GUI without reading too many instructions

Yeah, I should add that back.

  6. Wasn't clear to me what "snap to center" does

It snaps to the center of an area of high intensity. The technical aspects are that it looks for a local maximum and then includes all monotonically decreasing voxels from that voxel above 50% of the maximal value (rough sketch at the end of this comment), but from a typical user perspective the first part is all you really need to know.

  7. If I try to "jump to a location" by editing the RAS coordinates, it does not work when I put a crazy location like 150. I guess it would be more intuitive if it had up/down arrows for each coordinate and these don't go past a certain number

If you put in something outside the range, it just resets to what it was before. That follows what Freeview does and I think it's not too hard to figure out with a few entries. It should be user-proof but I don't necessarily think you need warnings and errors for this when you can follow the existing format with small modifications to figure it out pretty easily.
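
For what it's worth, a minimal sketch of that snap-to-center idea (not the exact implementation in this PR; the monotonic-decrease check is omitted and the helper name is made up):

import numpy as np
from scipy import ndimage

def snap_to_center(ct_data, ijk, radius=5):
    """Snap a clicked voxel (i, j, k) toward the center of the nearby bright blob."""
    sl = tuple(slice(max(c - radius, 0), c + radius + 1) for c in ijk)
    patch = ct_data[sl]
    peak = np.unravel_index(np.argmax(patch), patch.shape)  # local maximum
    mask = patch >= 0.5 * patch[peak]  # voxels above 50% of the peak value
    center = ndimage.center_of_mass(patch * mask)  # intensity-weighted center
    return tuple(int(round(c)) + s.start for c, s in zip(center, sl))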

@jasmainak (Member)

The bottom-right plot is view only (clicking in 3D doesn't make sense, you are clicking based on what is in the other three plots, the 3D one is mostly for identifying devices). Clicking and scrolling on the plot only moves the camera.

Sorry, I should have said: when I click on the marker name in the right panel, not on the plot. I thought it would somehow highlight the marker in the bottom-right corner, but it does nothing.

@alexrockhill (Contributor, Author)

It is centered as the focal point in the view, but for forward compatibility with an automated algorithm (which would also have a currently selected contact) it doesn't have an indicator for the latest contact marked; you just have to go off what changed in the plot. This is because two markers would be confusing in the future, when the current auto-detected contact needs to be indicated.

@larsoner (Member)

That's because the background is non-zero and the padding in matplotlib is zeros (black). It doesn't seem too distracting to me (you're zoomed in very close most of the time). Is it an issue for you?

Can you systematically use/choose the border constant value as the median of the first/last i/j/k planes, np.median(np.concatenate([img[[0, -1]].ravel(), img[:, [0, -1]].ravel(), img[:, :, [0, -1]].ravel()])), rather than just using zero? Seems like that would be robust and look nicer.

The bottom-right plot is view only (clicking in 3D doesn't make sense, you are clicking based on what is in the other three plots, the 3D one is mostly for identifying devices). Clicking and scrolling on the plot only moves the camera.

Maybe a title for this widget "3D View" would make it clearer that this is just a view and not something that can be interacted with via clicks.

The screenshot you sent is just of the CT. You can press 'b' to toggle the view of the brain. I should add back the help button though come to think of it.

In general in UI design, anything that can be done with a keyboard shortcut should have a corresponding mouse-based control (e.g., checkbox, radio button, drop-down, etc.) when possible/reasonable. In this case I think a "Brain" checkbox or something might be good.

@larsoner (Member)

... also, aesthetically if the background of all the MRI images is black, the 3D image should also have a black background, unless it messes up the visualization of electrode colors.

@alexrockhill (Contributor, Author)

Can you systematically use/choose the border constant value as the median of the first/last i/j/k planes, np.median(np.concatenate([img[[0, -1]].ravel(), img[:, [0, -1]].ravel(), img[:, :, [0, -1]].ravel()])), rather than just using zero? Seems like that would be robust and look nicer.

-1, I think this is solving a problem that doesn't exist, and it's nice for me to know the boundaries of the CT so that you can see which parts got cut off, as sometimes they do.

Maybe a title for this widget "3D View" would make it clearer that this is just a view and not something that can be interacted with via clicks.

-1, I don't think that title really makes it clearer, and I think directions for that would be better placed in the tutorial. I do want to hold the user's hand, but if you don't read the instructions, I don't think the GUI has to have more instructions in it; ideally there would be as little text as possible.

In general in UI design, anything that can be done with a keyboard shortcut should have a corresponding mouse-based control (e.g., checkbox, radio button, drop-down, etc.) when possible/reasonable. In this case I think a "Brain" checkbox or something might be good.

Sure that sounds helpful.

@larsoner (Member)

-1, I think this is solving a problem that doesn't exist

Hmm... I don't like the way it looks now, and I think @jasmainak does not either, so I guess we both at least see it as an aesthetic issue.

it's nice for me to know the boundaries of the CT so that you can see which parts got cut off, as sometimes they do.

The way it's coded now, sometimes you'll have that information and other times you won't, depending on the normalization of the CT image. To me it's better to systematically fix the cval/border bug, and then actually add a bounding box rectangle using a color of your choosing (red?), rather than rely on the cval to do this for you.
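
A minimal sketch of that bounding-box idea (stand-in data; not code from this PR):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

ct_slice = np.random.rand(256, 256)  # stand-in for one CT slice
fig, ax = plt.subplots()
ax.imshow(ct_slice, cmap='gray')
ny, nx = ct_slice.shape
# outline the image extent in red so the CT boundary stays visible even when
# the padding value (cval) matches the image background
ax.add_patch(Rectangle((-0.5, -0.5), nx, ny, fill=False, edgecolor='red', lw=1))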

@larsoner (Member)

(But I think it's okay to fix this cval business in a follow-up PR, it's easy enough and should clearly show an isolated improvement / we can discuss once it's implemented if it's actually better)

@alexrockhill (Contributor, Author)

-1, I think this is solving a problem that doesn't exist

Hmm... I don't like the way it looks now, and I think @jasmainak does not either, so I guess we both at least see it as an aesthetic issue.

it's nice for me to know the boundaries of the CT so that you can see which parts got cut off, as sometimes they do.

The way it's coded now, sometimes you'll have that information and other times you won't, depending on the normalization of the CT image. To me it's better to systematically fix the cval/border bug, and then actually add a bounding box rectangle using a color of your choosing (red?), rather than rely on the cval to do this for you.

I'm fine with a bounding box; that seems like a reasonable solution. The point was just that where the image ends is important information. The CT isn't normalized, though, so it will usually be some non-zero low number.

@alexrockhill (Contributor, Author)

OK, I added buttons for help and show/hide brain. You can still only use keys to scroll slices; I'm not sure there's an easy place to put those six buttons, but you can always click instead of scrolling. Also, zoom is only via the scroll button or a keypress.

I didn't address the bounding box, but that would be nice to do in a follow-up.

@jasmainak (Member)

Sorry, I got lost with the bounding box discussion, but looking at the image it seemed to me that there was some affine registration issue; I might be wrong. I have seen those kinds of images when you do some registration and the corners become black because there are no pixels to copy from.

It might be worth taking inspiration from MMVT. Maybe you were also a contributor @alexrockhill ? ;-)

[screenshot from MMVT]

I like how in the sliced view the CT appears in a different color and makes it really easy to spot things. But I don't know if it's a standard way to do this ...

@alexrockhill (Contributor, Author) commented Sep 15, 2021

Sorry I got lost with the bounding box discussion but looking at the image it seemed to me that there was some affine registration issue but I might be wrong. I have seen those kinds of images when you do some registration and the corners become black because there are no pixels to copy from.

It might be worth taking inspiration from MMVT. Maybe you were also a contributor @alexrockhill ? ;-)

I like how in the sliced view the CT appears in a different color and makes it really easy to spot things. But I don't know if it's a standard way to do this ...

This is how it was previously in img_pipe, but there were issues with this method because of setting the threshold. It's better to see the un-thresholded image so you can see the extent of the contact as clearly as possible; the intensities in the CT vary by location and by how well the voxel nearest the contact actually aligns with it.

EDIT: for MMVT, Noam was looking mostly at DBS, which is usually well-spaced, so there aren't bridged contacts and they come out super clean in the thresholded, blue version. But with the SEEG CT data I have been working with, the bridged contacts mean that you really need the background and no thresholding to get the full picture of the contacts. From the experience of actually having done this process for an experiment, I can tell you that the method currently on this branch works better, even if it isn't as aesthetically pleasing.

Looking more closely, the black rectangle immediately surrounding the skull and the gray rectangle surrounding that are actually part of the underlying image. I really would prefer not to mess with this and the background, because you wouldn't want to obscure something that might help you understand what's going on in the CT and its boundary. The outermost black background is what happens when you zoom out of a matplotlib image, so that one could be fixed, but again I would strongly prefer to leave it as is.

@alexrockhill (Contributor, Author)

Looks like the failure is unrelated. Maybe we can get this moving and make more improvements later

@larsoner (Member)

Agreed, okay for merge and incremental PRs after this @jasmainak ?

@alexrockhill (Contributor, Author)

The plan is to do a JOSS paper after this merges so there will be a review there as well

@agramfort (Member) left a review comment:

+1 for MRG if CIs are all green.

@alexrockhill (Contributor, Author)

I'm redoing Liberty's ECoG grids with it (I don't have any ECoG that I'm doing for my own research), and besides it not being automated, it works really well.

@jasmainak (Member)

Yep, +1 for MRG and incremental PRs. Thanks @alexrockhill for the massive effort!

@alexrockhill (Contributor, Author)

@larsoner did you mean to remove the directional labels (e.g., superior, inferior)? I didn't see them when I just used it.

@larsoner (Member)

@larsoner did you mean to remove the directional labels (eg superior, inferior)? I didn't see them when I just used it.

No, I don't think I meant to do it -- sorry if I did so in some commit. Good for a next PR :)

@larsoner larsoner merged commit 4375989 into mne-tools:main Sep 17, 2021
@larsoner (Member)

Thanks @alexrockhill !!!
