
Add SRM experiment #82 (Open)

denisfouchard wants to merge 3 commits into main.
Conversation

denisfouchard (Collaborator)

As suggested by @bthirion, I'm opening this separate PR to add an SRM experiment.

This is co-smoothing on IBC-Contrast (same as INT), inspired by Hugo Richard's code.

bthirion (Contributor) left a comment

LGTM, thx.

denisfouchard (Collaborator, Author)

Maybe I can do the merge now? It's a single file, independent of the rest, and all tests are passing.

bthirion (Contributor)

I'd like to give Elizabeth a chance to review it first. In general, it's good to have two pairs of eyes on a PR.

emdupre (Collaborator) commented on Feb 27, 2024

Sorry for the slow follow-up; I was sick last week and am still catching up. I'll try to take a look this afternoon.

emdupre (Collaborator) left a comment

Thank you, @denisfouchard! This will be great to have!

I have a few small comments, but my larger question is about how we're handling the training and testing data. In our real-data experiments, we used add_subjects to add in the target_train data after fitting on all of the sources_train data. Here, it seems like we're fitting on all of the training data (including target_train) and then extracting the target subject's basis from the fitted W. This strikes me as a little harder to follow for the naive reader, since we end up having to ignore the last image in several other places (e.g. L95, L144).

Could you confirm whether my understanding is correct, and whether there's a strong reason to do it this way?
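
For context, a rough sketch of the pattern described above, assuming a FastSRM-style estimator such as the one in Hugo Richard's fastsrm package; sources_train, target_train, and n_components=20 are placeholders taken from this thread, not the example's actual code:

from fastsrm.identifiable_srm import IdentifiableFastSRM

# Pattern currently in the example (as described above): fit on all training
# subjects, target included, then read the target's basis off the fitted model.
srm = IdentifiableFastSRM(n_components=20)
srm.fit(sources_train + [target_train])
target_basis = srm.basis_list[-1]  # target is the last entry, so later steps
                                   # have to skip that last image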

examples/plot_srm_alignment.py (several inline review threads resolved). One remaining thread, anchored on this snippet from the example:
from nilearn.image import concat_imgs

template_train = []
for i in range(6):
emdupre (Collaborator) commented:

Is there a reason why we want to have sub-07 in template_train at all, rather than only in target_train and target_test? I know we exclude the last entry later so that it isn't actually included in, e.g., average_img, but that's one extra bit of overhead for the reader to keep in mind.

denisfouchard (Collaborator, Author) replied:

I'm not sure I understand: we need the data from sub-07 in the training to get its basis, so we need to concatenate the imgs at some point. Maybe you mean renaming this list to simply imgs? But that would be inconsistent with the INT experiment.

bthirion (Contributor) replied:

What would be your suggestion, @emdupre?

bthirion (Contributor) replied:

Oh, I see: it would be to use add_subjects. @denisfouchard, can you try to do that?
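
A hypothetical sketch of the add_subjects route suggested here, again assuming the fastsrm API (fit_transform, add_subjects, basis_list) and the placeholder names from the thread; not the example's actual code:

from fastsrm.identifiable_srm import IdentifiableFastSRM

# Fit the SRM on the source subjects only, then register the held-out target
# against the learned shared response, so no later step has to drop an image.
srm = IdentifiableFastSRM(n_components=20)
shared_response = srm.fit_transform(sources_train)
srm.add_subjects([target_train], shared_response)
target_basis = srm.basis_list[-1]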
