Add Parcel and Searchlight Hyperalignment methods #74

Merged: 77 commits merged on Mar 11, 2024
Changes from 72 commits
Commits (77)
3042dd7
add: Feilong's Hyperalignment method in TemplateAlignment
Dec 7, 2023
b538a93
adding pcha and slha
Dec 21, 2023
baa8995
edit pyproject.toml
Jan 2, 2024
c0995b2
rework hyperalignment incorporation
Jan 3, 2024
4ce6f10
fix ot/ott import problems
Jan 3, 2024
21201dd
fix: rm hyperalignment requirements and fix tests
Jan 3, 2024
15078fd
fix: doc
Jan 3, 2024
12fe4a8
fix: delete procrustes.py
Jan 3, 2024
8a14eed
fix: minor fixes to template
Jan 3, 2024
a0295ea
fix: rm dss variable (strange name)
Jan 3, 2024
16bde56
black formating
Jan 3, 2024
3d68576
fix: memory issues
Jan 3, 2024
85b848a
Merge branch 'main' into hyperalignment
denisfouchard Jan 3, 2024
b25f905
rm all refs to the hyperalignment package
Jan 3, 2024
1e3c9c1
Merge branch 'hyperalignment' of github.com:denisfouchard/fmralign in…
Jan 3, 2024
f034a55
PR adjustments
Jan 4, 2024
a0fd075
fix imports
Jan 4, 2024
fde7b3e
add docstring to int
Jan 4, 2024
5d08ebd
remove shady variable names
Jan 4, 2024
49ed356
add: toy experiment + more precise test with correlation
Jan 5, 2024
45dd245
fixed doc and argumets
Jan 5, 2024
1d431c8
add int plot and better int testing
Jan 5, 2024
a74f0a8
citing Feilong
Jan 5, 2024
40af9b8
remove caching (might add joblib caching later)
Jan 5, 2024
c788ffa
better naming and doc
Jan 5, 2024
9cb5e50
improve doc
Jan 5, 2024
f6a3950
fix tests
Jan 5, 2024
2fd46d3
Delete any reference to Feilong Ma #streisandeffect
Jan 9, 2024
e35c01b
bug fixes + rm tqdm dependency
Jan 10, 2024
4a8ddaa
bug fixes + interesting results
Jan 10, 2024
dbf694e
rm ref to ha in template alignment
Jan 12, 2024
e78747a
fix searchlight toy
Jan 15, 2024
027154b
better doc and parameters
Jan 19, 2024
de8cf24
fix nilearn depreciation
Jan 19, 2024
391d5ff
Merge branch 'hyperalignment' of https://github.com/denisfouchard/fmr…
Jan 19, 2024
0e71995
fix experiments + add return betas to ridge
Jan 19, 2024
f85b1c6
Merge branch 'hyperalignment' of github.com:denisfouchard/fmralign in…
Jan 19, 2024
9b38655
Add _ridge function for ridge regression
Jan 19, 2024
1ee469c
Fix indentation in piece_ridge function
Jan 19, 2024
29faee8
some tweaks
Feb 2, 2024
8dafa40
Merge branch 'main' into hyperalignment
denisfouchard Feb 6, 2024
890e4a1
fix INT plot and last tweaks
Feb 6, 2024
8d00cce
Merge branch 'hyperalignment' of github.com:denisfouchard/fmralign in…
Feb 6, 2024
e38a144
Merging and correcting PR remarks
Feb 13, 2024
fdebc78
Merge branch 'main' into hyperalignment
denisfouchard Feb 15, 2024
b207f61
Update examples/plot_int_alignment.py
denisfouchard Feb 15, 2024
cbbd37e
some fixes
Feb 15, 2024
6a7997a
Merge branch 'hyperalignment' of https://github.com/denisfouchard/fmr…
Feb 15, 2024
41d243a
Adressing latest PR comments
emdupre Aug 31, 2023
9b11b86
Update examples/plot_int_alignment.py
denisfouchard Feb 19, 2024
ace93e4
Update fmralign/alignment_methods.py
denisfouchard Feb 19, 2024
ac39ced
Adressing first part of reviews
Feb 19, 2024
5853656
fix more doc
Feb 19, 2024
c1fc200
fix stimulus + fix doc
Feb 19, 2024
ffd13ac
adressing plot code issues
Feb 19, 2024
eb4a8df
better variable names
Feb 19, 2024
5bc1482
delete useless srm experiment (should already be in templateAlignment…
Feb 23, 2024
8e691e4
fix linting issues
Feb 26, 2024
a9a329d
adressing other PR comments
Feb 26, 2024
4074bf9
Fix typo in flavor parameter description
Feb 26, 2024
008422e
update int tests
Feb 26, 2024
8fd6863
Better rst in alignment_methods.py
Feb 26, 2024
dad2a61
Update fmralign/hyperalignment/correlation.py
denisfouchard Feb 26, 2024
923f100
PR comments adress
Feb 26, 2024
a70b4a8
Refactor dissimilarity measure in matrix_MDS function
Feb 26, 2024
1711823
Refactor test_hyperalignment.py to include decomposition and searchli…
Feb 26, 2024
995f82a
Add toy experiment to examples
Feb 27, 2024
e033097
RM wrapper
Feb 27, 2024
70c0d00
adress PR comments
Feb 27, 2024
850a2a1
flake8
Feb 27, 2024
657233a
rm region arguments from fit method + refined doc for example
Feb 28, 2024
c5131db
fix toy example with new API
Feb 28, 2024
c2943f1
Update examples/plot_int_alignment.py
denisfouchard Mar 6, 2024
a7525b0
Update examples/plot_int_alignment.py
denisfouchard Mar 6, 2024
a19aefb
Update examples/plot_toy_int_experiment.py
denisfouchard Mar 6, 2024
3a1ccc9
Update fmralign/hyperalignment/piecewise_alignment.py
denisfouchard Mar 6, 2024
5e4b904
adressing last PR comments
denisfouchard Mar 6, 2024
2 changes: 1 addition & 1 deletion .flake8
@@ -9,7 +9,7 @@ per-file-ignores =
examples/*/*: D103, D205, D301, D400
# - docstrings rules that should not be applied to doc
doc/*: D100, D103, F401
ignore = D105, D107, E402, W503, W504, W605, BLK100
ignore = D105, D107, E402, W503, W504, W605, BLK100, E501
# for compatibility with black
# https://black.readthedocs.io/en/stable/guides/using_black_with_other_tools.html#flake8
extend-ignore = E203
6 changes: 5 additions & 1 deletion examples/plot_alignment_methods_benchmark.py
@@ -147,7 +147,11 @@
aligned_score = roi_masker.inverse_transform(method_error)
title = f"Correlation of prediction after {method} alignment"
display = plotting.plot_stat_map(
aligned_score, display_mode="z", cut_coords=[-15, -5], vmax=1, title=title
aligned_score,
display_mode="z",
cut_coords=[-15, -5],
vmax=1,
title=title,
)

###############################################################################
14 changes: 11 additions & 3 deletions examples/plot_alignment_simulated_2D_data.py
@@ -161,7 +161,9 @@ def _plot_distributions_and_alignment(
Y = np.roll(Y, 6, axis=0)
# We plot them and observe that their initial matching is wrong
R_identity = np.eye(n_points, dtype=np.float64)
_plot_distributions_and_alignment(X, Y, R=R_identity, title="Initial Matching", thr=0.1)
_plot_distributions_and_alignment(
X, Y, R=R_identity, title="Initial Matching", thr=0.1
)

###############################################################################
# Alignment : finding the right transform
@@ -193,7 +195,9 @@ def _plot_distributions_and_alignment(
title="Procrustes between distributions",
thr=0.1,
)
_plot_mixing_matrix(R=scaled_orthogonal_alignment.R.T, title="Orthogonal mixing matrix")
_plot_mixing_matrix(
R=scaled_orthogonal_alignment.R.T, title="Orthogonal mixing matrix"
)

###############################################################################
# Ridge alignment
@@ -206,7 +210,11 @@ def _plot_distributions_and_alignment(
ridge_alignment = RidgeAlignment(alphas=[0.01, 0.1], cv=2).fit(X.T, Y.T)

_plot_distributions_and_alignment(
X, Y, R=ridge_alignment.R.coef_, title="Ridge between distributions", thr=0.1
X,
Y,
R=ridge_alignment.R.coef_,
title="Ridge between distributions",
thr=0.1,
)
_plot_mixing_matrix(R=ridge_alignment.R.coef_, title="Ridge coefficients")

216 changes: 216 additions & 0 deletions examples/plot_int_alignment.py
@@ -0,0 +1,216 @@
# -*- coding: utf-8 -*-

"""
Co-smoothing Prediction using the IndividualNeuralTuning Model.
See article : https://doi.org/10.1162/imag_a_00032

==========================

In this tutorial, we show how to better predict new contrasts for a target
subject using many source subjects corresponding contrasts. For this purpose,
we create a template to which we align the target subject, using shared information.
We then predict new images for the target and compare them to a baseline.

We mostly rely on Python common packages and on nilearn to handle
functional data in a clean fashion.


To run this example, you must launch IPython via ``ipython
--matplotlib`` in a terminal, or use ``jupyter-notebook``.

.. contents:: **Contents**
:local:
:depth: 1

"""
# %%
import warnings

warnings.filterwarnings("ignore")
###############################################################################
# Retrieve the data
# -----------------
# In this example we use the IBC dataset, which includes a large number of
# different contrast maps for 12 subjects.
# We download the images for subjects sub-01, sub-02, sub-04, sub-05, sub-06
# and sub-07 (or retrieve them if they were already downloaded).
# imgs is the list of paths to available statistical images for each subject.
# df is a dataframe with metadata about each of them.
# mask is a binary image used to extract grey matter regions.
#

from fmralign.fetch_example_data import fetch_ibc_subjects_contrasts

sub_list = ["sub-01", "sub-02", "sub-04", "sub-05", "sub-06", "sub-07"]
imgs, df, mask_img = fetch_ibc_subjects_contrasts(sub_list)

###############################################################################
# Define a masker
# -----------------
# We define a nilearn masker that will be used to handle relevant data.
# For more information, visit
# http://nilearn.github.io/manipulating_images/masker_objects.html
#

from nilearn.maskers import NiftiMasker

masker = NiftiMasker(mask_img=mask_img).fit()

###############################################################################
# Prepare the data
# ----------------
# For each subject, we will use two series of contrasts acquired during
# two independent sessions with a different phase encoding:
# Antero-posterior (AP) or Postero-anterior (PA).
#


# To infer a template from the training subjects for both AP and PA data,
# we make a list of 4D niimgs from our list of lists of files containing 3D images.

from nilearn.image import concat_imgs

template_train = []
for i in range(6):
    template_train.append(concat_imgs(imgs[i]))


# For subject sub-07, we split the data into two folds:
# - target train: sub-07 AP contrasts, used to learn alignment to template
# - target test: sub-07 PA contrasts, used as a ground truth to score predictions
# We make a single 4D Niimg from our list of 3D filenames
target_train = df[(df.subject == "sub-07") & (df.acquisition == "ap")].path.values
target_train = concat_imgs(target_train)
target_train_data = masker.transform(target_train)
target_test = df[(df.subject == "sub-07") & (df.acquisition == "pa")].path.values


###############################################################################
# Compute a baseline (average of subjects)
# ----------------------------------------
# We create an image with as many contrasts as any subject, where each
# contrast is the average of the corresponding maps across the training subjects.
#

import numpy as np

masked_imgs = [masker.transform(img) for img in template_train]
average_img = np.mean(masked_imgs[:-1], axis=0)
average_subject = masker.inverse_transform(average_img)

###############################################################################
# Create a template from the training subjects.
# ---------------------------------------------
# We define an estimator using the class IndividualizedNeuralTuning:
# * We align the whole brain through multiple local alignments.
# * These alignments are computed on a parcellation of the brain into 100
#   pieces; this parcellation creates groups of functionally similar voxels.
# * All subjects' data are aligned into a common space, from which a shared
#   response is inferred, together with an individual tuning matrix per subject.
#
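# As a rough intuition, here is a simplified numpy sketch with hypothetical toy
# shapes (for illustration only, not the exact fmralign implementation): the
# model factorizes each subject's data X_i of shape (n_contrasts, n_voxels)
# into a shared response S and an individual tuning matrix T_i, so that X_i is
# approximately S @ T_i. Given S, a tuning matrix can be recovered by least squares.
import numpy as np  # numpy is already imported above; repeated so the sketch stands alone

rng = np.random.default_rng(0)
S_toy = rng.standard_normal((53, 20))  # toy shared response (contrasts x components)
T_toy = rng.standard_normal((20, 100))  # toy individual tuning (components x voxels)
X_toy = S_toy @ T_toy  # toy subject data
T_hat = np.linalg.pinv(S_toy) @ X_toy  # recover the tuning by least squares
assert np.allclose(T_hat, T_toy)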

from nilearn.image import index_img
from fmralign.alignment_methods import IndividualizedNeuralTuning
from fmralign.hyperalignment.piecewise_alignment import PiecewiseAlignment
from fmralign.hyperalignment.regions import compute_parcels

###############################################################################
# Predict new data for left-out subject
# -------------------------------------
# We use the AP contrasts of sub-07 (train_index) to estimate its tuning
# matrix, and then predict its PA contrasts (test_index) from the shared
# response learnt on the training subjects.
# For each training subject and for the shared response, the AP contrasts
# occupy indices 0 to 52, followed by the PA contrasts at indices 53 to 105.
#

train_index = range(53)
test_index = range(53, 106)

denoising_data = np.array(masked_imgs)[:, train_index, :]
training_data = np.array(masked_imgs)[:-1]
target_test_masked = np.array(masked_imgs)[:, test_index, :]


parcels = compute_parcels(niimg=template_train[0], mask=masker, n_parcels=100, n_jobs=5)
denoiser = PiecewiseAlignment(n_jobs=5)
denoised_signal = denoiser.fit_transform(X=denoising_data, regions=parcels)
target_denoised_data = denoised_signal[-1]
model = IndividualizedNeuralTuning(
    parcels=parcels,
)
model.fit(training_data, verbose=False)
stimulus_ = np.copy(model.shared_response)

# From the denoised data and the stimulus, we can now extract the tuning
# matrix from sub-07 AP contrasts, and use it to predict the PA contrasts.
target_tuning = model._tuning_estimator(
    shared_response=stimulus_[train_index], target=target_denoised_data
)
# %%
# Using the tuning matrix estimated from the AP contrasts and the shared
# response restricted to the test indices, we reconstruct the PA contrasts
# of sub-07.


pred = model._reconstruct_signal(
    shared_response=stimulus_[test_index], individual_tuning=target_tuning
)
prediction_from_template = masker.inverse_transform(pred)


# As a baseline prediction, let's just take the average of activations across subjects.

prediction_from_average = index_img(average_subject, test_index)

###############################################################################
# Score the baseline and the prediction
# -------------------------------------
# We use a utility scoring function to measure the voxelwise correlation
# between a prediction and the ground truth: for each voxel, we compute the
# correlation between its predicted and its actual profile of activation,
# to see whether alignment yields a prediction closer to the ground truth.
#
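# For intuition, here is a minimal sketch of what such a voxelwise correlation
# computes (an illustrative helper written for this tutorial, not fmralign's
# score_voxelwise): given two arrays of shape (n_contrasts, n_voxels), return,
# for each voxel, the Pearson correlation between its predicted and actual
# activation profiles.


def _voxelwise_correlation_sketch(ground_truth, prediction):
    # Center each voxel's profile across contrasts
    gt = ground_truth - ground_truth.mean(axis=0)
    pr = prediction - prediction.mean(axis=0)
    # Normalized dot product along the contrast axis, voxel by voxel
    return (gt * pr).sum(axis=0) / np.sqrt((gt**2).sum(axis=0) * (pr**2).sum(axis=0))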
# %%
from fmralign.metrics import score_voxelwise

# Now we use this scoring function to compare how well the predictions made
# from the group average and from the INT model correlate with the real PA
# contrasts of sub-07.


average_score = masker.inverse_transform(
score_voxelwise(target_test, prediction_from_average, masker, loss="corr")
)

template_score = masker.inverse_transform(
score_voxelwise(target_test, prediction_from_template, masker, loss="corr")
)


###############################################################################
# Plotting the measures
# ---------------------
# Finally, we plot both scores.
#

# %%
from nilearn import plotting

baseline_display = plotting.plot_stat_map(
average_score, display_mode="z", vmax=1, cut_coords=[-15, -5]
)
baseline_display.title("Group average correlation with ground truth")
display = plotting.plot_stat_map(
template_score, display_mode="z", cut_coords=[-15, -5], vmax=1
)
display.title("INT prediction correlation with ground truth")

###############################################################################
# We observe that creating a template and aligning a new subject to it yields
# a prediction that is better correlated with the ground truth than just using
# the average activations of subjects.
#

plotting.show()

# %%