diff --git a/.github/workflows/pythonpackage.yml b/.github/workflows/pythonpackage.yml index 02e1e90..c184249 100644 --- a/.github/workflows/pythonpackage.yml +++ b/.github/workflows/pythonpackage.yml @@ -12,10 +12,11 @@ on: jobs: build: - runs-on: ubuntu-latest + runs-on: ${{ matrix.os }} strategy: matrix: - python-version: [3.8] # TODO: Add support for other python versions + os: [ubuntu-latest, macos-latest] + python-version: ['3.10'] steps: - uses: actions/checkout@v2 @@ -26,14 +27,19 @@ jobs: - name: Install dependencies run: | python -m pip install --upgrade pip - pip install -r requirements.txt - pip install --no-cache-dir -e . + pip install --user wheel + pip install --user packaging + ./install.sh - name: Lint with flake8 run: | + python -m pip install --upgrade pip + pip install --user wheel + python3 -m pip install flake8 # stop the build if there are Python syntax errors or undefined names flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics - name: Test with pytest run: | - pytest + python3 -m pip install pytest pytest-console-scripts + python3 -m pytest . diff --git a/ARCHITECTURE.md b/ARCHITECTURE.md deleted file mode 100644 index ca30f83..0000000 --- a/ARCHITECTURE.md +++ /dev/null @@ -1,120 +0,0 @@ -# Track-to-Learn - -The overall structure of the project is - -``` -- TrackToLearn - - algorithms - - datasets - - environments - - experiment - - runners - - searchers - - trainers - - utils -- example_model -- scripts -- cc_scripts -``` - -In `TrackToLearn`, you will find the codebase for the project. In `scripts`, you will find all the scripts used to train agents in `What matters in ...`[^1] and `Incorporating anatomical priors into...`[^2]. In `cc_scripts`, you will find slurm scripts that have been used to do the architecture search of [^1]. 
`example_model` contains the weights and hyperparameters of an agent trained on the ISMRM2015 dataset. - - -The entry points for launching `TrackToLearn` are in `runners`, `trainers` or `searchers`. - -``` -- TrackToLearn - - runners - - ttl_track.py - - ttl_validation.py - - searchers - - a2c_searcher.py - - acktr_searcher.py - - ddpg_searcher.py - - ppo_searcher.py - - sac_auto_searcher.py - - sac_searcher.py - - td3_searcher.py - - trpo_searcher.py - - vpg_searcher.py - - trainers - - a2c_train.py - - acktr_train.py - - ddpg_train.py - - ppo_train.py - - sac_auto_train.py - - sac_train.py - - td3_train.py - - trpo_train.py - - vpg_train.py -``` - -The `runenrs` folder contains scripts for tracking either on a "dataset" (`ttl_validation.py`) or on arbitrary files (`ttl_track.py`, similarly to launching tracking in `scilpy`[^3]). These are also added to your PATH during installation. The `searchers` module contains scripts for launching an hyperparameter search for the relevant algorithm. The `trainers` module contains scripts for launching training for the relevant algorithm. - -The `algorithms` module contains several implementations of RL algorithms. - -``` -- TrackToLearn - - algorithms - - rl.py - - utils.py - - a2c.py - - acktr.py - - ddpg.py - - ppo.py - - sac_auto.py - - sac.py - - td3.py - - trpo.py - - vpg.py - - shared - - onpolicy.py - - offpolicy.py - - replay.py -``` - -The `rl` submodule contains the core of all RL algorithms implementations and most things that are relevant to all (such as the RL loop at inference, for example). The `algorithms/utils` submodule contains functions relevant to most RL algorithms. The shared submodule mostly contains classes relevant to polices and critics. Other files are implementations of RL algorithms. - -The `experiment` submodule contains core classes and functions for launching, monitoring and reproducing experiments. 
- -``` -- TrackToLearn - - experiment - - experiment.py - - train.py - - ttl.py -``` - -`experiment.py` contains the base class for experiments as well as most of the arguments used in entry-point scripts. `ttl.py` contains the base class for TrackToLearn experiments, which may be training or tracking or other. `train.py` contains the base class for training runs, from which "trainers" inherit. - - -The `enviroments` submodule contains everything related to RL environments. - -``` -- TrackToLearn - - environments - - env.py - - interface_tracking_env.py - - noisy_tracker.py - - reward.py - - tracker.py - - utils.py -``` - -`env.py` contains the base abstract class for environments, `BaseEnv`, in Track-to-Learn. `tracker.py` contains several concrete classes that inherit from `BaseEnv`. `interface_tracking_env.py` and `noisy_tracker.py` contain classes that inherit from classes in `tracker`. `reward.py` contains the class handling the reward function. - -Finally, the `datasets` submodule contains everything related to the creation and processing of datasets. - -``` -- TrackToLearn - - datasets - - create_dataset.py - - processing.py - - utils.py -``` - -The `create_dataset.py` script can be called to create a HDF5 containing training, validation and test subjects. `processing.py` contains util functions related to dataset creation. - -[^1]: "What matters in reinforcement learning for tractography" -[^2]: Incorporating anatomical priors into Track-to-Learn, ISMRM Workshop on Diffusion MRI: From Research to Clinic, poster #34. -[^3]: scilpy: [https://github.com/scilus/scilpy](https://github.com/scilus/scilpy) diff --git a/README.md b/README.md index 2f0beef..263b219 100644 --- a/README.md +++ b/README.md @@ -1,57 +1,51 @@ -# Track-to-Learn: A general framework for tractography with deep reinforcement learning +# Track-to-Learn/TractOracle-RL: reinforcement learning for tractography. 
+ +TractOracle-RL is half of **TractOracle** (preprint coming), a reinforcement learning system for tractography. **TractOracle-RL** is a tractography algorithm which is trained via reinforcement learning using [TractOracle-Net](https://github.com/scil-vital/TractOracleNet). + +See [Versions](#versions) for past and current iterations. ## Getting started -### Installation and setup +**Right now, only python 3.10 is supported.** -**Right now, only python 3.8 is supported.** +It is recommended to use a python [virtual environment](https://virtualenv.pypa.io/en/latest/user_guide.html) to run the code. -It is recommended to use `virtualenv` to run the code ``` bash -virtualenv .env --python=python3.8 +virtualenv .env --python=python3.10 source .env/bin/activate ``` Then, install the dependencies and setup the repo with ``` bash -# Install common requirements - -# edit requirements.txt as needed to change your torch install -pip install -r requirements.txt -# Install some specific requirements directly from git -# scilpy 1.3.0 requires a deprecated version of sklearn on pypi -SKLEARN_ALLOW_DEPRECATED_SKLEARN_PACKAGE_INSTALL=True pip install git+https://github.com/scilus/scilpy@1.3.0#egg=scilpy -pip install git+https://github.com/scil-vital/dwi_ml@70b9a97f85d295b0f03388ddb3c63b3da120ada3 -pip install git+https://github.com/scilus/ismrm_2015_tractography_challenge_scoring.git -# Load the project into the environment -pip install -e . +./install.sh ``` -TrackToLearn was developed using `torch==1.9.1` with CUDA 11. You may have to change the torch version in `requirements.txt` to suit your local installation (i.e CPU-only `torch` or using CUDA 10). +Getting errors during installation? Open an issue! -Still getting errors during installation ? See the wiki: [https://github.com/scil-vital/TrackToLearn/wiki/Troubleshooting](https://github.com/scil-vital/TrackToLearn/wiki/Troubleshooting) or open an issue !
+## Tracking -### Tracking - -You will need a trained agent for tracking. One is provided in the `example_model` folder. You can then track by running `ttl_track.py`. +You will need a trained agent for tracking. One is provided in the `models` folder. You can then track by running `ttl_track.py`. ``` -usage: ttl_track.py [-h] [--sh_basis {descoteaux07,tournier07}] [--compress thresh] [-f] [--save_seeds] - [--policy POLICY] [--hyperparameters HYPERPARAMETERS] [--npv NPV] [--interface] - [--min_length m] [--max_length M] [--prob sigma] [--fa_map FA_MAP] [--n_actor N] +usage: ttl_track.py [-h] [--input_wm] [--sh_basis {descoteaux07,tournier07}] + [--compress thresh] [-f] [--save_seeds] [--agent AGENT] + [--hyperparameters HYPERPARAMETERS] [--n_actor N] + [--npv NPV] [--min_length m] [--max_length M] [--prob %] + [--noise sigma] [--fa_map FA_MAP] + [--binary_stopping_threshold BINARY_STOPPING_THRESHOLD] [--rng_seed RNG_SEED] in_odf in_seed in_mask out_tractogram -``` -You will need to provide fODFs, a seeding mask and a WM mask. + Generate a tractogram from a trained model. See `--help` for usage. +``` -Agents used for tracking are constrained by their training regime. For example, the agents provided in `example_models` were trained on a volume with a resolution of 2mm iso voxels and a step size of 0.75mm using fODFs of order 6, `descoteaux07` basis. When tracking on arbitrary data, the step-size and fODF order and basis will be adjusted accordingly automatically. **However**, if using fODFs in the `tournier07` (coming from MRtrix, for example), you will need to set the `--sh_basis` argument accordingly. +You will need to provide fODFs, a seeding mask and a WM mask. The seeding mask **must** represent the interface of white matter and gray matter. _WM tracking is no longer supported._ -Other trained agents are available here: https://zenodo.org/record/7853590 +Agents used for tracking are constrained by their training regime. 
For example, the agents provided in `models` were trained on a volume with a resolution of ~1mm iso voxels and a step size of 0.75mm using fODFs of order 8, `descoteaux07` basis. When tracking on arbitrary data, the step size and fODF order and basis will be adjusted automatically (i.e. resulting in a step size of 0.375mm on 0.5mm iso diffusion data). **However**, if using fODFs in the `tournier07` basis, you will need to set the `--sh_basis` argument accordingly. -### Training +## Training First, make a dataset `.hdf5` file with `TrackToLearn/dataset/create_dataset.py`. ``` @@ -67,67 +61,79 @@ optional arguments: --normalize If set, normalize first input signal. ``` -Example datasets and config files are available here: https://zenodo.org/record/7853832 +Example datasets and config files are available here: COMING SOON -Then, you may train a PPO agent, for example, by running `python TrackToLearn/trainers/ppo_train.py`. +Then, you may train an agent by running `python TrackToLearn/trainers/sac_auto_train.py`.
``` -usage: ppo_train.py [-h] [--use_gpu] [--rng_seed RNG_SEED] [--use_comet] - [--run_tractometer] [--render] [--n_signal N_SIGNAL] - [--n_dirs N_DIRS] [--add_neighborhood ADD_NEIGHBORHOOD] - [--cmc] [--asymmetric] [--n_actor N_ACTOR] - [--hidden_dims HIDDEN_DIMS] [--load_policy LOAD_POLICY] - [--max_ep MAX_EP] [--log_interval LOG_INTERVAL] [--lr LR] - [--gamma GAMMA] - [--alignment_weighting ALIGNMENT_WEIGHTING] - [--straightness_weighting STRAIGHTNESS_WEIGHTING] - [--length_weighting LENGTH_WEIGHTING] - [--target_bonus_factor TARGET_BONUS_FACTOR] - [--exclude_penalty_factor EXCLUDE_PENALTY_FACTOR] - [--angle_penalty_factor ANGLE_PENALTY_FACTOR] - [--npv N_SEEDS_PER_VOXEL] - [--theta MAX_ANGLE] [--min_length MIN_LENGTH] - [--max_length MAX_LENGTH] [--step_size STEP_SIZE] - [--prob VALID_NOISE] [--interface_seeding] - [--no_retrack] [--entropy_loss_coeff ENTROPY_LOSS_COEFF] - [--action_std ACTION_STD] [--lmbda LMBDA] - [--K_epochs K_EPOCHS] [--eps_clip EPS_CLIP] - path experiment id dataset_file subject_id - test_dataset_file test_subject_id reference_file - scoring_data +usage: sac_auto_train.py [-h] [--workspace WORKSPACE] [--rng_seed RNG_SEED] + [--use_comet] [--n_dirs N_DIRS] + [--binary_stopping_threshold BINARY_STOPPING_THRESHOLD] + [--n_actor N_ACTOR] [--hidden_dims HIDDEN_DIMS] + [--load_agent LOAD_AGENT] [--max_ep MAX_EP] + [--log_interval LOG_INTERVAL] [--lr LR] [--gamma GAMMA] + [--alignment_weighting ALIGNMENT_WEIGHTING] [--npv NPV] + [--theta THETA] [--min_length m] [--max_length M] + [--step_size STEP_SIZE] [--prob %] [--noise sigma] + [--oracle_checkpoint ORACLE_CHECKPOINT] + [--oracle_validator] [--oracle_stopping_criterion] + [--oracle_bonus ORACLE_BONUS] + [--scoring_data SCORING_DATA] + [--tractometer_reference TRACTOMETER_REFERENCE] + [--tractometer_validator] + [--tractometer_dilate TRACTOMETER_DILATE] + [--alpha ALPHA] [--batch_size BATCH_SIZE] + [--replay_size REPLAY_SIZE] + path experiment id dataset_file +sac_auto_train.py: error: the 
following arguments are required: path, experiment, id, dataset_file ``` Other trainers are available in `TrackToLearn/trainers`. You can recreate an experiment by running a script in the `scripts` folder. These scripts should provide an excellent starting point for improving upon this work. You will only need to first set the `TRACK_TO_LEARN_DATA` environment variable to where you extracted the datasets (i.e. a network disk or somewhere with lots of space) and the `LOCAL_TRACK_TO_LEARN_DATA` environment variable your working folder (i.e. a faster local disk). Then, the script can be launched. -To use [Comet.ml](https://www.comet.ml/), follow instructions [here](https://www.comet.ml/docs/python-sdk/advanced/#python-configuration), with the config file either in your home folder or current folder. **Usage of comet-ml is necessary for hyperparameter search**, but this constraint should be removed in future releases. - -The option for recurrent agents is there but recurrent agents are not yet implemented. Training and validation can be performed on different subjects, but training (or validation) on multiple subjects is not yet supported. +To use [Comet.ml](https://www.comet.ml/), follow instructions [here](https://www.comet.ml/docs/python-sdk/advanced/#python-configuration), with the config file either in your home folder or current folder. **Usage of comet-ml is necessary for hyperparameter search**. This constraint may be removed in future releases. ## Contributing Contributions are welcome ! There are several TODOs sprinkled through the project which may inspire you. A lot of the code's architecure could be improved, better organized, split and reworked to make the code cleaner. Several performance improvements could also easily be added. -See `ARCHITECURE.md` for an overview of the code. 
+## Versions -## What matters in Reinforcement Learning for Tractography (2023) +### TractOracle-RL (2024b) -The reference commit to the `main` branch for this work is `9f97eefbbdb05a2c90ea74e8384ac2891b194a3e`. +> Théberge, A., Descoteaux, M., & Jodoin, P. M. (2024). TractOracle: towards an anatomically-informed reward function for RL-based tractography. Submitted to MICCAI 2024. -See preprint: https://arxiv.org/abs/2305.09041 +The reference commit to the `main` branch for this work is `TODO`. Please use this commit as a starting point if you want to build upon Track-to-Learn (TractOracle-RL). See README above for usage. + +See preprint: https://arxiv.org/pdf/2403.17845.pdf -This version of Track-to-Learn should serve as a reference going forward to use and improve upon Track-to-Learn. Refer to the readme above for usage. +Conference paper (hopefully) coming soon. -## Incorporating anatomical priors into Track-to-Learn (2022) +### What matters in Reinforcement Learning for Tractography (2024a) -The reference commit to the master branch for this work is `dbae9305b4a3e9f21c3249121ef5dc5ed9faa899`. +> Théberge, A., Desrosiers, C., Boré, A., Descoteaux, M., & Jodoin, P. M. (2024). What matters in reinforcement learning for tractography. Medical Image Analysis, 93, 103085. + +The reference commit to the `main` branch for this work is `9f97eefbbdb05a2c90ea74e8384ac2891b194a3e`. Please use this commit as a starting point if you want to reproduce or build upon the work of the 2024a paper. + +See journal paper: https://www.sciencedirect.com/science/article/pii/S1361841524000100 + +See preprint: https://arxiv.org/abs/2305.09041 + +### Incorporating anatomical priors into Track-to-Learn (2022) + +The reference commit to the main branch for this work is `dbae9305b4a3e9f21c3249121ef5dc5ed9faa899`. This work is presented at the *ISMRM Workshop on Diffusion MRI: From Research to Clinic*, poster \#34 (email me for the abstract and poster).
This work adds the use of the Continuous Map Criterion (CMC, https://www.sciencedirect.com/science/article/pii/S1053811914003541) and asymmetric fODFs (https://archive.ismrm.org/2021/0865.html). They can be used with the `--cmc` and `--asymmetric` options respectively. Data and trained models are available here: https://zenodo.org/record/7153362 Dataset files are in `raw` and weights and results are in `experiments`. Results can be replicated using bash scripts (`sac_auto_train[_cmc|_asym|_cmc_asym].sh`) in the `scripts` folder of the code. The `DATASET_FOLDER` variable must be initialized to the folder where the `raw` and `experiments` folders are. -## Track-to-Learn (2021) +### Track-to-Learn (2021) + +> Théberge, A., Desrosiers, C., Descoteaux, M., & Jodoin, P. M. (2021). Track-to-learn: A general framework for tractography with deep reinforcement learning. Medical Image Analysis, 72, 102093. + +The reference commit to the main branch for this work is `e5f2e6008e499f46af767940b5b1eec7f9293859`.
See published version: https://www.sciencedirect.com/science/article/pii/S1361841521001390 @@ -137,9 +143,18 @@ A bug in the original implementation prevents the reproduction of the published ## How to cite -If you want to reference this work, please use +If you want to reference this work, please use (at least) one of ``` +@article{theberge2024matters, + title={What matters in reinforcement learning for tractography}, + author={Th{\'e}berge, Antoine and Desrosiers, Christian and Bor{\'e}, Arnaud and Descoteaux, Maxime and Jodoin, Pierre-Marc}, + journal={Medical Image Analysis}, + volume={93}, + pages={103085}, + year={2024}, + publisher={Elsevier} +} @article{theberge2021, title = {Track-to-Learn: A general framework for tractography with deep reinforcement learning}, journal = {Medical Image Analysis}, @@ -150,6 +165,5 @@ doi = {https://doi.org/10.1016/j.media.2021.102093}, url = {https://www.sciencedirect.com/science/article/pii/S1361841521001390}, author = {Antoine Théberge and Christian Desrosiers and Maxime Descoteaux and Pierre-Marc Jodoin}, keywords = {Tractography, Deep Learning, Reinforcement Learning}, -abstract = {Diffusion MRI tractography is currently the only non-invasive tool able to assess the white-matter structural connectivity of a brain. Since its inception, it has been widely documented that tractography is prone to producing erroneous tracks while missing true positive connections. Recently, supervised learning algorithms have been proposed to learn the tracking procedure implicitly from data, without relying on anatomical priors. However, these methods rely on curated streamlines that are very hard to obtain. To remove the need for such data but still leverage the expressiveness of neural networks, we introduce Track-To-Learn: A general framework to pose tractography as a deep reinforcement learning problem. 
Deep reinforcement learning is a type of machine learning that does not depend on ground-truth data but rather on the concept of “reward”. We implement and train algorithms to maximize returns from a reward function based on the alignment of streamlines with principal directions extracted from diffusion data. We show competitive results on known data and little loss of performance when generalizing to new, unseen data, compared to prior machine learning-based tractography algorithms. To the best of our knowledge, this is the first successful use of deep reinforcement learning for tractography.} } ``` diff --git a/TrackToLearn/algorithms/a2c.py b/TrackToLearn/algorithms/a2c.py deleted file mode 100644 index 971c5c9..0000000 --- a/TrackToLearn/algorithms/a2c.py +++ /dev/null @@ -1,187 +0,0 @@ -import numpy as np -import torch - -from collections import defaultdict -from torch import nn -from typing import Tuple - -from TrackToLearn.algorithms.vpg import VPG -from TrackToLearn.algorithms.shared.onpolicy import ActorCritic -from TrackToLearn.algorithms.shared.replay import ReplayBuffer -from TrackToLearn.algorithms.shared.utils import ( - add_item_to_means, mean_losses) - - -class A2C(VPG): - """ - The sample-gathering and training algorithm. - - Based on - Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., - ... & Kavukcuoglu, K. (2016, June). Asynchronous methods for deep - reinforcement learning. In International conference on machine learning - (pp. 1928-1937). PMLR. - - Implementation is based on these PPO implementations - - https://github.com/nikhilbarhate99/PPO-PyTorch/blob/master/PPO_continuous.py # noqa E501 - - https://github.com/seungeunrho/minimalRL/blob/master/ppo-lstm.py - - https://github.com/openai/spinningup/blob/master/spinup/algos/pytorch/ppo/ppo.py # noqa E501 - - and PPO-specific parts were removed to obtain a simple actor-critic algorithm. 
- """ - - def __init__( - self, - input_size: int, - action_size: int, - hidden_dims: str, - action_std: float = 0.0, - lr: float = 3e-4, - gamma: float = 0.99, - lmbda: float = 0.99, - entropy_loss_coeff: float = 0.0001, - max_traj_length: int = 1, - n_actors: int = 4096, - rng: np.random.RandomState = None, - device: torch.device = "cuda:0", - ): - """ - Parameters - ---------- - input_size: int - Input size for the model - action_size: int - Output size for the actor - hidden_dims: str - Widths and layers of the NNs - lr: float - Learning rate for optimizer - gamma: float - Gamma parameter future reward discounting - lmbda: float - Lambda parameter for Generalized Advantage Estimation (GAE): - John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan: - “High-Dimensional Continuous Control Using Generalized - Advantage Estimation”, 2015; - http://arxiv.org/abs/1506.02438 arXiv:1506.02438 - entropy_loss_coeff: float - Entropy bonus for the actor loss - max_traj_length: int - Maximum trajectory length to store in memory. - n_actors: int - Number of learners - rng: np.random.RandomState - rng for randomness. Should be fixed with a seed - device: torch.device, - Device to use for processing (CPU or GPU) - """ - - self.input_size = input_size - self.action_size = action_size - - self.lr = lr - self.gamma = gamma - - self.on_policy = True - - # Declare policy - self.policy = ActorCritic( - input_size, action_size, hidden_dims, device, action_std - ).to(device) - - self.optimizer = torch.optim.Adam( - self.policy.parameters(), lr=lr) - - self.entropy_loss_coeff = entropy_loss_coeff - - # GAE Parameter - self.lmbda = lmbda - - self.max_traj_length = max_traj_length - - self.max_action = 1. 
- self.t = 1 - self.device = device - self.n_actors = n_actors - - # Replay buffer - self.replay_buffer = ReplayBuffer( - input_size, action_size, self.n_actors, self.max_traj_length, - self.gamma, self.lmbda) - - self.rng = rng - - def update( - self, - replay_buffer, - batch_size=4096 - ) -> Tuple[float, float]: - """ - Policy update function, where we want to maximize the probability of - good actions and minimize the probability of bad actions - - Therefore: - - actions with a high probability and positive advantage will - be made a lot more likely - - actions with a low probabiliy and positive advantage will be made - more likely - - actions with a high probability and negative advantage will be - made a lot less likely - - actions with a low probabiliy and negative advantage will be made - less likely - - Parameters - ---------- - replay_buffer: ReplayBuffer - Replay buffer that contains transitions - - Returns - ------- - losses: dict - Dict. containing losses and training-related metrics. 
- """ - - # Sample replay buffer - s, a, ret, adv, *_ = \ - replay_buffer.sample() - - running_losses = defaultdict(list) - - for i in range(0, len(s), batch_size): - j = i + batch_size - - state = torch.FloatTensor(s[i:j]).to(self.device) - action = torch.FloatTensor(a[i:j]).to(self.device) - returns = torch.FloatTensor(ret[i:j]).to(self.device) - advantage = torch.FloatTensor(adv[i:j]).to(self.device) - - v, log_prob, entropy, *_ = self.policy.evaluate(state, action) - - # assert log_prob.size() == returns.size(), \ - # '{}, {}'.format(log_prob.size(), returns.size()) - - # VPG policy loss - actor_loss = -(log_prob * advantage).mean() + \ - -self.entropy_loss_coeff * entropy.mean() - - # AC Critic loss - critic_loss = ((v - returns) ** 2).mean() - - losses = {'actor_loss': actor_loss.item(), - 'critic_loss': critic_loss.item(), - 'entropy': entropy.mean().item(), - 'v': v.mean().item(), - 'returns': returns.mean().item(), - 'adv': advantage.mean().item()} - - running_losses = add_item_to_means(running_losses, losses) - - self.optimizer.zero_grad() - ((critic_loss * 0.5) + actor_loss).backward() - - # Gradient step - nn.utils.clip_grad_norm_(self.policy.parameters(), - 0.5) - self.optimizer.step() - - return mean_losses(running_losses) diff --git a/TrackToLearn/algorithms/acktr.py b/TrackToLearn/algorithms/acktr.py deleted file mode 100644 index a532ff1..0000000 --- a/TrackToLearn/algorithms/acktr.py +++ /dev/null @@ -1,216 +0,0 @@ -import numpy as np -import torch - -from collections import defaultdict -from typing import Tuple - -from TrackToLearn.algorithms.a2c import A2C -from TrackToLearn.algorithms.shared.onpolicy import ActorCritic -from TrackToLearn.algorithms.optim import KFACOptimizer -from TrackToLearn.algorithms.shared.replay import ReplayBuffer -from TrackToLearn.algorithms.shared.utils import ( - add_item_to_means, mean_losses) - - -# TODO : ADD TYPES AND DESCRIPTION -class ACKTR(A2C): - """ - The sample-gathering and training algorithm. 
- - Wu, Y., Mansimov, E., Liao, S., Grosse, R., & Ba, J. (2017). - Scalable trust-region method for deep reinforcement learning using - kronecker-factored approximation. arXiv preprint arXiv:1708.05144. - - Implementation is based on - - https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail/blob/master/a2c_ppo_acktr/algo/a2c_acktr.py # noqa E501 - - https://github.com/alecwangcq/KFAC-Pytorch/blob/master/optimizers/kfac.py - - Some alterations have been made to the algorithms so it could be fitted to the - tractography problem. - - """ - - def __init__( - self, - input_size: int, - action_size: int, - hidden_dims: int, - action_std: float = 0.0, - lr: float = 3e-4, - gamma: float = 0.99, - lmbda: float = 0.99, - entropy_loss_coeff: float = 0.0001, - delta: float = 0.001, - max_traj_length: int = 1, - n_actors: int = 4096, - rng: np.random.RandomState = None, - device: torch.device = "cuda:0", - ): - """ - Parameters - ---------- - input_size: int - Input size for the model - action_size: int - Output size for the actor - hidden_dims: str - Widths and layers of the NNs - lr: float - Learning rate for optimizer - gamma: float - Gamma parameter future reward discounting - lmbda: float - Lambda parameter for Generalized Advantage Estimation (GAE): - John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan: - “High-Dimensional Continuous Control Using Generalized - Advantage Estimation”, 2015; - http://arxiv.org/abs/1506.02438 arXiv:1506.02438 - entropy_loss_coeff: float - Entropy bonus for the actor loss - delta: float - Hyperparameter for KFAC. Controls the "distance" between - the new and old policies. - max_traj_length: int - Maximum trajectory length to store in memory. - n_actors: int - Number of learners - rng: np.random.RandomState - rng for randomness. 
Should be fixed with a seed - device: torch.device, - Device to use for processing (CPU or GPU) - """ - - self.input_size = input_size - self.action_size = action_size - - self.lr = lr - self.gamma = gamma - - self.on_policy = True - - # Declare policy - self.policy = ActorCritic( - input_size, action_size, hidden_dims, device, action_std - ).to(device) - - # Optimizer for actor - self.optimizer = KFACOptimizer( - self.policy, lr=lr, kl_clip=delta) - - self.entropy_loss_coeff = entropy_loss_coeff - - # GAE Parameter - self.lmbda = lmbda - - self.delta = delta - - self.max_traj_length = max_traj_length - - self.max_action = 1. - self.t = 1 - self.device = device - self.n_actors = n_actors - - # Replay buffer - self.replay_buffer = ReplayBuffer( - input_size, action_size, n_actors, self.max_traj_length, - self.gamma, self.lmbda) - - self.rng = rng - - def update( - self, - replay_buffer, - batch_size: int = 8192, - ) -> Tuple[float, float]: - """ - Policy update function, where we want to maximize the probability of - good actions and minimize the probability of bad actions - - Therefore: - - actions with a high probability and positive advantage will - be made a lot more likely - - actions with a low probabiliy and positive advantage will be made - more likely - - actions with a high probability and negative advantage will be - made a lot less likely - - actions with a low probabiliy and negative advantage will be made - less likely - - ACKTR improves upon the standard policy gradient update by computing a - "trust-region", i.e. a maximum amount the policy can change at each - update. - - Parameters - ---------- - replay_buffer: ReplayBuffer - Replay buffer that contains transitions - - Returns - ------- - losses: dict - Dict. containing losses and training-related metrics. 
- """ - - # Sample replay buffer - s, a, ret, adv, *_ = \ - replay_buffer.sample() - - running_losses = defaultdict(list) - - for i in range(0, len(s), batch_size): - j = i + batch_size - - state = torch.FloatTensor(s[i:j]).to(self.device) - action = torch.FloatTensor(a[i:j]).to(self.device) - returns = torch.FloatTensor(ret[i:j]).to(self.device) - advantage = torch.FloatTensor(adv[i:j]).to(self.device) - - v, log_prob, entropy, *_ = self.policy.evaluate(state, action) - - # Surrogate policy loss - assert log_prob.size() == advantage.size(), \ - '{}, {}'.format(log_prob.size(), advantage.size()) - - # Finding V Loss: - assert returns.size() == v.size(), \ - '{}, {}'.format(returns.size(), v.size()) - - # Policy loss - actor_loss = -(log_prob * advantage).mean() + \ - -self.entropy_loss_coeff * entropy.mean() - - # ACKTR critic loss - # based on ikostrikov's implementation - critic_loss = ((v - returns) ** 2).mean() - - if self.optimizer.steps % self.optimizer.Ts == 0: - self.policy.zero_grad() - pg_fisher_loss = -log_prob.mean() - - noisy_v = v + torch.randn(v.size(), device=self.device) - vf_fisher_loss = -(v - noisy_v.detach()).pow(2).mean() - - fisher_loss = pg_fisher_loss + vf_fisher_loss - - self.optimizer.acc_stats = True - fisher_loss.backward(retain_graph=True) - self.optimizer.acc_stats = False - - losses = {'actor_loss': actor_loss.item(), - 'critic_loss': critic_loss.item(), - 'v': v.mean().item(), - 'returns': returns.mean().item(), - 'adv': advantage.mean().item(), - 'pg_fisher_loss': pg_fisher_loss.item(), - 'vf_fisher_loss': vf_fisher_loss.item(), - 'entropy': entropy.mean().item()} - - running_losses = add_item_to_means(running_losses, losses) - - # Gradient step - self.optimizer.zero_grad() - ((critic_loss * 0.5) + actor_loss).backward() - self.optimizer.step() - - return mean_losses(running_losses) diff --git a/TrackToLearn/algorithms/ddpg.py b/TrackToLearn/algorithms/ddpg.py index b3fede3..e79eacd 100644 --- a/TrackToLearn/algorithms/ddpg.py +++ 
b/TrackToLearn/algorithms/ddpg.py @@ -15,7 +15,9 @@ class DDPG(RLAlgorithm): """ - Training algorithm. + NOTE: LEGACY CODE. The `_episode` function is used. The actual DDPG + learning algorithm has not been tested in a while. + Based on Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., ... & Wierstra, D. (2015). Continuous control with deep @@ -28,7 +30,6 @@ class DDPG(RLAlgorithm): Some alterations have been made to the algorithms so it could be fitted to the tractography problem. - """ def __init__( @@ -40,6 +41,8 @@ def __init__( lr: float = 3e-4, gamma: float = 0.99, n_actors: int = 4096, + batch_size: int = 2**12, + replay_size: int = 1e6, rng: np.random.RandomState = None, device: torch.device = "cuda:0", ): @@ -50,21 +53,24 @@ def __init__( Input size for the model action_size: int Output size for the actor - hidden_size: int - Width of the model + hidden_dims: str + Dimensions of the hidden layers for the actor and critic action_std: float - Standard deviation on actions for exploration + Standard deviation of the noise added to the actor's output lr: float - Learning rate for optimizer + Learning rate for the optimizer(s) gamma: float - Gamma parameter future reward discounting + Discount factor n_actors: int - Number of learners + Number of actors to use + batch_size: int + Batch size to sample the replay buffer + replay_size: int + Size of the replay buffer rng: np.random.RandomState - rng for randomness. Should be fixed with a seed - device: torch.device, - Device to use for processing (CPU or GPU) - Should always on GPU + Random number generator + device: torch.device + Device to train on. 
Should always be cuda:0 """ self.input_size = input_size @@ -75,21 +81,21 @@ def __init__( self.rng = rng # Initialize main policy - self.policy = ActorCritic( + self.agent = ActorCritic( input_size, action_size, hidden_dims, device, ) # Initialize target policy to provide baseline - self.target = copy.deepcopy(self.policy) + self.target = copy.deepcopy(self.agent) # DDPG requires a different model for actors and critics # Optimizer for actor self.actor_optimizer = torch.optim.Adam( - self.policy.actor.parameters(), lr=lr) + self.agent.actor.parameters(), lr=lr) # Optimizer for critic self.critic_optimizer = torch.optim.Adam( - self.policy.critic.parameters(), lr=lr) + self.agent.critic.parameters(), lr=lr) # DDPG-specific parameters self.action_std = action_std @@ -100,9 +106,12 @@ def __init__( self.total_it = 0 self.tau = 0.005 + self.batch_size = batch_size + self.replay_size = replay_size + # Replay buffer self.replay_buffer = OffPolicyReplayBuffer( - input_size, action_size) + input_size, action_size, max_size=replay_size) self.t = 1 self.rng = rng @@ -114,15 +123,18 @@ def sample_action( state: torch.Tensor ) -> np.ndarray: """ Sample an action according to the algorithm. + DDPG uses a deterministic policy, so Gaussian noise is added to the + action to explore.
""" - # Select action according to policy + noise for exploration - a = self.policy.select_action(state) - action = ( - a + self.rng.normal( - 0, self.max_action * self.action_std, - size=a.shape) - ) + with torch.no_grad(): + # Select action according to policy + noise for exploration + a = self.agent.select_action(state) + action = ( + a + torch.normal( + 0, self.max_action * self.action_std, + size=a.shape, device=self.device) + ) return action @@ -148,32 +160,37 @@ def _episode( Returns ------- running_reward: float - Cummulative training steps reward - actor_loss: float - Policty gradient loss of actor - critic_loss: float - MSE loss of critic + Sum of rewards gathered during the episode + running_losses: dict + Dict. containing losses and training-related metrics. episode_length: int - Length of episode aka how many transitions were gathered + Length of the episode + running_reward_factors: dict + Dict. containing the factors that contributed to the reward """ running_reward = 0 state = initial_state done = False running_losses = defaultdict(list) + running_reward_factors = defaultdict(list) episode_length = 0 while not np.all(done): # Select action according to policy + noise for exploration - action = self.sample_action(state) + with torch.no_grad(): + action = self.sample_action(state) - self.t += action.shape[0] # Perform action - next_state, reward, done, _ = env.step(action) + next_state, reward, done, info = env.step( + action.to(device='cpu', copy=True).numpy()) done_bool = done + running_reward_factors = add_item_to_means( + running_reward_factors, info['reward_info']) + # Store data in replay buffer # WARNING: This is a bit of a trick and I'm not entirely sure this # is legal. 
This is effectively adding to the replay buffer as if @@ -183,34 +200,40 @@ def _episode( # I'm keeping it since since it reaaaally speeds up training with # no visible costs self.replay_buffer.add( - state.cpu().numpy(), action, next_state.cpu().numpy(), - reward[..., None], done_bool[..., None]) + state.to('cpu', copy=True), + action.to('cpu', copy=True), + next_state.to('cpu', copy=True), + torch.as_tensor(reward[..., None], dtype=torch.float32), + torch.as_tensor(done_bool[..., None], dtype=torch.float32)) running_reward += sum(reward) # Train agent after collecting sufficient data if self.t >= self.start_timesteps: + + batch = self.replay_buffer.sample(self.batch_size) losses = self.update( - self.replay_buffer) + batch) running_losses = add_item_to_means(running_losses, losses) + self.t += action.shape[0] + # "Harvesting" here means removing "done" trajectories # from state as well as removing the associated streamlines # This line also set the next_state as the state - state, _ = env.harvest(next_state) + state, _ = env.harvest() # Keeping track of episode length episode_length += 1 - return ( running_reward, running_losses, - episode_length) + episode_length, + running_reward_factors) def update( self, - replay_buffer: OffPolicyReplayBuffer, - batch_size: int = 4096 + batch, ) -> Tuple[float, float]: """ @@ -223,34 +246,34 @@ def update( Parameters ---------- - replay_buffer: ReplayBuffer - Replay buffer that contains transitions - batch_size: int - Batch size to sample the memory + batch: tuple + Tuple containing the batch of data to train on, including state, + action, next_state, reward, not_done. Returns ------- losses: dict - Dict. containing losses and training-related metrics. + Dictionary containing the losses for the actor and critic and + various other metrics. 
""" self.total_it += 1 # Sample replay buffer state, action, next_state, reward, not_done = \ - replay_buffer.sample(batch_size) + batch with torch.no_grad(): # Select action according to policy and add noise noise = torch.randn_like(action) * (self.action_std * 2) next_action = self.target.actor(next_state) + noise - # Compute the target Q value + # Compute the target Q value using the target critic target_Q = self.target.critic( next_state, next_action) target_Q = reward + not_done * self.gamma * target_Q # Get current Q estimates - current_Q = self.policy.critic( + current_Q = self.agent.critic( state, action) # Compute critic loss @@ -262,8 +285,8 @@ def update( self.critic_optimizer.step() # Compute actor loss - actor_loss = -self.policy.critic( - state, self.policy.actor(state)).mean() + actor_loss = -self.agent.critic( + state, self.agent.actor(state)).mean() # Optimize the actor self.actor_optimizer.zero_grad() @@ -271,22 +294,22 @@ def update( self.actor_optimizer.step() losses = { - 'actor_loss': actor_loss.item(), - 'critic_loss': critic_loss.item(), - 'Q': current_Q.mean().item(), - 'Q\'': target_Q.mean().item(), + 'actor_loss': actor_loss.detach(), + 'critic_loss': critic_loss.detach(), + 'Q': current_Q.mean().detach(), + 'Q\'': target_Q.mean().detach(), } # Update the frozen target models for param, target_param in zip( - self.policy.critic.parameters(), + self.agent.critic.parameters(), self.target.critic.parameters() ): target_param.data.copy_( self.tau * param.data + (1 - self.tau) * target_param.data) for param, target_param in zip( - self.policy.actor.parameters(), + self.agent.actor.parameters(), self.target.actor.parameters() ): target_param.data.copy_( diff --git a/TrackToLearn/algorithms/gym_rl.py b/TrackToLearn/algorithms/gym_rl.py deleted file mode 100644 index ca15160..0000000 --- a/TrackToLearn/algorithms/gym_rl.py +++ /dev/null @@ -1,78 +0,0 @@ -import torch - -from typing import Tuple - -from TrackToLearn.algorithms.rl import 
RLAlgorithm -from TrackToLearn.environments.env import BaseEnv - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - - -class GymRLAlgorithm(RLAlgorithm): - - def gym_train( - self, - env: BaseEnv, - ) -> Tuple[float, float, float]: - """ - Call the main training loop - - Parameters - ---------- - env: BaseEnv - The environment actions are applied on. Provides the state fed to - the RL algorithm - - Returns - ------- - actor_loss: float - Cumulative policy training loss - critic_loss: float - Cumulative critic training loss - running_reward: float - Cummulative training steps reward - """ - - self.policy.train() - - state = env.reset() - - # Track forward - _, reward, losses, length = \ - self._episode(state, env) - - return ( - losses, - reward, - length) - - def gym_validation( - self, - env: BaseEnv, - render: bool = False, - ) -> float: - """ - Call the main loop - - Parameters - ---------- - env: BaseEnv - The environment actions are applied on. Provides the state fed to - the RL algorithm - - Returns - ------- - running_reward: float - Cummulative training steps reward - """ - # Switch policy to eval mode so no gradients are computed - self.policy.eval() - state = env.reset() - if render: - env.render() - - # Track forward - _, reward = self._validation_episode( - state, env) - - return reward diff --git a/TrackToLearn/algorithms/optim.py b/TrackToLearn/algorithms/optim.py deleted file mode 100644 index 11e3f74..0000000 --- a/TrackToLearn/algorithms/optim.py +++ /dev/null @@ -1,264 +0,0 @@ -import numpy as np -import torch - - -class KFACOptimizer(torch.optim.Optimizer): - """ - Implementation is based on - - https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail/blob/master/a2c_ppo_acktr/algo/kfac.py # noqa E501 - - https://github.com/alecwangcq/KFAC-Pytorch/blob/master/optimizers/kfac.py - - See https://www.youtube.com/watch?v=qAVZd6dHxPA for a nice explanation - - I have renamed variables, added commments and references and rearanged 
functions in - a way that made more sense to me. - - References: - [1] - Martens, J., & Grosse, R. (2015, June). Optimizing neural networks with kronecker-factored approximate curvature. In International conference on machine learning (pp. 2408-2417). PMLR. # noqa E501 - [2] - Grosse, R., & Martens, J. (2016, June). A kronecker-factored approximate fisher matrix for convolution layers. In International Conference on Machine Learning (pp. 573-582). PMLR. # noqa E501 - [3] - Wu, Y., Mansimov, E., Liao, S., Grosse, R., & Ba, J. (2017). Scalable trust-region method for deep reinforcement learning using kronecker-factored approximation. arXiv preprint arXiv:1708.05144. # noqa E501 - """ - - def __init__(self, - model, - lr=0.25, - momentum=0.9, - stat_decay=0.99, - kl_clip=0.001, - damping=1e-2, - weight_decay=0, - Ts=1, - Tf=10): - - defaults = dict(lr=lr, momentum=momentum, damping=damping, - weight_decay=weight_decay) - - super(KFACOptimizer, self).__init__(model.parameters(), defaults) - - self.optim = torch.optim.SGD( - model.parameters(), - lr=lr * (1 - momentum), - momentum=momentum) - - self.known_modules = {'Linear'} - - self.modules = [] - self.grad_outputs = {} - - self.model = model - self._prepare_model() - - self.steps = 0 - - # a = activation of module, referred to as uppercase Gamma in [2] - # m_aa = Psi/Omega* = second moment matrix of a - # s = pre-activation derivatives w.r.t loss, - # m_ss = Gamma = second moment matrix of s - # See page 5 of [2] - # d, Q = orthogonal eigendecompositions of Gamma/Psi - # See page 26 of [2] - self.m_aa, self.m_ss, self.Q_a, self.Q_s, self.d_a, self.d_s = \ - {}, {}, {}, {}, {}, {} - - self.momentum = momentum - self.stat_decay = stat_decay - - self.lr = lr - self.kl_clip = kl_clip - self.damping = damping - self.weight_decay = weight_decay - - self.Ts = Ts # statistics update period - self.Tf = Tf # inverse update period - - # I'm confused regarding the difference between Omega and Psi in [2] - # as they're both 
referring to the matrix of autocovariance of s - - @staticmethod - def _get_gradient(m, classname): - """ Get gradient from layer and handle bias - """ - p_grad_mat = m.weight.grad.data - if m.bias is not None: - p_grad_mat = torch.cat( - [p_grad_mat, m.bias.grad.data.view(-1, 1)], dim=1) - return p_grad_mat - - @staticmethod - def _compute_cov_a(a): - """ Compute Omega/Psi/A. See page 11 of [2] - """ - batch_size = a.size()[0] - a = torch.cat([a, a.new_ones(a.size()[0], 1)], dim=1) - return a.t() @ (a / batch_size) - - @staticmethod - def _compute_cov_s(s): - """ Compute Gamma/S. See page 11 of [2] - """ - batch_size = s.size()[0] - s_ = s * batch_size - return s_.t() @ (s_ / batch_size) - - @staticmethod - def _update_running_stat(m, M, stat_decay): - """ Update statistic S or A with moving average - """ - M *= stat_decay / (1 - stat_decay) - M += m - M *= (1 - stat_decay) - - def _save_input(self, module, inpt): - """ Build the Omega/Psi matrix - """ - if torch.is_grad_enabled() and self.steps % self.Ts == 0: - aa = self._compute_cov_a(inpt[0].data) - if self.steps == 0: - self.m_aa[module] = aa.clone() - self._update_running_stat(aa, self.m_aa[module], self.stat_decay) - - def _save_grad_output(self, module, grad_input, grad_output): - """ Build the Gamma matrix - """ - if self.acc_stats and self.steps % self.Ts == 0: - ss = self._compute_cov_s( - grad_output[0].data) - # Initialize buffers - if self.steps == 0: - self.m_ss[module] = ss.clone() - self._update_running_stat(ss, self.m_ss[module], self.stat_decay) - - def _prepare_model(self): - """ Register hooks - """ - count = 0 - for module in self.model.modules(): - classname = module.__class__.__name__ - if classname in self.known_modules: - self.modules.append(module) - module.register_forward_pre_hook(self._save_input) - module.register_backward_hook(self._save_grad_output) - count += 1 - - def _update_inv(self, m): - """Eigen decomposition for computing inverse of the fisher matrix. 
- Assigns the decomposition to self directly. See [2], p.26 - - Arguments - --------- - m: layer - - Returns - ------- - None - """ - - eps = 1e-10 # for numerical stability - self.d_a[m], self.Q_a[m] = torch.linalg.eigh( - self.m_aa[m]) - self.d_s[m], self.Q_s[m] = torch.linalg.eigh( - self.m_ss[m]) - - self.d_a[m].mul_((self.d_a[m] > eps).float()) - self.d_s[m].mul_((self.d_s[m] > eps).float()) - - def _get_natural_grad(self, m, p_grad_mat, damping): - """ Compute natural gradient with the trick defined - in page 26 of [2]. - - Arguments - --------- - m: layer - p_grad_mat: gradient matrix - damping: damping parameter (gamma) - - Returns - ------- - v: list of gradients w.r.t to the parameters in `m` - - """ - - v1 = self.Q_s[m].t() @ p_grad_mat @ self.Q_a[m] - v2 = v1 / (self.d_s[m].unsqueeze(1) * - self.d_a[m].unsqueeze(0) + damping) - v = self.Q_s[m] @ v2 @ self.Q_a[m].t() - if m.bias is not None: - v = [v[:, :-1], v[:, -1:]] - v[0] = v[0].view(m.weight.grad.data.size()) - v[1] = v[1].view(m.bias.grad.data.size()) - else: - v = [v.view(m.weight.grad.data.size())] - - return v - - def _kl_clip(self, updates, lr): - """ Return clipped update - """ - vg_sum = 0 - for m in self.modules: - v = updates[m] - vg_sum += (v[0] * m.weight.grad.data * lr ** 2).sum().item() - if m.bias is not None: - vg_sum += (v[1] * m.bias.grad.data * lr ** 2).sum().item() - nu = min(1.0, np.sqrt(self.kl_clip / (vg_sum + 1e-10))) - return nu - - def _update_grad(self, updates, nu): - """ Update the gradients - """ - for m in self.modules: - v = updates[m] - m.weight.grad.data.copy_(v[0]) - m.weight.grad.data.mul_(nu) - if m.bias is not None: - m.bias.grad.data.copy_(v[1]) - m.bias.grad.data.mul_(nu) - - def _step(self): - """ Apply gradients to weights - """ - for group in self.param_groups: - weight_decay = group['weight_decay'] - momentum = group['momentum'] - lr = group['lr'] - - for p in group['params']: - if p.grad is None: - continue - d_p = p.grad.data - if weight_decay != 0 and 
self.steps >= 20 * self.Ts: - d_p.add_(p.data, alpha=weight_decay) - if momentum != 0: - param_state = self.state[p] - if 'momentum_buffer' not in param_state: - buf = param_state['momentum_buffer'] = \ - torch.zeros_like(p.data) - buf.mul_(momentum).add_(d_p) - else: - buf = param_state['momentum_buffer'] - buf.mul_(momentum).add_(d_p) - d_p = buf - - p.data.add_(d_p, alpha=-lr) - - def step(self, closure=None): - """ Perform optimizer step - """ - - group = self.param_groups[0] - lr = group['lr'] - damping = group['damping'] - updates = {} - for m in self.modules: - classname = m.__class__.__name__ - if self.steps % self.Tf == 0: - self._update_inv(m) - p_grad_mat = self._get_gradient(m, classname) - v = self._get_natural_grad(m, p_grad_mat, damping) - updates[m] = v - nu = self._kl_clip(updates, lr) - self._update_grad(updates, nu) - - self.optim.step() - # self._step() - self.steps += 1 diff --git a/TrackToLearn/algorithms/ppo.py b/TrackToLearn/algorithms/ppo.py deleted file mode 100644 index 56f2e0d..0000000 --- a/TrackToLearn/algorithms/ppo.py +++ /dev/null @@ -1,246 +0,0 @@ -import numpy as np -import torch - -from collections import defaultdict -from torch import nn -from typing import Tuple - -from TrackToLearn.algorithms.a2c import A2C -from TrackToLearn.algorithms.shared.onpolicy import ActorCritic -from TrackToLearn.algorithms.shared.replay import ReplayBuffer -from TrackToLearn.algorithms.shared.utils import ( - add_item_to_means, mean_losses) - - -# TODO : ADD TYPES AND DESCRIPTION -class PPO(A2C): - """ - The sample-gathering and training algorithm. 
- Based on - John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford: - “Proximal Policy Optimization Algorithms”, 2017; - http://arxiv.org/abs/1707.06347 arXiv:1707.06347 - - Implementation is based on - - https://github.com/nikhilbarhate99/PPO-PyTorch/blob/master/PPO_continuous.py # noqa E501 - - https://github.com/seungeunrho/minimalRL/blob/master/ppo-lstm.py - - https://github.com/openai/spinningup/blob/master/spinup/algos/pytorch/ppo/ppo.py # noqa E501 - - Some alterations have been made to the algorithms so it could be fitted to the - tractography problem. - - """ - - def __init__( - self, - input_size: int, - action_size: int, - hidden_dims: str, - action_std: float = 0.0, - lr: float = 3e-4, - gamma: float = 0.99, - lmbda: float = 0.99, - K_epochs: int = 80, - eps_clip: float = 0.01, - entropy_loss_coeff: float = 0.01, - max_traj_length: int = 1, - n_actors: int = 4096, - rng: np.random.RandomState = None, - device: torch.device = "cuda:0", - ): - """ - Parameters - ---------- - input_size: int - Input size for the model - action_size: int - Output size for the actor - hidden_dims: str - Widths and layers of the NNs - lr: float - Learning rate for optimizer - gamma: float - Gamma parameter future reward discounting - lmbda: float - Lambda parameter for Generalized Advantage Estimation (GAE): - John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan: - “High-Dimensional Continuous Control Using Generalized - Advantage Estimation”, 2015; - http://arxiv.org/abs/1506.02438 arXiv:1506.02438 - entropy_loss_coeff: float - Entropy bonus for the actor loss - K_epochs: int - How many epochs to run the optimizer using the current samples - PPO allows for many training runs on the same samples - max_traj_length: int - Maximum trajectory length to store in memory. - eps_clip: float - Clipping parameter for PPO - entropy_loss_coeff: float, - Loss coefficient on policy entropy - Should sum to 1 with other loss coefficients - n_actors: int - Number of learners. 
- rng: np.random.RandomState - rng for randomness. Should be fixed with a seed - device: torch.device, - Device to use for processing (CPU or GPU) - """ - - self.input_size = input_size - self.action_size = action_size - - self.lr = lr - self.gamma = gamma - - self.on_policy = True - - # Declare policy - self.policy = ActorCritic( - input_size, action_size, hidden_dims, device, action_std, - ).to(device) - - # Note the optimizer is ran on the target network's params - self.optimizer = torch.optim.Adam( - self.policy.parameters(), lr=lr) - - # GAE Parameter - self.lmbda = lmbda - - # PPO Specific parameters - self.max_traj_length = max_traj_length - self.K_epochs = K_epochs - self.lmbda = lmbda - self.eps_clip = eps_clip - self.entropy_loss_coeff = entropy_loss_coeff - - self.max_action = 1. - self.t = 1 - self.device = device - self.n_actors = n_actors - - # Replay buffer - self.replay_buffer = ReplayBuffer( - input_size, action_size, n_actors, - max_traj_length, self.gamma, self.lmbda) - - self.rng = rng - - def update( - self, - replay_buffer, - batch_size=4096, - ) -> Tuple[float, float]: - """ - Policy update function, where we want to maximize the probability of - good actions and minimize the probability of bad actions - - The general idea is to compare the current policy and the target - policies. To do so, the "ratio" is calculated by comparing the - probabilities of actions for both policies. The ratio is then - multiplied by the "advantage", which is how better than average - the policy performs. 
- - Therefore: - - actions with a high probability and positive advantage will - be made a lot more likely - - actions with a low probabiliy and positive advantage will be made - more likely - - actions with a high probability and negative advantage will be - made a lot less likely - - actions with a low probabiliy and negative advantage will be made - less likely - - PPO adds a twist to this where, since the advantage estimation is done - with your (potentially bad) networks, a "pessimistic view" is used - where gains will be clamped, so that high gradients (for very probable - or with a high-amplitude advantage) are tamed. This is to prevent your - network from diverging too much in the early stages - - Parameters - ---------- - replay_buffer: ReplayBuffer - Replay buffer that contains transitions - - Returns - ------- - losses: dict - Dict. containing losses and training-related metrics. - """ - - running_losses = defaultdict(list) - - # Sample replay buffer - s, a, ret, adv, p, *_ = \ - replay_buffer.sample() - - # PPO allows for multiple gradient steps on the same data - # TODO: Should be switched with the batch ? 
- for _ in range(self.K_epochs): - - for i in range(0, len(s), batch_size): - # you can slice further than an array's length - j = i + batch_size - state = torch.FloatTensor(s[i:j]).to(self.device) - action = torch.FloatTensor(a[i:j]).to(self.device) - returns = torch.FloatTensor(ret[i:j]).to(self.device) - advantage = torch.FloatTensor(adv[i:j]).to(self.device) - old_prob = torch.FloatTensor(p[i:j]).to(self.device) - - # V_pi'(s) and pi'(a|s) - v, logprob, entropy, *_ = self.policy.evaluate( - state, - action) - - # Ratio between probabilities of action according to policy and - # target policies - assert logprob.size() == old_prob.size(), \ - '{}, {}'.format(logprob.size(), old_prob.size()) - ratio = torch.exp(logprob - old_prob) - - # Surrogate policy loss - assert ratio.size() == advantage.size(), \ - '{}, {}'.format(ratio.size(), advantage.size()) - - # Finding V Loss: - assert returns.size() == v.size(), \ - '{}, {}'.format(returns.size(), v.size()) - - surrogate_policy_loss_1 = ratio * advantage - surrogate_policy_loss_2 = torch.clamp( - ratio, - 1-self.eps_clip, - 1+self.eps_clip) * advantage - - # PPO "pessimistic" policy loss - actor_loss = -(torch.min( - surrogate_policy_loss_1, - surrogate_policy_loss_2)).mean() + \ - -self.entropy_loss_coeff * entropy.mean() - - # AC Critic loss - critic_loss = ((v - returns) ** 2).mean() - - losses = { - 'actor_loss': actor_loss.item(), - 'critic_loss': critic_loss.item(), - 'ratio': ratio.mean().item(), - 'surrogate_loss_1': surrogate_policy_loss_1.mean().item(), - 'surrogate_loss_2': surrogate_policy_loss_2.mean().item(), - 'advantage': advantage.mean().item(), - 'entropy': entropy.mean().item(), - 'ret': returns.mean().item(), - 'v': v.mean().item(), - } - - running_losses = add_item_to_means(running_losses, losses) - - self.optimizer.zero_grad() - ((critic_loss * 0.5) + actor_loss).backward() - - # Gradient step - nn.utils.clip_grad_norm_(self.policy.parameters(), - 0.5) - self.optimizer.step() - - return 
mean_losses(running_losses) diff --git a/TrackToLearn/algorithms/rl.py b/TrackToLearn/algorithms/rl.py index ba8bd56..5029d3b 100644 --- a/TrackToLearn/algorithms/rl.py +++ b/TrackToLearn/algorithms/rl.py @@ -59,7 +59,7 @@ def validation_episode( self, initial_state, env: BaseEnv, - compress=False, + prob: float = 1., ): """ Main loop for the algorithm @@ -88,9 +88,10 @@ def validation_episode( # Select action according to policy + noise to make tracking # probabilistic with torch.no_grad(): - action = self.policy.select_action(state) + action = self.agent.select_action(state, probabilistic=prob) # Perform action - next_state, reward, done, *_ = env.step(action) + next_state, reward, done, *_ = env.step( + action.to(device='cpu', copy=True).numpy()) # Keep track of reward running_reward += sum(reward) @@ -98,6 +99,8 @@ def validation_episode( # "Harvesting" here means removing "done" trajectories # from state. This line also set the next_state as the # state - state, _ = env.harvest(next_state) + state, _ = env.harvest() + + # env.render() return running_reward diff --git a/TrackToLearn/algorithms/sac.py b/TrackToLearn/algorithms/sac.py index 7e4ffc6..ba1d00e 100644 --- a/TrackToLearn/algorithms/sac.py +++ b/TrackToLearn/algorithms/sac.py @@ -15,9 +15,9 @@ class SAC(DDPG): Based on Haarnoja, T., Zhou, A., Abbeel, P., & Levine, S. (2018, July). Soft - actor-critic: Off-policy maximum entropy deep reinforcement learning with - a stochastic actor. In International conference on machine learning - (pp. 1861-1870). PMLR. + actor-critic: Off-policy maximum entropy deep reinforcement learning + with a stochastic actor. In International conference on machine + learning (pp. 1861-1870). PMLR. 
Implementation is based on Spinning Up's and rlkit @@ -38,31 +38,38 @@ def __init__( gamma: float = 0.99, alpha: float = 0.2, n_actors: int = 4096, + batch_size: int = 2**12, + replay_size: int = 1e6, rng: np.random.RandomState = None, device: torch.device = "cuda:0", ): - """ + """ Initialize the algorithm. This includes the replay buffer, + the policy and the target policy. + Parameters ---------- input_size: int Input size for the model action_size: int Output size for the actor - hidden_size: int - Width of the model + hidden_dims: str + Dimensions of the hidden layers lr: float - Learning rate for optimizer + Learning rate for the optimizer(s) gamma: float - Gamma parameter future reward discounting + Discount factor alpha: float - Parameter for entropy bonus + Entropy regularization coefficient n_actors: int - Batch size for replay buffer sampling + Number of actors to use + batch_size: int + Batch size for the update + replay_size: int + Size of the replay buffer rng: np.random.RandomState - rng for randomness. Should be fixed with a seed - device: torch.device, - Device to use for processing (CPU or GPU) - Should always on GPU + Random number generator + device: torch.device + Device to train on. Should always be cuda:0 """ self.max_action = 1. 
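Both the DDPG and SAC hunks in this patch end their `update` with the same frozen-target bookkeeping: a `zip` over agent and target parameters with `tau = 0.005`. That pattern is a Polyak soft update, and it can be sketched standalone as below (a minimal illustration, not the repo's code; the `soft_update` helper and the toy `nn.Linear` agent are hypothetical):

```python
import copy

import torch
import torch.nn as nn


def soft_update(agent: nn.Module, target: nn.Module, tau: float = 0.005):
    """Polyak-average agent weights into the frozen target network."""
    with torch.no_grad():
        for param, target_param in zip(agent.parameters(), target.parameters()):
            # Same update as in the diff: target <- tau * agent + (1 - tau) * target
            target_param.data.copy_(
                tau * param.data + (1 - tau) * target_param.data)


# The target starts as a deep copy of the agent, then tracks it slowly.
agent = nn.Linear(4, 2)
target = copy.deepcopy(agent)

with torch.no_grad():
    agent.weight.add_(1.0)  # pretend a gradient step moved the agent's weights

soft_update(agent, target, tau=0.005)
```

With a small `tau`, the target lags far behind the agent, which keeps the bootstrapped Q-targets slowly moving and stabilizes off-policy learning.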
@@ -77,21 +84,21 @@ def __init__( self.rng = rng # Initialize main policy - self.policy = SACActorCritic( + self.agent = SACActorCritic( input_size, action_size, hidden_dims, device, ) # Initialize target policy to provide baseline - self.target = copy.deepcopy(self.policy) + self.target = copy.deepcopy(self.agent) # SAC requires a different model for actors and critics # Optimizer for actor self.actor_optimizer = torch.optim.Adam( - self.policy.actor.parameters(), lr=lr) + self.agent.actor.parameters(), lr=lr) # Optimizer for critic self.critic_optimizer = torch.optim.Adam( - self.policy.critic.parameters(), lr=lr) + self.agent.critic.parameters(), lr=lr) # Temperature self.alpha = alpha @@ -104,9 +111,12 @@ def __init__( self.total_it = 0 self.tau = 0.005 + self.batch_size = batch_size + self.replay_size = replay_size + # Replay buffer self.replay_buffer = OffPolicyReplayBuffer( - input_size, action_size) + input_size, action_size, max_size=replay_size) self.rng = rng @@ -118,45 +128,45 @@ def sample_action( """ # Select action according to policy + noise for exploration - action = self.policy.select_action(state, stochastic=True) + action = self.agent.select_action(state, probabilistic=1.0) return action def update( self, - replay_buffer: OffPolicyReplayBuffer, - batch_size: int = 2**12 + batch, ) -> Tuple[float, float]: """ - SAC improves upon DDPG by: - - Introducing entropy into the objective - - Using Double Q-Learning to fight overestimation + SAC improves over DDPG by introducing an entropy regularization term + in the actor loss. This encourages the policy to be more stochastic, + which improves exploration. Additionally, SAC uses the minimum of two + Q-functions in the value loss, rather than just one Q-function as in + DDPG. This helps mitigate positive value biases and makes learning more + stable. 
Parameters ---------- - replay_buffer: ReplayBuffer - Replay buffer that contains transitions - batch_size: int - Batch size to sample the memory + batch: tuple + Tuple containing the batch of data to train on, including + state, action, next_state, reward, not_done. Returns ------- - running_actor_loss: float - Average policy loss over all gradient steps - running_critic_loss: float - Average critic loss over all gradient steps + losses: dict + Dictionary containing the losses for the actor and critic and + various other metrics. """ self.total_it += 1 # Sample replay buffer state, action, next_state, reward, not_done = \ - replay_buffer.sample(batch_size) + batch - pi, logp_pi = self.policy.act(state) + pi, logp_pi = self.agent.act(state) alpha = self.alpha - q1, q2 = self.policy.critic(state, pi) + q1, q2 = self.agent.critic(state, pi) q_pi = torch.min(q1, q2) # Entropy-regularized policy loss @@ -164,7 +174,7 @@ def update( with torch.no_grad(): # Target actions come from *current* policy - next_action, logp_next_action = self.policy.act(next_state) + next_action, logp_next_action = self.agent.act(next_state) # Compute the target Q value target_Q1, target_Q2 = self.target.critic( @@ -175,7 +185,7 @@ def update( (target_Q - alpha * logp_next_action) # Get current Q estimates - current_Q1, current_Q2 = self.policy.critic( + current_Q1, current_Q2 = self.agent.critic( state, action) # MSE loss against Bellman backup @@ -205,14 +215,14 @@ def update( # Update the frozen target models for param, target_param in zip( - self.policy.critic.parameters(), + self.agent.critic.parameters(), self.target.critic.parameters() ): target_param.data.copy_( self.tau * param.data + (1 - self.tau) * target_param.data) for param, target_param in zip( - self.policy.actor.parameters(), + self.agent.actor.parameters(), self.target.actor.parameters() ): target_param.data.copy_( diff --git a/TrackToLearn/algorithms/sac_auto.py b/TrackToLearn/algorithms/sac_auto.py index 6061a3f..80f860f 
100644 --- a/TrackToLearn/algorithms/sac_auto.py +++ b/TrackToLearn/algorithms/sac_auto.py @@ -43,6 +43,8 @@ def __init__( gamma: float = 0.99, alpha: float = 0.2, n_actors: int = 4096, + batch_size: int = 2**12, + replay_size: int = 1e6, rng: np.random.RandomState = None, device: torch.device = "cuda:0", ): @@ -53,22 +55,24 @@ def __init__( Input size for the model action_size: int Output size for the actor - hidden_size: int - Width of the model + hidden_dims: str + Dimensions of the hidden layers lr: float - Learning rate for optimizer + Learning rate for the optimizer(s) gamma: float - Gamma parameter future reward discounting + Discount factor alpha: float - Initial value of parameter for entropy bonus. - Will get optimized. + Initial entropy coefficient (temperature). n_actors: int - Batch size for replay buffer sampling + Number of actors to use + batch_size: int + Batch size to sample the memory + replay_size: int + Size of the replay buffer rng: np.random.RandomState - rng for randomness. Should be fixed with a seed - device: torch.device, - Device to use for processing (CPU or GPU) - Should always on GPU + Random number generator + device: torch.device + Device to use for the algorithm. Should usually be "cuda:0" """ self.max_action = 1. @@ -82,123 +86,134 @@ def __init__( self.rng = rng - # Initialize main policy - self.policy = SACActorCritic( + # Initialize main agent + self.agent = SACActorCritic( input_size, action_size, hidden_dims, device, ) # Auto-temperature adjustment + # SAC automatically adjusts the temperature to maximize entropy and + # thus exploration, but reduces it over time to converge to a + # somewhat deterministic policy.
starting_temperature = np.log(alpha) # Found empirically self.target_entropy = -np.prod(action_size).item() self.log_alpha = torch.full( (1,), starting_temperature, requires_grad=True, device=device) - + # Optimizer for alpha self.alpha_optimizer = torch.optim.Adam( - [self.log_alpha], lr=lr) + [self.log_alpha], lr=lr) - # Initialize target policy to provide baseline - self.target = copy.deepcopy(self.policy) + # Initialize target agent to provide baseline + self.target = copy.deepcopy(self.agent) # SAC requires a different model for actors and critics # Optimizer for actor self.actor_optimizer = torch.optim.Adam( - self.policy.actor.parameters(), lr=lr) + self.agent.actor.parameters(), lr=lr) # Optimizer for critic self.critic_optimizer = torch.optim.Adam( - self.policy.critic.parameters(), lr=lr) + self.agent.critic.parameters(), lr=lr) # Temperature self.alpha = alpha # SAC-specific parameters self.max_action = 1. self.on_policy = False - self.start_timesteps = 1000 + self.start_timesteps = 80000 self.total_it = 0 self.tau = 0.005 + self.agent_freq = 1 + + self.batch_size = batch_size + self.replay_size = replay_size # Replay buffer self.replay_buffer = OffPolicyReplayBuffer( - input_size, action_size) + input_size, action_size, max_size=self.replay_size) self.rng = rng def update( self, - replay_buffer: OffPolicyReplayBuffer, - batch_size: int = 2**12 + batch, ) -> Tuple[float, float]: """ - SAC Auto improves upon SAC by learning the entropy coefficient - instead of making it a hyperparameter. + SAC Auto improves upon SAC by automatically adjusting the temperature + parameter alpha so that the policy's entropy tracks a target entropy. + This is done by minimizing the temperature loss + J(alpha) = -E_pi [log(alpha) * (log pi(a|s) + H_target)] + where H_target is the target entropy of the policy.
Parameters ---------- - replay_buffer: ReplayBuffer - Replay buffer that contains transitions - batch_size: int - Batch size to sample the memory + batch: Tuple containing the batch of data to train on. Returns ------- - running_actor_loss: float - Average policy loss over all gradient steps - running_critic_loss: float - Average critic loss over all gradient steps + losses: dict + Dictionary containing the losses of the algorithm and various + other metrics. """ self.total_it += 1 # Sample replay buffer state, action, next_state, reward, not_done = \ - replay_buffer.sample(batch_size) - - pi, logp_pi = self.policy.act(state) + batch + # Compute \pi_\theta(s_t) and log \pi_\theta(s_t) + pi, logp_pi = self.agent.act( + state, probabilistic=1.0) + # Compute the temperature loss and the temperature alpha_loss = -(self.log_alpha * ( logp_pi + self.target_entropy).detach()).mean() alpha = self.log_alpha.exp() - q1, q2 = self.policy.critic(state, pi) + # Compute the Q values and the minimum Q value + q1, q2 = self.agent.critic(state, pi) q_pi = torch.min(q1, q2) - # Entropy-regularized policy loss + # Entropy-regularized agent loss actor_loss = (alpha * logp_pi - q_pi).mean() with torch.no_grad(): - # Target actions come from *current* policy - next_action, logp_next_action = self.policy.act(next_state) + # Target actions come from *current* agent + next_action, logp_next_action = self.agent.act( + next_state, probabilistic=1.0) - # Compute the target Q value + # Compute the next Q values using the target agent target_Q1, target_Q2 = self.target.critic( next_state, next_action) target_Q = torch.min(target_Q1, target_Q2) + # Compute the backup which is the Q-learning "target" backup = reward + self.gamma * not_done * \ (target_Q - alpha * logp_next_action) # Get current Q estimates - current_Q1, current_Q2 = self.policy.critic( + current_Q1, current_Q2 = self.agent.critic( state, action) # MSE loss against Bellman backup loss_q1 = F.mse_loss(current_Q1, 
backup.detach()).mean() loss_q2 = F.mse_loss(current_Q2, backup.detach()).mean() - + # Total critic loss critic_loss = loss_q1 + loss_q2 losses = { - 'actor_loss': actor_loss.item(), - 'critic_loss': critic_loss.item(), - 'alpha_loss': alpha_loss.item(), - 'loss_q1': loss_q1.item(), - 'loss_q2': loss_q2.item(), - 'a': alpha.item(), - 'Q1': current_Q1.mean().item(), - 'Q2': current_Q2.mean().item(), - 'backup': backup.mean().item(), + # 'actor_loss': actor_loss.detach(), + # 'alpha_loss': alpha_loss.detach(), + # 'critic_loss': critic_loss.detach(), + # 'loss_q1': loss_q1.detach(), + # 'loss_q2': loss_q2.detach(), + # 'entropy': alpha.detach(), + # 'Q1': current_Q1.mean().detach(), + # 'Q2': current_Q2.mean().detach(), + # 'backup': backup.mean().detach(), } # Optimize the temperature @@ -218,14 +233,14 @@ def update( # Update the frozen target models for param, target_param in zip( - self.policy.critic.parameters(), + self.agent.critic.parameters(), self.target.critic.parameters() ): target_param.data.copy_( self.tau * param.data + (1 - self.tau) * target_param.data) for param, target_param in zip( - self.policy.actor.parameters(), + self.agent.actor.parameters(), self.target.actor.parameters() ): target_param.data.copy_( diff --git a/TrackToLearn/algorithms/shared/offpolicy.py b/TrackToLearn/algorithms/shared/offpolicy.py index f6596a6..0a0a873 100644 --- a/TrackToLearn/algorithms/shared/offpolicy.py +++ b/TrackToLearn/algorithms/shared/offpolicy.py @@ -25,6 +25,7 @@ def __init__( state_dim: int, action_dim: int, hidden_dims: str, + output_activation=nn.Tanh ): """ Parameters: @@ -33,7 +34,7 @@ def __init__( Size of input state action_dim: int Size of output action - hidden_dims: int + hidden_dims: str String representing layer widths """ @@ -46,7 +47,7 @@ def __init__( self.layers = make_fc_network( self.hidden_layers, state_dim, action_dim) - self.output_activation = nn.Tanh() + self.output_activation = output_activation() def forward(self, state: torch.Tensor) 
-> torch.Tensor: """ Forward propagation of the actor. @@ -76,7 +77,7 @@ def __init__( Size of input state action_dim: int Size of output action - hidden_dims: int + hidden_dims: str String representing layer widths """ @@ -90,30 +91,37 @@ def __init__( self.layers = make_fc_network( self.hidden_layers, state_dim, action_dim * 2) - self.output_activation = nn.Tanh() - def forward( self, state: torch.Tensor, - stochastic: bool, + probabilistic: float, ) -> torch.Tensor: - """ Forward propagation of the actor. - Outputs an un-noisy un-normalized action - """ + """ Forward propagation of the actor. The log probability is computed + from the Gaussian distribution of the action and a correction + for the Tanh squashing is applied. + Parameters: + ----------- + state: torch.Tensor + Current state of the environment + probabilistic: float + Factor to multiply the standard deviation by when sampling. + 0 means a deterministic policy, 1 a fully stochastic one. + """ + # Compute mean and log_std from the neural network. Instead of + # having two separate outputs, we have one output of size + # action_dim * 2. The first action_dim entries are the means, and + # the last action_dim are the log_stds. p = self.layers(state) mu = p[:, :self.action_dim] log_std = p[:, self.action_dim:] - + # Constrain log_std inside [LOG_STD_MIN, LOG_STD_MAX] log_std = torch.clamp(log_std, LOG_STD_MIN, LOG_STD_MAX) - std = torch.exp(log_std) - - pi_distribution = Normal(mu, std) - - if stochastic: - pi_action = pi_distribution.rsample() - else: - pi_action = mu + # Compute std from log_std + std = torch.exp(log_std) * probabilistic + # Sample from the Gaussian distribution using the reparametrization trick + pi_distribution = Normal(mu, std, validate_args=False) + pi_action = pi_distribution.rsample() # Trick from Spinning Up's implementation: # Compute logprob from Gaussian, and then apply correction for Tanh @@ -122,11 +130,13 @@ def forward( # original SAC paper (arXiv 1801.01290) and look in appendix C.
# This is a more numerically-stable equivalent to Eq 21. logp_pi = pi_distribution.log_prob(pi_action).sum(axis=-1) + # Squash correction logp_pi -= (2*(np.log(2) - pi_action - F.softplus(-2*pi_action))).sum(axis=1) + # Run actions through tanh to get -1, 1 range pi_action = self.output_activation(pi_action) - + # Return action and logprob return pi_action, logp_pi @@ -181,6 +191,7 @@ def __init__( state_dim: int, action_dim: int, hidden_dims: str, + critic_size_factor=1, ): """ Parameters: @@ -189,14 +200,15 @@ def __init__( Size of input state action_dim: int Size of output action - hidden_dims: int + hidden_dims: str String representing layer widths """ super(DoubleCritic, self).__init__( state_dim, action_dim, hidden_dims) - self.hidden_layers = format_widths(hidden_dims) + self.hidden_layers = format_widths( + hidden_dims) * critic_size_factor self.q1 = make_fc_network( self.hidden_layers, state_dim + action_dim, 1) @@ -250,7 +262,7 @@ def __init__( """ self.device = device self.actor = Actor( - state_dim, action_dim, hidden_dims, + state_dim, action_dim, hidden_dims ).to(device) self.critic = Critic( @@ -272,7 +284,7 @@ def act(self, state: torch.Tensor) -> torch.Tensor: """ return self.actor(state) - def select_action(self, state: np.array, stochastic=False) -> np.ndarray: + def select_action(self, state: np.array, probabilistic=0.0) -> np.ndarray: """ Move state to torch tensor, select action and move it back to numpy array @@ -280,6 +292,8 @@ def select_action(self, state: np.array, stochastic=False) -> np.ndarray: ----------- state: np.array State of the environment + probabilistic: float + Unused as TD3 does not use probabilistic actions. 
Returns: -------- @@ -289,8 +303,7 @@ def select_action(self, state: np.array, stochastic=False) -> np.ndarray: # if state is not batched, expand it to "batch of 1" if len(state.shape) < 2: state = state[None, :] - state = torch.as_tensor(state, dtype=torch.float32, device=self.device) - action = self.act(state).cpu().data.numpy() + action = self.act(state) return action @@ -357,7 +370,9 @@ def train(self): class TD3ActorCritic(ActorCritic): - """ Module that handles the actor and the critic + """ Module that handles the actor and the critic for TD3 + The actor is the same as the DDPG actor, but the critic is different. + """ def __init__( @@ -374,8 +389,9 @@ def __init__( Size of input state action_dim: int Size of output action - hidden_dims: int + hidden_dims: str String representing layer widths + device: torch.device """ self.device = device @@ -406,9 +422,9 @@ def __init__( Size of input state action_dim: int Size of output action - hidden_dim: int - Width of network. Presumes all intermediary - layers are of same size for simplicity + hidden_dims: str + String representing layer widths + device: torch.device """ self.device = device @@ -420,30 +436,37 @@ def __init__( state_dim, action_dim, hidden_dims, ).to(device) - def act(self, state: torch.Tensor, stochastic=True) -> torch.Tensor: + def act(self, state: torch.Tensor, probabilistic=1.0) -> torch.Tensor: """ Select action according to actor Parameters: ----------- state: torch.Tensor Current state of the environment + probabilistic: float + Factor to multiply the standard deviation by when sampling + actions. 
Returns: -------- action: torch.Tensor Action selected by the policy + logprob: torch.Tensor + Log probability of the action """ - action, logprob = self.actor(state, stochastic) + action, logprob = self.actor(state, probabilistic) return action, logprob - def select_action(self, state: np.array, stochastic=False) -> np.ndarray: - """ Move state to torch tensor, select action and - move it back to numpy array + def select_action(self, state: np.array, probabilistic=1.0) -> np.ndarray: + """ Act on a state and return an action. Parameters: ----------- state: np.array State of the environment + probabilistic: float + Factor to multiply the standard deviation by when sampling + actions. Returns: -------- @@ -454,7 +477,6 @@ def select_action(self, state: np.array, stochastic=False) -> np.ndarray: if len(state.shape) < 2: state = state[None, :] - state = torch.as_tensor(state, dtype=torch.float32, device=self.device) - action, _ = self.act(state, stochastic) + action, _ = self.act(state, probabilistic) - return action.cpu().data.numpy() + return action diff --git a/TrackToLearn/algorithms/shared/onpolicy.py b/TrackToLearn/algorithms/shared/onpolicy.py deleted file mode 100644 index a20b0b8..0000000 --- a/TrackToLearn/algorithms/shared/onpolicy.py +++ /dev/null @@ -1,420 +0,0 @@ -import numpy as np -import torch - -from os.path import join as pjoin -from torch import nn -from torch.distributions.normal import Normal -from typing import Tuple - -from TrackToLearn.algorithms.shared.utils import ( - format_widths, make_fc_network) - - -class Actor(nn.Module): - """ Actor module that takes in a state and outputs an action. 
- Its policy is the neural network layers - """ - - def __init__( - self, - state_dim: int, - action_dim: int, - hidden_dims: str, - device: torch.device, - action_std: float = 0.0, - ): - """ - Parameters: - ----------- - state_dim: int - Size of input state - action_dim: int - Size of output action - hidden_dims: str - String representing layer widths - - """ - super(Actor, self).__init__() - - self.hidden_layers = format_widths(hidden_dims) - - self.layers = make_fc_network( - self.hidden_layers, state_dim, action_dim, activation=nn.Tanh) - - # State-independent STD, as opposed to SAC which uses a - # state-dependent STD. - # See https://spinningup.openai.com/en/latest/algorithms/sac.html - # in the "You Should Know" box - log_std = -action_std * np.ones(action_dim, dtype=np.float32) - self.log_std = nn.Parameter(torch.as_tensor(log_std)) - - def _mu(self, state: torch.Tensor): - return self.layers(state) - - def _distribution(self, state: torch.Tensor): - mu = self._mu(state) - std = torch.exp(self.log_std) - try: - dist = Normal(mu, std) - except ValueError as e: - print(mu, std) - raise e - - return dist - - def forward(self, state: torch.Tensor) -> torch.Tensor: - """ Forward propagation of the actor. 
- Outputs an un-noisy un-normalized action - """ - return self._distribution(state) - - -class PolicyGradient(nn.Module): - """ PolicyGradient module that handles actions - """ - - def __init__( - self, - state_dim: int, - action_dim: int, - hidden_dims: str, - device: torch.device, - action_std: float = 0.0, - ): - super(PolicyGradient, self).__init__() - self.device = device - self.action_dim = action_dim - - self.actor = Actor( - state_dim, action_dim, hidden_dims, action_std, - ).to(device) - - def act( - self, state: torch.Tensor, stochastic: bool = True, - ) -> torch.Tensor: - """ Select noisy action according to actor - """ - pi = self.actor.forward(state) - # Should always be stochastic - if stochastic: - action = pi.sample() # if stochastic else pi.mean - else: - action = pi.mean - - return action - - def evaluate( - self, state: torch.Tensor, action: torch.Tensor - ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: - """ Get output of value function for the actions, as well as - logprob of actions and entropy of policy for loss - """ - - pi = self.actor(state) - mu, std = pi.mean, pi.stddev - action_logprob = pi.log_prob(action).sum(axis=-1) - entropy = pi.entropy() - - return action_logprob, entropy, mu, std - - def select_action( - self, state: np.array, stochastic=True, - ) -> np.ndarray: - """ Move state to torch tensor, select action and - move it back to numpy array - - Parameters: - ----------- - state: np.array - State of the environment - - Returns: - -------- - action: np.array - Action selected by the policy - """ - - if len(state.shape) < 2: - state = state[None, :] - - state = torch.as_tensor(state, dtype=torch.float32, device=self.device) - action = self.act(state, stochastic).cpu().data.numpy() - - return action - - def get_evaluation( - self, state: np.array, action: np.array - ) -> Tuple[np.array, np.array, np.array]: - """ Move state and action to torch tensor, - get value estimates for states, probabilities of actions - and entropy 
for action distribution, then move everything - back to numpy array - - Parameters: - ----------- - state: np.array - State of the environment - action: np.array - Actions taken by the policy - - Returns: - -------- - v: np.array - Value estimates for state - prob: np.array - Probabilities of actions - entropy: np.array - Entropy of policy - """ - - if len(state.shape) < 2: - state = state[None, :] - if len(action.shape) < 2: - action = action[None, :] - - state = torch.as_tensor(state, dtype=torch.float32, device=self.device) - action = torch.as_tensor( - action, dtype=torch.float32, device=self.device) - - prob, entropy, mu, std = self.evaluate(state, action) - - # REINFORCE does not use a critic - values = np.zeros((state.size()[0])) - - return ( - values, - prob.cpu().data.numpy(), - entropy.cpu().data.numpy(), - mu.cpu().data.numpy(), - std.cpu().data.numpy()) - - def load_state_dict(self, state_dict): - """ Load parameters into actor and critic - """ - actor_state_dict = state_dict - self.actor.load_state_dict(actor_state_dict) - - def state_dict(self): - """ Returns state dicts so they can be loaded into another policy - """ - return self.actor.state_dict() - - def save(self, path: str, filename: str): - """ Save policy at specified path and filename - Parameters: - ----------- - path: string - Path to folder that will contain saved state dicts - filename: string - Name of saved models. Suffixes for actors and critics - will be appended - """ - torch.save( - self.actor.state_dict(), pjoin(path, filename + "_actor.pth")) - - def load(self, path: str, filename: str): - """ Load policy from specified path and filename - Parameters: - ----------- - path: string - Path to folder containing saved state dicts - filename: string - Name of saved models. 
Suffixes for actors and critics - will be appended - """ - self.actor.load_state_dict( - torch.load(pjoin(path, filename + '_actor.pth'), - map_location=self.device)) - - def eval(self): - """ Switch actors and critics to eval mode - """ - self.actor.eval() - - def train(self): - """ Switch actors and critics to train mode - """ - self.actor.train() - - -class Critic(nn.Module): - """ Critic module that takes in a pair of state-action and outputs its - q-value according to the network's q function. TD3 uses two critics - and takes the lowest value of the two during backprop. - """ - - def __init__( - self, - state_dim: int, - action_dim: int, - hidden_dims: int, - ): - """ - Parameters: - ----------- - state_dim: int - Size of input state - action_dim: int - Size of output action - hidden_dim: int - Width of network. Presumes all intermediary - layers are of same size for simplicity - - """ - super(Critic, self).__init__() - - self.hidden_layers = format_widths(hidden_dims) - - self.layers = make_fc_network( - self.hidden_layers, state_dim, 1, activation=nn.Tanh) - - def forward(self, state) -> torch.Tensor: - """ Forward propagation of the actor. - Outputs a q estimate from first critic - """ - - return self.layers(state) - - -class ActorCritic(PolicyGradient): - """ Actor-Critic module that handles both actions and values - Actors and critics here don't share a body but do share a loss - function. 
Therefore they are both in the same module - """ - - def __init__( - self, - state_dim: int, - action_dim: int, - hidden_dims: str, - device: torch.device, - action_std: float = 0.0, - ): - super(ActorCritic, self).__init__( - state_dim, - action_dim, - hidden_dims, - device, - action_std - ) - - self.critic = Critic( - state_dim, action_dim, hidden_dims, - ).to(self.device) - - print(self) - - def evaluate( - self, state: torch.Tensor, action: torch.Tensor - ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: - """ Get output of value function for the actions, as well as - logprob of actions and entropy of policy for loss - """ - - pi = self.actor.forward(state) - mu, std = pi.mean, pi.stddev - action_logprob = pi.log_prob(action).sum(axis=-1) - entropy = pi.entropy() - values = self.critic(state).squeeze(-1) - - return values, action_logprob, entropy, mu, std - - def get_evaluation( - self, state: np.array, action: np.array - ) -> Tuple[np.array, np.array, np.array]: - """ Move state and action to torch tensor, - get value estimates for states, probabilities of actions - and entropy for action distribution, then move everything - back to numpy array - - Parameters: - ----------- - state: np.array - State of the environment - action: np.array - Actions taken by the policy - - Returns: - -------- - v: np.array - Value estimates for state - prob: np.array - Probabilities of actions - entropy: np.array - Entropy of policy - """ - - if len(state.shape) < 2: - state = state[None, :] - if len(action.shape) < 2: - action = action[None, :] - - - state = torch.as_tensor(state, dtype=torch.float32, device=self.device) - action = torch.as_tensor( - action, dtype=torch.float32, device=self.device) - - v, prob, entropy, mu, std = self.evaluate(state, action) - - return ( - v.cpu().data.numpy(), - prob.cpu().data.numpy(), - entropy.cpu().data.numpy(), - mu.cpu().data.numpy(), - std.cpu().data.numpy()) - - def load_state_dict(self, state_dict): - """ Load parameters into actor 
and critic - """ - actor_state_dict, critic_state_dict = state_dict - self.actor.load_state_dict(actor_state_dict) - self.critic.load_state_dict(critic_state_dict) - - def state_dict(self): - """ Returns state dicts so they can be loaded into another policy - """ - return self.actor.state_dict(), self.critic.state_dict() - - def save(self, path: str, filename: str): - """ Save policy at specified path and filename - Parameters: - ----------- - path: string - Path to folder that will contain saved state dicts - filename: string - Name of saved models. Suffixes for actors and critics - will be appended - """ - torch.save( - self.critic.state_dict(), pjoin(path, filename + "_critic.pth")) - torch.save( - self.actor.state_dict(), pjoin(path, filename + "_actor.pth")) - - def load(self, path: str, filename: str): - """ Load policy from specified path and filename - Parameters: - ----------- - path: string - Path to folder containing saved state dicts - filename: string - Name of saved models. Suffixes for actors and critics - will be appended - """ - self.critic.load_state_dict( - torch.load(pjoin(path, filename + '_critic.pth'), - map_location=self.device)) - self.actor.load_state_dict( - torch.load(pjoin(path, filename + '_actor.pth'), - map_location=self.device)) - - def eval(self): - """ Switch actors and critics to eval mode - """ - self.actor.eval() - self.critic.eval() - - def train(self): - """ Switch actors and critics to train mode - """ - self.actor.train() - self.critic.train() diff --git a/TrackToLearn/algorithms/shared/replay.py b/TrackToLearn/algorithms/shared/replay.py index 786da91..c55fde4 100644 --- a/TrackToLearn/algorithms/shared/replay.py +++ b/TrackToLearn/algorithms/shared/replay.py @@ -1,5 +1,4 @@ import numpy as np -import scipy.signal import torch from typing import Tuple @@ -8,264 +7,6 @@ device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -class ReplayBuffer(object): - """ Replay buffer to store transitions. 
Efficiency could probably be - improved. - - While it is called a ReplayBuffer, it is not actually one as no "Replay" - is performed. As it is used by on-policy algorithms, the buffer should - be cleared every time it is sampled. - - TODO: Add possibility to save and load to disk for imitation learning - """ - - def __init__( - self, state_dim: int, action_dim: int, n_trajectories: int, - max_traj_length: int, gamma: float, lmbda: float = 0.95 - ): - """ - Parameters: - ----------- - state_dim: int - Size of states - action_dim: int - Size of actions - n_trajectories: int - Number of learned accumulating transitions - max_traj_length: int - Maximum length of trajectories - gamma: float - Discount factor. - lmbda: float - GAE factor. - """ - self.ptr = 0 - - self.n_trajectories = n_trajectories - self.max_traj_length = max_traj_length - self.device = device - self.lens = np.zeros((n_trajectories), dtype=np.int32) - self.gamma = gamma - self.lmbda = lmbda - self.state_dim = state_dim - self.action_dim = action_dim - - # RL Buffers "filled with zeros" - self.state = np.zeros(( - self.n_trajectories, self.max_traj_length, self.state_dim)) - self.action = np.zeros(( - self.n_trajectories, self.max_traj_length, self.action_dim)) - self.next_state = np.zeros(( - self.n_trajectories, self.max_traj_length, self.state_dim)) - self.reward = np.zeros((self.n_trajectories, self.max_traj_length)) - self.not_done = np.zeros((self.n_trajectories, self.max_traj_length)) - self.values = np.zeros((self.n_trajectories, self.max_traj_length)) - self.next_values = np.zeros( - (self.n_trajectories, self.max_traj_length)) - self.probs = np.zeros((self.n_trajectories, self.max_traj_length)) - self.mus = np.zeros( - (self.n_trajectories, self.max_traj_length, self.action_dim)) - self.stds = np.zeros( - (self.n_trajectories, self.max_traj_length, self.action_dim)) - - # GAE buffers - self.ret = np.zeros((self.n_trajectories, self.max_traj_length)) - self.adv = np.zeros((self.n_trajectories, 
self.max_traj_length)) - - def add( - self, - ind: np.ndarray, - state: np.ndarray, - action: np.ndarray, - next_state: np.ndarray, - reward: np.ndarray, - done: np.ndarray, - values: np.ndarray, - next_values: np.ndarray, - probs: np.ndarray, - mus: np.ndarray, - stds: np.ndarray, - ): - """ Add new transitions to buffer in a "ring buffer" way - - Parameters: - ----------- - state: np.ndarray - Batch of states to be added to buffer - action: np.ndarray - Batch of actions to be added to buffer - next_state: np.ndarray - Batch of next-states to be added to buffer - reward: np.ndarray - Batch of rewards obtained for this transition - done: np.ndarray - Batch of "done" flags for this batch of transitions - values: np.ndarray - Batch of "old" value estimates for this batch of transitions - next_values : np.ndarray - Batch of "old" value-primes for this batch of transitions - probs: np.ndarray - Batch of "old" log-probs for this batch of transitions - - """ - self.state[ind, self.ptr] = state - self.action[ind, self.ptr] = action - - # These are actually not needed - self.next_state[ind, self.ptr] = next_state - self.reward[ind, self.ptr] = reward - self.not_done[ind, self.ptr] = (1. 
- done) - - # Values for losses - self.values[ind, self.ptr] = values - self.next_values[ind, self.ptr] = next_values - self.probs[ind, self.ptr] = probs - - self.mus[ind, self.ptr] = mus - self.stds[ind, self.ptr] = stds - - self.lens[ind] += 1 - - for j in range(len(ind)): - i = ind[j] - - if done[j]: - # Calculate the expected returns: the value function target - rew = self.reward[i, :self.ptr] - # rew = (rew - rew.mean()) / (rew.std() + 1.e-8) - self.ret[i, :self.ptr] = \ - self.discount_cumsum( - rew, self.gamma) - - # Calculate GAE-Lambda with this trick - # https://stackoverflow.com/a/47971187 - # TODO: make sure that this is actually correct - # TODO?: do it the usual way with a backwards loop - deltas = rew + \ - (self.gamma * self.next_values[i, :self.ptr] * - self.not_done[i, :self.ptr]) - \ - self.values[i, :self.ptr] - - if self.lmbda == 0: - self.adv[i, :self.ptr] = self.ret[i, :self.ptr] - \ - self.values[i, :self.ptr] - else: - self.adv[i, :self.ptr] = \ - self.discount_cumsum(deltas, self.gamma * self.lmbda) - - self.ptr += 1 - - def discount_cumsum(self, x, discount): - """ - # Taken from spinup implementation - magic from rllab for computing discounted cumulative sums of vectors. - input: - vector x, - [x0, - x1, - x2] - output: - [x0 + discount * x1 + discount^2 * x2, - x1 + discount * x2, - x2] - """ - return scipy.signal.lfilter( - [1], [1, float(-discount)], x[::-1], axis=0)[::-1] - - def sample( - self, - ) -> Tuple[ - torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor - ]: - """ Sample all transitions. - - Parameters: - ----------- - - Returns: - -------- - s: torch.Tensor - Sampled states - a: torch.Tensor - Sampled actions - ret: torch.Tensor - Sampled return estimate, target for V - adv: torch.Tensor - Sampled advantges, factor for policy update - probs: torch.Tensor - Sampled old action probabilities - """ - # TODO?: Not sample whole buffer ? Have M <= N*T ? 
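The deleted buffer's `discount_cumsum` computes discounted cumulative sums via `scipy.signal.lfilter` run over the reversed reward sequence; the explicit backward loop the docstring alludes to ("do it the usual way with a backwards loop") computes the same quantity. A small self-contained check in pure numpy, with no scipy dependency:

```python
import numpy as np

def discount_cumsum_loop(x, discount):
    """Discounted cumulative sums via the usual backward recurrence:
    out[t] = x[t] + discount * out[t + 1]."""
    out = np.zeros(len(x), dtype=np.float64)
    running = 0.0
    for t in reversed(range(len(x))):
        running = x[t] + discount * running
        out[t] = running
    return out

def discount_cumsum_scan(x, discount):
    """Same recurrence as lfilter([1], [1, -discount]) applied to the
    reversed input: a forward scan over x[::-1] accumulates the sums."""
    out = []
    running = 0.0
    for v in x[::-1]:
        running = v + discount * running
        out.append(running)
    return np.asarray(out[::-1])

rewards = np.array([1.0, 2.0, 3.0])
```

For `discount = 0.5`, the expected output is `[1 + 0.5*2 + 0.25*3, 2 + 0.5*3, 3] = [2.75, 3.5, 3.0]`, matching the docstring's `[x0 + d*x1 + d^2*x2, x1 + d*x2, x2]` pattern.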
- - # Generate indices - row, col = zip(*((i, le) - for i in range(len(self.lens)) - for le in range(self.lens[i]))) - - s, a, ret, adv, probs, mus, stds = ( - self.state[row, col], self.action[row, col], self.ret[row, col], - self.adv[row, col], self.probs[row, col], self.mus[row, col], - self.stds[row, col]) - - # Normalize advantage. Needed ? - # Trick used by OpenAI in their PPO impl - # adv = (adv - adv.mean()) / (adv.std() + 1.e-8) - - shuf_ind = np.arange(s.shape[0]) - - # Shuffling makes the learner unable to track in "two directions". - # Why ? - # np.random.shuffle(shuf_ind) - - self.clear_memory() - - return (s[shuf_ind], a[shuf_ind], ret[shuf_ind], adv[shuf_ind], - probs[shuf_ind], mus[shuf_ind], stds[shuf_ind]) - - def clear_memory(self): - """ Reset the buffer - """ - - self.lens = np.zeros((self.n_trajectories), dtype=np.int32) - self.ptr = 0 - - # RL Buffers "filled with zeros" - # TODO: Is that actually needed ? Can't just set self.ptr to 0 ? - self.state = np.zeros(( - self.n_trajectories, self.max_traj_length, self.state_dim)) - self.action = np.zeros(( - self.n_trajectories, self.max_traj_length, self.action_dim)) - self.next_state = np.zeros(( - self.n_trajectories, self.max_traj_length, self.state_dim)) - self.reward = np.zeros((self.n_trajectories, self.max_traj_length)) - self.not_done = np.zeros((self.n_trajectories, self.max_traj_length)) - self.values = np.zeros((self.n_trajectories, self.max_traj_length)) - self.next_values = np.zeros( - (self.n_trajectories, self.max_traj_length)) - self.probs = np.zeros((self.n_trajectories, self.max_traj_length)) - self.mus = np.zeros( - (self.n_trajectories, self.max_traj_length, self.action_dim)) - self.stds = np.zeros( - (self.n_trajectories, self.max_traj_length, self.action_dim)) - - # GAE buffers - self.ret = np.zeros((self.n_trajectories, self.max_traj_length)) - self.adv = np.zeros((self.n_trajectories, self.max_traj_length)) - - def __len__(self): - return np.sum(self.lens) - - def 
save_to_file(self, path): - """ TODO for imitation learning - """ - pass - - def load_from_file(self, path): - """ TODO for imitation learning - """ - pass - - class OffPolicyReplayBuffer(object): """ Replay buffer to store transitions. Implemented in a "ring-buffer" fashion. Efficiency could probably be improved @@ -292,12 +33,16 @@ def __init__( self.size = 0 # Buffers "filled with zeros" - self.state = np.zeros((self.max_size, state_dim), dtype=np.float32) - self.action = np.zeros((self.max_size, action_dim), dtype=np.float32) - self.next_state = np.zeros( - (self.max_size, state_dim), dtype=np.float32) - self.reward = np.zeros((self.max_size, 1), dtype=np.float32) - self.not_done = np.zeros((self.max_size, 1), dtype=np.float32) + self.state = torch.zeros( + (self.max_size, state_dim), dtype=torch.float32).pin_memory() + self.action = torch.zeros( + (self.max_size, action_dim), dtype=torch.float32).pin_memory() + self.next_state = torch.zeros( + (self.max_size, state_dim), dtype=torch.float32).pin_memory() + self.reward = torch.zeros( + (self.max_size, 1), dtype=torch.float32).pin_memory() + self.not_done = torch.zeros( + (self.max_size, 1), dtype=torch.float32).pin_memory() def add( self, @@ -334,6 +79,9 @@ def add( self.ptr = (self.ptr + len(ind)) % self.max_size self.size = min(self.size + len(ind), self.max_size) + def __len__(self): + return self.size + def sample( self, batch_size=4096 @@ -341,8 +89,7 @@ def sample( torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor ]: """ Off-policy sampling. Will sample min(batch_size, self.size) - transitions in an unordered way. This removes the ability to do - GAE and reward discounting after the transitions are sampled. + transitions in an unordered way. 
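The `OffPolicyReplayBuffer` above is a ring buffer: a write pointer wraps around `max_size`, `size` is capped at `max_size`, and sampling draws at most `min(batch_size, size)` transitions without replacement. A stripped-down numpy sketch of just that indexing logic (the real buffer stores pinned torch tensors and moves sampled batches to the GPU with `non_blocking=True`):

```python
import numpy as np

class RingBuffer:
    """Minimal illustration of the ring-buffer indexing, not the real class."""

    def __init__(self, state_dim, max_size):
        self.max_size = max_size
        self.state = np.zeros((max_size, state_dim), dtype=np.float32)
        self.ptr = 0    # next write position, wraps around max_size
        self.size = 0   # number of valid entries, capped at max_size

    def add(self, states):
        n = len(states)
        # Oldest entries are silently overwritten once the buffer is full
        ind = (self.ptr + np.arange(n)) % self.max_size
        self.state[ind] = states
        self.ptr = (self.ptr + n) % self.max_size
        self.size = min(self.size + n, self.max_size)

    def sample(self, batch_size, rng):
        # Draw without replacement, from the valid region only
        ind = rng.permutation(self.size)[:min(self.size, batch_size)]
        return self.state[ind]

buf = RingBuffer(state_dim=2, max_size=4)
data = np.arange(12, dtype=np.float32).reshape(6, 2)
buf.add(data[:3])
buf.add(data[3:])  # 6 rows total > max_size: the first two rows are evicted
batch = buf.sample(8, np.random.default_rng(0))
```

Requesting 8 samples from a buffer holding 4 valid rows returns only 4, mirroring the "sample min(batch_size, self.size) transitions" behaviour documented above.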
         Parameters:
         -----------
@@ -362,24 +109,22 @@ def sample(
         d: torch.Tensor
             Sampled 1-done flags
         """
-
-        ind = np.random.randint(0, self.size, size=int(batch_size))
-
-        s = torch.as_tensor(
-            self.state[ind], dtype=torch.float32, device=self.device)
-        a = torch.as_tensor(
-            self.action[ind], dtype=torch.float32, device=self.device)
-        ns = \
-            torch.as_tensor(
-                self.next_state[ind], dtype=torch.float32, device=self.device)
-        r = torch.as_tensor(
-            self.reward[ind], dtype=torch.float32, device=self.device
-        ).squeeze(-1)
-        d = torch.as_tensor(
-            self.not_done[ind], dtype=torch.float32, device=self.device
-        ).squeeze(-1)
-
-        return s, a, ns, r, d
+        ind = torch.randperm(self.size, dtype=torch.long)[
+            :min(self.size, batch_size)]
+
+        s = self.state.index_select(0, ind).pin_memory()
+        a = self.action.index_select(0, ind).pin_memory()
+        ns = self.next_state.index_select(0, ind).pin_memory()
+        r = self.reward.index_select(0, ind).squeeze(-1).pin_memory()
+        d = self.not_done.index_select(0, ind).to(
+            dtype=torch.float32).squeeze(-1).pin_memory()
+
+        # Return tensors on the same device as the buffer in pinned memory
+        return (s.to(device=self.device, non_blocking=True),
+                a.to(device=self.device, non_blocking=True),
+                ns.to(device=self.device, non_blocking=True),
+                r.to(device=self.device, non_blocking=True),
+                d.to(device=self.device, non_blocking=True))

     def clear_memory(self):
         """ Reset the buffer
diff --git a/TrackToLearn/algorithms/shared/utils.py b/TrackToLearn/algorithms/shared/utils.py
index 8613ec5..5bbf1ac 100644
--- a/TrackToLearn/algorithms/shared/utils.py
+++ b/TrackToLearn/algorithms/shared/utils.py
@@ -1,4 +1,5 @@
 import numpy as np
+import torch

 from torch import nn
@@ -12,7 +13,13 @@ def add_to_means(means, dic):


 def mean_losses(dic):
-    return {k: np.mean(dic[k]) for k in dic.keys()}
+    new_dict = {k: np.mean(torch.stack(dic[k]).cpu().numpy(), axis=0)
+                for k in dic.keys()}
+    return new_dict
+
+
+def mean_rewards(dic):
+    return {k: np.mean(np.asarray(dic[k]),
+                       axis=0) for k in dic.keys()}


 def harvest_states(i, *args):
@@ -28,7 +35,7 @@ def stack_states(full, single):


 def format_widths(widths_str):
-    return [int(i) for i in widths_str.split('-')]
+    return np.asarray([int(i) for i in widths_str.split('-')])


 def make_fc_network(
diff --git a/TrackToLearn/algorithms/td3.py b/TrackToLearn/algorithms/td3.py
index 8ef7e5a..f3c25a6 100644
--- a/TrackToLearn/algorithms/td3.py
+++ b/TrackToLearn/algorithms/td3.py
@@ -36,6 +36,8 @@ def __init__(
         lr: float = 3e-4,
         gamma: float = 0.99,
         n_actors: int = 4096,
+        batch_size: int = 2**12,
+        replay_size: int = 1e6,
         rng: np.random.RandomState = None,
         device: torch.device = "cuda:0",
     ):
@@ -69,37 +71,39 @@ def __init__(
         self.gamma = gamma

         # Initialize main policy
-        self.policy = TD3ActorCritic(
+        self.agent = TD3ActorCritic(
             input_size, action_size, hidden_dims, device,
         )

         # Initialize target policy to provide baseline
-        self.target = copy.deepcopy(self.policy)
+        self.target = copy.deepcopy(self.agent)

         # DDPG requires a different model for actors and critics
         # Optimizer for actor
         self.actor_optimizer = torch.optim.Adam(
-            self.policy.actor.parameters(), lr=lr)
+            self.agent.actor.parameters(), lr=lr)

         # Optimizer for critic
         self.critic_optimizer = torch.optim.Adam(
-            self.policy.critic.parameters(), lr=lr)
+            self.agent.critic.parameters(), lr=lr)

         # TD3-specific parameters
         self.action_std = action_std
         self.max_action = 1.
         self.noise_clip = 1.
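An aside on the `sample()` rewrite earlier in this diff: the old code drew indices with `np.random.randint` (sampling with replacement), while the new code uses `torch.randperm`, which draws unique indices and caps the batch at the buffer size. A minimal NumPy sketch of just that index-selection change (function and parameter names are illustrative, not the project's API):

```python
import numpy as np


def sample_indices(buffer_size, batch_size, rng=None):
    # Draw a batch of unique indices, capped at the buffer size.
    # Mirrors torch.randperm(size)[:min(size, batch_size)] in the diff;
    # the replaced code used randint, which samples with replacement.
    if rng is None:
        rng = np.random.default_rng(0)
    return rng.permutation(buffer_size)[:min(buffer_size, batch_size)]
```

Sampling without replacement avoids duplicate transitions within one update batch; the trade-off is that a full permutation is O(buffer size) rather than O(batch size).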
-        self.policy_freq = 2
+        self.agent_freq = 2

         # Off-policy parameters
         self.on_policy = False
         self.start_timesteps = 1000
         self.total_it = 0
         self.tau = 0.005
+        self.batch_size = batch_size
+        self.replay_size = replay_size

         # Replay buffer
         self.replay_buffer = OffPolicyReplayBuffer(
-            input_size, action_size)
+            input_size, action_size, max_size=replay_size)

         self.t = 1
         self.rng = rng
@@ -114,7 +118,7 @@ def sample_action(
         """
         # Select action according to policy + noise for exploration
-        a = self.policy.select_action(state)
+        a = self.agent.select_action(state)
         action = (
             a + self.rng.normal(
                 0, self.max_action * self.action_std,
@@ -125,8 +129,7 @@ def sample_action(
     def update(
         self,
-        replay_buffer: OffPolicyReplayBuffer,
-        batch_size: int = 2**12
+        batch,
     ) -> Tuple[float, float]:
         """
         TD3 improves upon DDPG with three additions:
@@ -153,7 +156,7 @@ def update(
         # Sample replay buffer
         state, action, next_state, reward, not_done = \
-            replay_buffer.sample(batch_size)
+            batch

         with torch.no_grad():
             # Select next action according to policy and add clipped noise
@@ -171,7 +174,7 @@ def update(
             target_Q = reward + not_done * self.gamma * target_Q

         # Get current Q estimates for s
-        current_Q1, current_Q2 = self.policy.critic(
+        current_Q1, current_Q2 = self.agent.critic(
             state, action)

         # Compute critic loss Q(s,a) - r + yQ(s',a)
@@ -195,11 +198,11 @@ def update(
         self.critic_optimizer.step()

         # Delayed policy updates
-        if self.total_it % self.policy_freq == 0:
+        if self.total_it % self.agent_freq == 0:

             # Compute actor loss -Q(s,a)
-            actor_loss = -self.policy.critic.Q1(
-                state, self.policy.actor(state)).mean()
+            actor_loss = -self.agent.critic.Q1(
+                state, self.agent.actor(state)).mean()

             losses.update({'actor_loss': actor_loss.item()})
@@ -210,14 +213,14 @@ def update(
             # Update the frozen target models
             for param, target_param in zip(
-                self.policy.critic.parameters(),
+                self.agent.critic.parameters(),
                 self.target.critic.parameters()
             ):
                 target_param.data.copy_(
                     self.tau * param.data + (1 - self.tau) * target_param.data)

             for param, target_param in zip(
-                self.policy.actor.parameters(),
+                self.agent.actor.parameters(),
                 self.target.actor.parameters()
             ):
                 target_param.data.copy_(
                     self.tau * param.data + (1 - self.tau) * target_param.data)
diff --git a/TrackToLearn/algorithms/trpo.py b/TrackToLearn/algorithms/trpo.py
deleted file mode 100644
index 68c124e..0000000
--- a/TrackToLearn/algorithms/trpo.py
+++ /dev/null
@@ -1,398 +0,0 @@
-import numpy as np
-import torch
-
-from collections import defaultdict
-from torch.distributions import Normal, kl_divergence
-from typing import Tuple
-
-from TrackToLearn.algorithms.a2c import A2C
-from TrackToLearn.algorithms.shared.onpolicy import ActorCritic
-from TrackToLearn.algorithms.shared.replay import ReplayBuffer
-from TrackToLearn.algorithms.shared.utils import (
-    add_item_to_means, mean_losses)
-
-
-# From ikostrikov's impl
-def get_flat_params_from(model):
-    params = []
-    for param in model.parameters():
-        params.append(param.data.view(-1))
-
-    flat_params = torch.cat(params)
-    return flat_params
-
-
-# From ikostrikov's impl
-def set_flat_params_to(model, flat_params):
-    prev_ind = 0
-    for param in model.parameters():
-        flat_size = int(np.prod(list(param.size())))
-        param.data.copy_(
-            flat_params[prev_ind:prev_ind + flat_size].view(param.size()))
-        prev_ind += flat_size
-
-
-def get_flat_grads(loss, params, create_graph=False, retain_graph=True):
-    grads = torch.autograd.grad(
-        loss, params, create_graph=create_graph, retain_graph=retain_graph)
-    flat_grads = torch.cat([grad.view(-1) for grad in grads])
-    return flat_grads
-
-
-# TODO : ADD TYPES AND DESCRIPTION
-
-
-class TRPO(A2C):
-    """
-    The sample-gathering and training algorithm.
-    Based on:
-
-    Schulman, J., Levine, S., Abbeel, P., Jordan, M., & Moritz, P. (2015, June).
-    Trust region policy optimization. In International conference on machine
-    learning (pp. 1889-1897). PMLR.
-
-    Implementation is based on
-    - https://github.com/openai/spinningup/blob/master/spinup/algos/tf1/trpo/trpo.py  # noqa E501
-    - https://github.com/ikostrikov/pytorch-trpo/blob/master/trpo.py
-    - https://github.com/ajlangley/trpo-pytorch
-    - https://github.com/mjacar/pytorch-trpo/blob/master/trpo_agent.py
-
-    Some alterations have been made to the algorithm so it could be fitted
-    to the tractography problem.
-
-    """
-
-    def __init__(
-        self,
-        input_size: int,
-        action_size: int,
-        hidden_dims: int,
-        action_std: float = 0.0,
-        lr: float = 3e-4,
-        gamma: float = 0.99,
-        lmbda: float = 0.99,
-        entropy_loss_coeff: float = 0.01,
-        delta: float = 0.01,
-        max_backtracks: int = 10,
-        backtrack_coeff: float = 0.05,
-        K_epochs: int = 1,
-        max_traj_length: int = 1,
-        n_actors: int = 4096,
-        rng: np.random.RandomState = None,
-        device: torch.device = "cuda:0",
-    ):
-        """
-        Parameters
-        ----------
-        input_size: int
-            Input size for the model
-        action_size: int
-            Output size for the actor
-        hidden_dims: str
-            Widths and layers of the NNs
-        lr: float
-            Learning rate for optimizer
-        gamma: float
-            Gamma parameter for future reward discounting
-        lmbda: float
-            Lambda parameter for Generalized Advantage Estimation (GAE):
-            John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan:
-            “High-Dimensional Continuous Control Using Generalized
-            Advantage Estimation”, 2015;
-            http://arxiv.org/abs/1506.02438 arXiv:1506.02438
-        entropy_loss_coeff: float
-            Entropy bonus for the actor loss
-        delta: float
-            Hyperparameter for the trust region. Controls the "distance"
-            between the new and old policies.
-        max_backtracks: int
-            Maximum number of steps to do during line search
-        backtrack_coeff: float
-            Size of step during line search
-        max_traj_length: int
-            Maximum trajectory length to store in memory.
-        K_epochs: int
-            Number of times to update on the same batch.
-        n_actors: int
-            Number of learners
-        rng: np.random.RandomState
-            rng for randomness. Should be fixed with a seed
-        device: torch.device,
-            Device to use for processing (CPU or GPU)
-        """
-
-        self.input_size = input_size
-        self.action_size = action_size
-
-        self.lr = lr
-        self.gamma = gamma
-
-        self.on_policy = True
-
-        # Declare policy
-        self.policy = ActorCritic(
-            input_size, action_size, hidden_dims, device,
-        ).to(device)
-
-        # Note the optimizer is run on the target network's params
-        # TRPO: TRPO may use L-BFGS optimization for the value function.
-        # Kinda special to TRPO
-        # self.optimizer = torch.optim.LBFGS(
-        #     self.policy.critic.parameters(), lr=lr, max_iter=25)
-
-        self.optimizer = torch.optim.Adam(
-            self.policy.critic.parameters(), lr=lr)
-
-        # TRPO-specific parameters
-        self.lmbda = lmbda
-        self.entropy_loss_coeff = entropy_loss_coeff
-        self.max_backtracks = max_backtracks
-
-        self.backtrack_coeff = backtrack_coeff
-        self.damping = 0.01
-        self.delta = delta
-
-        self.max_traj_length = max_traj_length
-        self.K_epochs = K_epochs
-
-        self.max_action = 1.
-        self.t = 1
-        self.device = device
-        self.n_actors = n_actors
-
-        # Replay buffer
-        self.replay_buffer = ReplayBuffer(
-            input_size, action_size, n_actors,
-            max_traj_length, self.gamma, self.lmbda)
-
-        self.rng = rng
-
-    def update(
-        self,
-        replay_buffer,
-        batch_size=8192,
-    ) -> Tuple[float, float]:
-        """
-        Policy update function, where we want to maximize the probability of
-        good actions and minimize the probability of bad actions
-
-        The general idea is to compare the current policy and the target
-        policies. To do so, the "ratio" is calculated by comparing the
-        probabilities of actions for both policies. The ratio is then
-        multiplied by the "advantage", which is how much better than average
-        the policy performs.
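The ratio-times-advantage objective sketched in the docstring above can be written compactly. A NumPy sketch of the surrogate loss under assumed 1-D log-probability and advantage arrays (illustrative only, not the deleted module's code):

```python
import numpy as np


def surrogate_loss(logp_new, logp_old, advantage):
    # ratio = pi_new(a|s) / pi_old(a|s), computed in log space for stability
    ratio = np.exp(logp_new - logp_old)
    # The objective E[ratio * A] is maximized, so its negation is a loss
    return -(ratio * advantage).mean()
```

When the two policies coincide the ratio is 1 everywhere and the loss reduces to the negated mean advantage, which is the sanity check used below.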
-
-        Therefore:
-        - actions with a high probability and positive advantage will
-          be made a lot more likely
-        - actions with a low probability and positive advantage will be made
-          more likely
-        - actions with a high probability and negative advantage will be
-          made a lot less likely
-        - actions with a low probability and negative advantage will be made
-          less likely
-
-        TRPO adds a twist to this where, since the advantage estimation is done
-        with your (potentially bad) networks, a "pessimistic view" is used
-        where gains will be clamped, so that high gradients (for very probable
-        or with a high-amplitude advantage) are tamed. This is to prevent your
-        network from diverging too much in the early stages
-
-        Parameters
-        ----------
-        replay_buffer: ReplayBuffer
-            Replay buffer that contains transitions
-
-        Returns
-        -------
-        losses: dict
-            Dict. containing losses and training-related metrics.
-        """
-
-        # Sample replay buffer
-        s, a, ret, adv, p, mu, std = \
-            replay_buffer.sample()
-
-        running_losses = defaultdict(list)
-
-        for i in range(0, len(s), batch_size):
-            j = i + batch_size
-
-            state = torch.FloatTensor(s[i:j]).to(self.device)
-            action = torch.FloatTensor(a[i:j]).to(self.device)
-            returns = torch.FloatTensor(ret[i:j]).to(self.device)
-            advantage = torch.FloatTensor(adv[i:j]).to(self.device)
-            old_prob = torch.FloatTensor(p[i:j]).to(self.device)
-            old_mu = torch.FloatTensor(mu[i:j]).to(self.device)
-            old_std = torch.FloatTensor(std[i:j]).to(self.device)
-
-            # Here be dragons
-
-            def get_kl():
-
-                def kl(mu, std):
-                    return kl_divergence(
-                        Normal(old_mu.detach(), old_std.detach()),
-                        Normal(mu, std)).mean()
-
-                return kl
-
-            def get_loss():
-                def loss(policy):
-                    _, logprob, entropy, mu, std = policy.evaluate(
-                        state,
-                        action)
-
-                    # TRPO "pessimistic" policy loss
-                    ratio = torch.exp(logprob - old_prob)
-                    policy_loss = (-advantage * ratio).mean()
-                    # Entropy "loss" to promote entropy in the policy
-                    entropy_loss = -self.entropy_loss_coeff * entropy.mean()
-                    actor_loss = policy_loss + entropy_loss
-                    return actor_loss, mu, std, entropy_loss
-
-                return loss
-
-            def get_hessian(kl):
-                """ Compute Hx, in a flattened version
-                x is the grad of the actor loss
-                """
-
-                flat_grad_kl = get_flat_grads(
-                    kl, self.policy.actor.parameters(), create_graph=True)
-
-                def Hx(x):
-
-                    kl_v = (flat_grad_kl @ x.clone())
-                    flat_grad_grad_kl = get_flat_grads(
-                        kl_v, self.policy.actor.parameters())
-
-                    return flat_grad_grad_kl.detach() + (self.damping * x)
-
-                return Hx
-
-            def compute_conjugate_gradients(b, Hx, nsteps=10):
-                """ Compute conjugate gradient of the actor loss gradient
-                https://en.wikipedia.org/wiki/Conjugate_gradient_method#The_resulting_algorithm  # noqa E501
-                """
-                x = torch.zeros(b.size(), device=self.device)
-                p = b.clone()
-                r = b.clone()  # - Ax, but Ax = 0 with x = 0
-                rr = torch.dot(r, r)
-                for i in range(nsteps):
-                    Ap = Hx(p)
-                    alpha = rr / (torch.dot(p, Ap) + 1e-8)
-                    x += alpha * p
-                    r -= alpha * Ap
-                    rr_p = torch.dot(r, r)
-                    if rr_p < 1e-10:
-                        break
-                    p = r + (rr_p / rr) * p
-                    rr = rr_p
-                return x
-
-            def get_step(g, Hx, delta):
-                return torch.sqrt(2 * delta / torch.matmul(g, Hx(g)))
-
-            def linesearch(step, kl, old_params, old_loss):
-                # to start backtrack at 1.
-                step_size = 1. / self.backtrack_coeff
-
-                for i in np.arange(self.max_backtracks):
-                    step_size *= self.backtrack_coeff
-                    new_params = old_params + (step * step_size)
-                    set_flat_params_to(self.policy.actor, new_params)
-                    with torch.no_grad():
-                        pi_loss, mu, std, entropy = loss_fn(self.policy)
-                        kl_mean = kl(mu, std)
-                    expected_improve = expected * step_size
-                    actual_improvement = old_loss - pi_loss
-                    ratio = actual_improvement / expected_improve
-
-                    # set_flat_params_to(self.policy.actor, old_params)
-                    kl_cond = kl_mean <= self.delta
-                    ratio_cond = ratio > 0.1
-                    improve_cond = actual_improvement > 0.
-                    if kl_cond and ratio_cond and improve_cond:
-                        # print('Found suitable step', step_size)
-                        # print('Improv', ratio)
-                        return pi_loss, step_size, kl_mean, entropy
-                print('Linesearch failed', ratio, kl_mean)
-                return old_loss, step_size, kl_mean, entropy
-
-            loss_fn = get_loss()
-
-            actor_loss, old_mu, old_std, entropy = loss_fn(self.policy)
-            kl = get_kl()
-            kl_mean = kl(old_mu, old_std)
-            loss_grad = get_flat_grads(
-                actor_loss, self.policy.actor.parameters())
-
-            Hx = get_hessian(kl_mean)
-
-            # OpenAI baseline update
-            step = compute_conjugate_gradients(-loss_grad, Hx)
-            max_step_coeff = (2 * self.delta / (step @ Hx(step)))**(0.5)
-            max_trpo_step = max_step_coeff * step
-
-            # shs = 0.5 * torch.matmul(g, Hx(g))
-            # lm = torch.sqrt(shs / self.delta)
-            # max_step = g / lm
-
-            expected = -loss_grad @ max_trpo_step
-
-            old_params = get_flat_params_from(self.policy.actor)
-            actor_loss, step_size, kl_mean, entropy = linesearch(
-                max_trpo_step, kl, old_params, actor_loss)
-
-            set_flat_params_to(
-                self.policy.actor, old_params + (max_trpo_step * step_size))
-
-            # TODO?: Iterate on all data before K ?
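The `compute_conjugate_gradients` helper in the deleted code above solves `H x = g` using only Hessian-vector products, never materializing `H`. A standalone NumPy sketch of the same algorithm (the `Hx` callable here wraps an explicit matrix only for demonstration; in TRPO it is the KL/Fisher Hessian-vector product):

```python
import numpy as np


def conjugate_gradient(Hx, b, nsteps=10, tol=1e-10):
    # Solve H x = b given only the matrix-vector product Hx(v).
    x = np.zeros_like(b)
    r = b.copy()   # residual b - H x, with x = 0 initially
    p = b.copy()   # search direction
    rr = r @ r
    for _ in range(nsteps):
        Ap = Hx(p)
        alpha = rr / (p @ Ap + 1e-8)   # step length along p
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if rr_new < tol:               # residual small enough: converged
            break
        p = r + (rr_new / rr) * p      # next conjugate direction
        rr = rr_new
    return x
```

For a symmetric positive-definite `H` of dimension `n`, the method converges in at most `n` iterations in exact arithmetic, which is why a small `nsteps` suffices for the trust-region step.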
-            for _ in range(self.K_epochs):
-
-                # To use with L-BFGS
-
-                # def critic_step():
-                #     # V_pi'(s) and pi'(a|s)
-                #     v_s, *_ = self.policy.evaluate(
-                #         state,
-                #         action)
-                #     # TRPO critic loss
-                #     critic_loss = ((returns - v_s) ** 2).mean()
-                #     # Critic gradient step
-                #     self.optimizer.zero_grad()
-                #     critic_loss.backward()
-                #     return critic_loss
-
-                # self.optimizer.step(critic_step)
-
-                v, *_ = self.policy.evaluate(
-                    state,
-                    action)
-
-                # TRPO critic loss
-                critic_loss = ((returns - v) ** 2).mean()
-
-                # Critic gradient step
-                self.optimizer.zero_grad()
-                critic_loss.backward()
-
-                self.optimizer.step()
-
-            # TODO: Better loss and metric logging
-            losses = {'actor_loss': actor_loss.item(),
-                      'critic_loss': critic_loss.item(),
-                      'advantage': advantage.mean().item(),
-                      'step_size': step_size,
-                      'max_trpo_step': max_trpo_step.mean().item(),
-                      'returns': returns.mean().item(),
-                      'adv': advantage.mean().item(),
-                      'v': v.mean().item(),
-                      'entropy': entropy.item(),
-                      'ret': returns.mean().item(),
-                      'kl_mean': kl_mean.item()}
-
-            running_losses = add_item_to_means(running_losses, losses)
-
-        return mean_losses(running_losses)
diff --git a/TrackToLearn/algorithms/vpg.py b/TrackToLearn/algorithms/vpg.py
deleted file mode 100644
index 3df697f..0000000
--- a/TrackToLearn/algorithms/vpg.py
+++ /dev/null
@@ -1,246 +0,0 @@
-import numpy as np
-import torch
-
-from collections import defaultdict
-
-from typing import Tuple
-
-from TrackToLearn.algorithms.rl import RLAlgorithm
-from TrackToLearn.algorithms.shared.onpolicy import PolicyGradient
-from TrackToLearn.algorithms.shared.replay import ReplayBuffer
-from TrackToLearn.algorithms.shared.utils import (
-    add_item_to_means, mean_losses)
-from TrackToLearn.environments.env import BaseEnv
-
-
-class VPG(RLAlgorithm):
-    """
-    The sample-gathering and training algorithm.
-    Based on:
-
-    Sutton, R. S., McAllester, D., Singh, S., & Mansour, Y. (1999).
-    Policy gradient methods for reinforcement learning with function
-    approximation. Advances in neural information processing systems, 12.
-
-    Ratio and clipping were removed from PPO to obtain VPG.
-
-    Some alterations have been made to the algorithm so it could be fitted
-    to the tractography problem.
-
-    """
-
-    def __init__(
-        self,
-        input_size: int,
-        action_size: int,
-        hidden_dims: str,
-        action_std: float = 0.0,
-        lr: float = 3e-4,
-        gamma: float = 0.99,
-        entropy_loss_coeff: float = 0.0001,
-        max_traj_length: int = 1,
-        n_actors: int = 4096,
-        rng: np.random.RandomState = None,
-        device: torch.device = "cuda:0",
-    ):
-        """
-        Parameters
-        ----------
-        input_size: int
-            Input size for the model
-        action_size: int
-            Output size for the actor
-        hidden_dims: str
-            Widths and layers of the NNs
-        lr: float
-            Learning rate for optimizer
-        gamma: float
-            Gamma parameter for future reward discounting
-        entropy_loss_coeff: float
-            Entropy bonus for the actor loss
-        max_traj_length: int
-            Maximum trajectory length to store in memory.
-        n_actors: int
-            Number of learners
-        rng: np.random.RandomState
-            rng for randomness. Should be fixed with a seed
-        device: torch.device,
-            Device to use for processing (CPU or GPU)
-        """
-
-        self.input_size = input_size
-        self.action_size = action_size
-        self.lr = lr
-        self.gamma = gamma
-        self.max_traj_length = max_traj_length
-        self.entropy_loss_coeff = entropy_loss_coeff
-
-        # Declare policy
-        self.policy = PolicyGradient(
-            input_size, action_size, hidden_dims, device, action_std
-        ).to(device)
-
-        # Optimizer for actor
-        self.optimizer = torch.optim.Adam(
-            self.policy.parameters(), lr=lr)
-
-        # Replay buffer
-        self.replay_buffer = ReplayBuffer(
-            input_size, action_size, n_actors, max_traj_length,
-            self.gamma, lmbda=0.)
-
-        self.on_policy = True
-        self.max_action = 1.
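The VPG actor loss used later in this deleted module is the plain REINFORCE objective with an entropy bonus. A NumPy sketch that computes only the scalar loss value (no autograd; names and 1-D array shapes are assumptions for illustration):

```python
import numpy as np


def vpg_loss(log_prob, returns, entropy, entropy_coeff=1e-4):
    # REINFORCE: maximize E[log pi(a|s) * R], so negate it for a loss;
    # subtract a small entropy bonus to encourage exploration.
    return -(log_prob * returns).mean() - entropy_coeff * entropy.mean()
```

Higher returns on likely actions push the loss down, which is exactly the "make good actions more probable" behaviour the docstrings in this diff describe.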
-        self.t = 1
-
-        self.n_actors = n_actors
-        self.rng = rng
-        self.device = device
-
-    def _episode(
-        self,
-        initial_state: np.ndarray,
-        env: BaseEnv,
-    ) -> Tuple[float, float, float, int]:
-        """
-        Main loop for the algorithm
-        From a starting state, run the model until the env. says it's done.
-        Gather transitions and train on them according to the RL algorithm's
-        rules.
-
-        Parameters
-        ----------
-        initial_state: np.ndarray
-            Initial state of the environment
-        env: BaseEnv
-            The environment actions are applied on. Provides the state fed to
-            the RL algorithm
-
-        Returns
-        -------
-        running_reward: float
-            Cumulative training steps reward
-        actor_loss: float
-            Policy gradient loss of actor
-        critic_loss: float
-            MSE loss of critic
-        episode_length: int
-            Length of episode aka how many transitions were gathered
-        """
-
-        running_reward = 0
-        state = initial_state
-        done = False
-        running_losses = defaultdict(list)
-
-        episode_length = 0
-        indices = np.asarray(range(state.shape[0]))
-
-        while not np.all(done):
-
-            # Select action according to policy
-            # Noise is already added by the policy
-            action = self.policy.select_action(
-                state, stochastic=True)
-
-            self.t += action.shape[0]
-
-            v, prob, _, mu, std = self.policy.get_evaluation(
-                state,
-                action)
-
-            # Perform action
-            next_state, reward, done, _ = env.step(action)
-
-            vp, *_ = self.policy.get_evaluation(
-                next_state,
-                action)
-
-            # Set next state as current state
-            running_reward += sum(reward)
-
-            # Store data in replay buffer
-            self.replay_buffer.add(
-                indices, state.cpu().numpy(), action, next_state.cpu().numpy(),
-                reward, done, v, vp, prob, mu, std)
-
-            # "Harvesting" here means removing "done" trajectories
-            # from state as well as removing the associated streamlines
-            state, idx = env.harvest(next_state)
-
-            indices = indices[idx]
-
-            # Keeping track of episode length
-            episode_length += 1
-
-        losses = self.update(
-            self.replay_buffer)
-        running_losses = add_item_to_means(running_losses, losses)
-
-        return (
-            running_reward,
-            running_losses,
-            episode_length)
-
-    def update(
-        self,
-        replay_buffer,
-        batch_size=4096
-    ) -> Tuple[float, float]:
-        """
-        Policy update function, where we want to maximize the probability of
-        good actions and minimize the probability of bad actions
-
-        Therefore:
-        - actions with a high probability and positive advantage will
-          be made a lot more likely
-        - actions with a low probability and positive advantage will be made
-          more likely
-        - actions with a high probability and negative advantage will be
-          made a lot less likely
-        - actions with a low probability and negative advantage will be made
-          less likely
-
-        Parameters
-        ----------
-        replay_buffer: ReplayBuffer
-            Replay buffer that contains transitions
-        batch_size: int
-            Batch size to update the actor
-
-        Returns
-        -------
-        losses: dict
-            Dict. containing losses and training-related metrics.
-        """
-
-        # Sample replay buffer
-        s, a, ret, *_ = \
-            replay_buffer.sample()
-
-        running_losses = defaultdict(list)
-
-        for i in range(0, len(s), min(len(s), batch_size)):
-            j = i + batch_size
-            state = torch.FloatTensor(s[i:j]).to(self.device)
-            action = torch.FloatTensor(a[i:j]).to(self.device)
-            returns = torch.FloatTensor(ret[i:j]).to(self.device)
-
-            log_prob, entropy, *_ = self.policy.evaluate(state, action)
-
-            # VPG policy loss
-            actor_loss = -(log_prob * returns).mean() + \
-                -self.entropy_loss_coeff * entropy.mean()
-
-            losses = {'actor_loss': actor_loss.item(),
-                      'returns': returns.mean().item(),
-                      'entropy': entropy.mean().item()}
-
-            running_losses = add_item_to_means(running_losses, losses)
-
-            # Gradient step
-            self.optimizer.zero_grad()
-            actor_loss.backward()
-            self.optimizer.step()
-
-        return mean_losses(running_losses)
diff --git a/TrackToLearn/datasets/GymDataset.py b/TrackToLearn/datasets/GymDataset.py
deleted file mode 100644
index d858573..0000000
--- a/TrackToLearn/datasets/GymDataset.py
+++ /dev/null
@@ -1,59 +0,0 @@
-import d4rl
-import gym
-import torch
-
-import numpy as np
-
-from torch.utils.data import Dataset
-
-
-class GymDataset(Dataset):
-    """
-    Dataset that loads transitions from a d4rl gym environment
-    """
-
-    def __init__(
-            self, env_name: str):
-        """
-        Args:
-        """
-        self.env_name = env_name
-        self.data = self.get_dataset(env_name)
-
-    def get_dataset(self, env_name):
-        env = gym.make(env_name).unwrapped
-        dataset = d4rl.qlearning_dataset(env)
-        return dict(
-            states=dataset['observations'],
-            actions=dataset['actions'],
-            next_states=dataset['next_observations'],
-            rewards=dataset['rewards'],
-            dones=dataset['terminals'].astype(np.float32),
-        )
-
-    def get_one_input(self):
-
-        return self.data['states'][0]
-
-    def __getitem__(self, index):
-        """This method loads, transforms and returns the slice corresponding
-        to the given index.
-        :arg
-            index: the index of the slice within patient data
-        :return
-            A tuple (input, target)
-        """
-        states = self.data['states'][index][None, ...]
-        actions = self.data['actions'][index][None, ...]
-        rewards = np.array([self.data['rewards'][index]])
-        next_states = self.data['next_states'][index][None, ...]
-        dones = np.array([self.data['dones'][index]], dtype=float)
-        states, actions, rewards, next_states, dones = map(
-            torch.from_numpy,
-            [states, actions, rewards, next_states, dones])
-        return states, actions, rewards, next_states, dones
-
-    def __len__(self):
-        """
-        return the length of the dataset
-        """
-        return int(len(self.data['states']))
diff --git a/TrackToLearn/datasets/StreamlineDataset.py b/TrackToLearn/datasets/StreamlineDataset.py
deleted file mode 100644
index 3457176..0000000
--- a/TrackToLearn/datasets/StreamlineDataset.py
+++ /dev/null
@@ -1,251 +0,0 @@
-import h5py
-import numpy as np
-import torch
-
-from collections import defaultdict
-
-from dwi_ml.data.processing.space.world_to_vox import convert_world_to_vox
-from nibabel.streamlines import Tractogram
-from torch.utils.data import Dataset
-
-from TrackToLearn.datasets.utils import SubjectData
-from TrackToLearn.environments.utils import (
-    get_neighborhood_directions, format_state)
-from TrackToLearn.environments.reward import (
-    reward_streamlines_step,
-)
-from TrackToLearn.utils.utils import normalize_vectors
-
-device = "cpu"
-
-
-class StreamlineDataset(Dataset):
-    """
-    class that loads hdf5 dataset object
-    """
-
-    def __init__(
-        self, file_path: str, dataset_split: str, n_dirs=1,
-        add_neighborhood=True, dense_rewards=False, reward_scaling=1.0,
-        reward_shift=0.0, noise=0.0, device=None
-    ):
-        """
-        Args:
-        """
-        self.file_path = file_path
-        self.split = dataset_split
-        self.n_dirs = n_dirs
-        self.add_neighborhood = add_neighborhood
-        self.dense_rewards = dense_rewards
-        self.reward_scaling = reward_scaling
-        self.reward_shift = reward_shift
-        self.local_reward = False
-        self.noise = noise
-        with h5py.File(self.file_path, 'r') as f:
-            self.normalize = f.attrs['normalize']
-            self.step_size = float(f.attrs['step_size'])
-            self.subject_list = list(f[dataset_split].keys())
-            self.indexes, self.rev_indexes, self.lengths = \
-                self._build_indexes(f, dataset_split)
-            self.state_size = self._compute_state_size(f)
-
-        # print(self.dense_rewards)
-
-    def _build_indexes(self, dataset_file, split):
-        """
-        """
-        print('Building indexes')
-        set_list = list()
-        lengths = []
-        rev_index = defaultdict(list)
-
-        split_set = dataset_file[split]
-        for subject in list(split_set.keys()):
-            if subject != 'transitions':
-                streamlines = SubjectData.from_hdf_subject(
-                    split_set, subject).sft.streamlines
-                for i in range(len(streamlines)):
-                    k = (subject, i)
-                    rev_index[subject].append((len(set_list), i))
-
-                    set_list.append(k)
-                lengths.extend(streamlines._lengths)
-
-        print('Done')
-        return set_list, rev_index, lengths
-
-    @property
-    def archives(self):
-        if not hasattr(self, 'f'):
-            self.f = h5py.File(self.file_path, 'r')
-        return self.f
-
-    def _compute_state_size(self, f):
-        subject, strml_idx = self.indexes[0]
-        subject_data = SubjectData.from_hdf_subject(f[self.split], subject)
-        data_volume = subject_data.input_dv.data
-
-        signal_shape = data_volume.data.shape[-1]
-
-        if self.add_neighborhood:
-            signal_shape *= 7
-
-        signal_shape += (3 * self.n_dirs)
-        return signal_shape
-
-    def get_one_input(self):
-
-        state_0, *_ = self[0]
-        self.f.close()
-        del self.f
-        return state_0[0]
-
-    def __getitem__(self, index):
-        """This method loads, transforms and returns the slice corresponding
-        to the given index.
- :arg - index: the index of the slice within patient data - :return - A tuple (input, target) - """ - # return index - - # Map streamline total index -> subject.streamline_id - subject, strml_idx = self.indexes[index] - f = self.archives[self.split] - subject_data = SubjectData.from_hdf_subject(f, subject) - sft = subject_data.sft.as_sft(strml_idx) - sft.to_vox() - streamline = sft.streamlines[0] - - if self.noise > 0.0: - streamline = streamline + np.random.normal( - loc=0.0, scale=self.noise, size=streamline.shape) - - data_volume = torch.from_numpy( - subject_data.input_dv.data) - - # Compute neighborhood positions to add to all streamline chunks, in - # voxel space. - input_dv_affine_vox2rasmm = subject_data.input_dv.affine_vox2rasmm - if self.add_neighborhood: - step_size_vox = convert_world_to_vox( - self.step_size, - input_dv_affine_vox2rasmm) - neighborhood_directions = torch.tensor( - get_neighborhood_directions( - radius=step_size_vox), - dtype=torch.float16) - - max_len = len(streamline) - streamlines = np.tile(streamline[0], (max_len, max_len, 1)) - for i in range(1, max_len + 1): - streamlines[i-1, -i:, :] = streamline[:i, :] - - # self.render(subject_data.peaks, streamlines) - - signals = torch.from_numpy(format_state( - streamlines, - data_volume, - step_size_vox, - neighborhood_directions, - 1, - self.n_dirs, - device).astype(np.float32)) - - states = signals[:-1] - next_states = signals[1:] - - directions = np.diff(streamline, axis=0) - - actions = torch.from_numpy(directions.astype(np.float32)) - - if self.local_reward: - reward = torch.from_numpy(reward_streamlines_step( - streamlines, - subject_data.peaks, - subject_data.csf, - subject_data.gm, - 200, - 60, - 2, - 1.0, - 0.0, - 0.0, - 0.0, - 0.0, - 0.0, - subject_data.input_dv.affine_vox2rasmm - ).astype(np.float32)) - - elif self.dense_rewards: - reward = torch.from_numpy( - np.repeat(subject_data.rewards[strml_idx], - states.shape[0]).astype(np.float32)) - reward *= self.reward_scaling - 
else: - reward = torch.zeros(states.shape[0], dtype=torch.float32) - - reward[-1] = subject_data.rewards[strml_idx] * self.reward_scaling - reward += self.reward_shift - - dones = torch.zeros((reward.shape), dtype=torch.float32) - dones[-1] = 1. - - assert len(states) == len(actions), (len( - states), len(actions), len(streamline)) - return states, actions, reward, next_states, dones - - def __len__(self): - """ - return the length of the dataset - """ - return int(len(self.indexes)) - - def render( - self, - peaks, - streamline - ): - """ Debug function - - Parameters: - ----------- - tractogram: Tractogram, optional - Object containing the streamlines and seeds - path: str, optional - If set, save the image at the specified location instead - of displaying directly - """ - from fury import window, actor - # Might be rendering from outside the environment - tractogram = Tractogram( - streamlines=streamline, - data_per_streamline={ - 'seeds': streamline[:, 0, :] - }) - - # Reshape peaks for displaying - X, Y, Z, M = peaks.data.shape - peaks = np.reshape(peaks.data, (X, Y, Z, 5, M//5)) - - # Setup scene and actors - scene = window.Scene() - - stream_actor = actor.streamtube(tractogram.streamlines) - peak_actor = actor.peak_slicer(peaks, - np.ones((X, Y, Z, M)), - colors=(0.2, 0.2, 1.), - opacity=0.5) - dot_actor = actor.dots(tractogram.data_per_streamline['seeds'], - color=(1, 1, 1), - opacity=1, - dot_size=2.5) - scene.add(stream_actor) - scene.add(peak_actor) - scene.add(dot_actor) - scene.reset_camera_tight(0.95) - - showm = window.ShowManager(scene, reset_camera=True) - showm.initialize() - showm.start() diff --git a/TrackToLearn/datasets/SubjectDataset.py b/TrackToLearn/datasets/SubjectDataset.py new file mode 100644 index 0000000..a47cdfe --- /dev/null +++ b/TrackToLearn/datasets/SubjectDataset.py @@ -0,0 +1,62 @@ +import h5py + +from torch.utils.data import Dataset + +from TrackToLearn.datasets.utils import SubjectData + +device = "cpu" + + +class 
SubjectDataset(Dataset): + """ + + """ + + def __init__( + self, file_path: str, dataset_split: str, + ): + """ + Args: + """ + self.file_path = file_path + self.split = dataset_split + with h5py.File(self.file_path, 'r') as f: + self.subjects = list(f[dataset_split].keys()) + + @property + def archives(self): + if not hasattr(self, 'f'): + self.f = h5py.File(self.file_path, 'r')[self.split] + return self.f + + def __getitem__(self, index): + """ + """ + + # return index + subject_id = self.subjects[index] + + tracto_data = SubjectData.from_hdf_subject( + self.archives, subject_id) + + tracto_data.input_dv.subject_id = subject_id + input_volume = tracto_data.input_dv + + # Load peaks for reward + peaks = tracto_data.peaks + + # Load tracking mask + tracking_mask = tracto_data.tracking + + seeding = tracto_data.seeding + + reference = tracto_data.reference + + return (subject_id, input_volume, tracking_mask, + seeding, peaks, reference) + + def __len__(self): + """ + return the length of the dataset + """ + return len(self.subjects) diff --git a/TrackToLearn/datasets/create_dataset.py b/TrackToLearn/datasets/create_dataset.py index 1311b10..8ce8606 100644 --- a/TrackToLearn/datasets/create_dataset.py +++ b/TrackToLearn/datasets/create_dataset.py @@ -12,7 +12,6 @@ from nibabel.nifti1 import Nifti1Image from scilpy.io.utils import add_sh_basis_args -from TrackToLearn.datasets.processing import min_max_normalize_data_volume from TrackToLearn.utils.utils import ( Timer) @@ -25,84 +24,42 @@ """ -def parse_args(): - - parser = argparse.ArgumentParser( - description=parse_args.__doc__, - formatter_class=RawTextHelpFormatter) - parser.add_argument('path', type=str, - help='Location of the dataset files.') - parser.add_argument('config_file', type=str, - help="Configuration file to load subjects and their" - " volumes.") - parser.add_argument('output', type=str, - help="Output filename including path") - parser.add_argument('--normalize', action='store_true', - help='If set, 
normalize first input signal.') - - basis_group = parser.add_argument_group('Basis options') - add_sh_basis_args(basis_group) - - arguments = parser.parse_args() - if arguments.sh_basis == 'tournier07': - parser.error('Only descoteaux07 basis is supported') - return arguments - - -def main(): - """ Parse args, generate dataset and save it on disk """ - args = parse_args() - - with Timer("Generating dataset", newline=True): - generate_dataset(path=args.path, - config_file=args.config_file, - output=args.output, - normalize=args.normalize) - - def generate_dataset( - path: str, config_file: str, output: str, - normalize: bool = False, ) -> None: """ Generate a dataset Args: config_file: output: - normalize: """ - dataset_name = output - - # Clean existing processed files - dataset_file = "{}.hdf5".format(dataset_name) - # Initialize database - with h5py.File(dataset_file, 'w') as hdf_file: + with h5py.File(output, 'w') as hdf_file: # Save version hdf_file.attrs['version'] = 2 - hdf_file.attrs['normalize'] = normalize is True - with open(join(path, config_file), "r") as conf: + with open(config_file, "r") as conf: config = json.load(conf) - + print("Processing training subjects") add_subjects_to_hdf5( - path, config, hdf_file, "training", normalize) + config, hdf_file, "training") + print("Processing validation subjects") add_subjects_to_hdf5( - path, config, hdf_file, "validation", normalize) + config, hdf_file, "validation") + print("Processing test subjects") add_subjects_to_hdf5( - path, config, hdf_file, "testing", normalize) + config, hdf_file, "testing") - print("Saved dataset : {}".format(dataset_file)) + print("Saved dataset : {}".format(output)) def add_subjects_to_hdf5( - path, config, hdf_file, dataset_split, normalize, + config, hdf_file, dataset_split, ): """ @@ -110,7 +67,6 @@ def add_subjects_to_hdf5( config: hdf_file: dataset_split: - normalize: """ @@ -124,83 +80,70 @@ def add_subjects_to_hdf5( subject_config = config[dataset_split][subject_id] 
hdf_subject = hdf_split.create_group(subject_id) - add_subject_to_hdf5(path, subject_config, hdf_subject, normalize) + add_subject_to_hdf5(subject_config, hdf_subject) def add_subject_to_hdf5( - path, config, hdf_subject, normalize, + config, hdf_subject, ): """ Args: config: hdf_subject: - normalize: """ input_files = config['inputs'] peaks_file = config['peaks'] - wm_file = config['wm'] - gm_file = config['gm'] - csf_file = config['csf'] - interface_file = config['interface'] - include_file = config['include'] - exclude_file = config['exclude'] + tracking_file = config['tracking'] + seeding_file = config['seeding'] + anat_file = config['anat'] # Process subject's data - process_subject(hdf_subject, path, input_files, peaks_file, wm_file, - gm_file, csf_file, interface_file, include_file, - exclude_file, normalize) + process_subject(hdf_subject, input_files, peaks_file, tracking_file, + seeding_file, anat_file) def process_subject( hdf_subject, - path: str, inputs: str, peaks: str, - wm: str, - gm: str, - csf: str, - interface: str, - include: str, - exclude: str, - normalize: bool, + tracking: str, + seeding: str, + anat: str, ): + """ Process a subject's data and save it in the hdf5 file. + + Parameters + ---------- + hdf_subject : h5py.Group + HDF5 group to save the data. + inputs : list of str + List of input files. + peaks : str + Peaks file. + tracking : str + Tracking mask file. + seeding : str + Seeding mask file. + anat : str + Anatomical file. 
""" - Args: - hdf_subject: - inputs: - peaks: - wm: - gm: - csf: - interface: - include: - exclude: - normalize: - - """ - - ref_volume = nib.load(join(path, inputs[0])) + ref_volume = nib.load(inputs[0]) affine = ref_volume.affine header = ref_volume.header - input_volumes = [nib.load(join(path, f)).get_fdata() for f in inputs] + input_volumes = [nib.load(f).get_fdata() for f in inputs] print('Using as inputs', inputs) for i, v in enumerate(input_volumes): if len(v.shape) == 3: input_volumes[i] = v[..., None] - - if normalize: - print('Normalizing first signal volume') - input_volume = min_max_normalize_data_volume(input_volumes[0]) - else: - input_volume = input_volumes[0] + input_volume = input_volumes[0] signal = np.concatenate([input_volume] + input_volumes[1:], axis=-1) - # Save processed data + signal_image = Nifti1Image( signal, affine, @@ -208,36 +151,30 @@ def process_subject( add_volume_to_hdf5(hdf_subject, signal_image, 'input_volume') - peaks_image = nib.load(join(path, peaks)) + peaks_image = nib.load(peaks) add_volume_to_hdf5(hdf_subject, peaks_image, 'peaks_volume') - wm_mask_image = nib.load(join(path, wm)) - add_volume_to_hdf5(hdf_subject, wm_mask_image, 'wm_volume') - - gm_mask_image = nib.load(join(path, gm)) - add_volume_to_hdf5(hdf_subject, gm_mask_image, 'gm_volume') - - csf_mask_image = nib.load(join(path, csf)) - add_volume_to_hdf5(hdf_subject, csf_mask_image, 'csf_volume') + tracking_mask_image = nib.load(tracking) + add_volume_to_hdf5(hdf_subject, tracking_mask_image, 'tracking_volume') - interface_mask_image = nib.load(join(path, interface)) - add_volume_to_hdf5(hdf_subject, interface_mask_image, 'interface_volume') + seeding_mask_image = nib.load(seeding) + add_volume_to_hdf5(hdf_subject, seeding_mask_image, 'seeding_volume') - include_mask_image = nib.load(join(path, include)) - add_volume_to_hdf5(hdf_subject, include_mask_image, 'include_volume') - - exclude_mask_image = nib.load(join(path, exclude)) - add_volume_to_hdf5(hdf_subject, 
exclude_mask_image, 'exclude_volume') + anat_image = nib.load(anat) + add_volume_to_hdf5(hdf_subject, anat_image, 'anat_volume') def add_volume_to_hdf5(hdf_subject, volume_img, volume_name): - """ - - Args: - hdf_subject: - volume_img: - volume_name: - + """ Add a volume to the hdf5 file. + + Parameters + ---------- + hdf_subject : h5py.Group + HDF5 group to save the data. + volume_img : nibabel.Nifti1Image + Volume to save. + volume_name : str + Name of the volume. """ hdf_input_volume = hdf_subject.create_group(volume_name) @@ -245,5 +182,34 @@ def add_volume_to_hdf5(hdf_subject, volume_img, volume_name): hdf_input_volume.create_dataset('data', data=volume_img.get_fdata()) +def parse_args(): + + parser = argparse.ArgumentParser( + description=parse_args.__doc__, + formatter_class=RawTextHelpFormatter) + parser.add_argument('config_file', type=str, + help="Configuration file to load subjects and their" + " volumes.") + parser.add_argument('output', type=str, + help="Output filename including path") + + basis_group = parser.add_argument_group('Basis options') + add_sh_basis_args(basis_group) + + arguments = parser.parse_args() + if arguments.sh_basis == 'tournier07': + parser.error('Only descoteaux07 basis is supported') + return arguments + + +def main(): + """ Parse args, generate dataset and save it on disk """ + args = parse_args() + + with Timer("Generating dataset", newline=True): + generate_dataset(config_file=args.config_file, + output=args.output) + + if __name__ == "__main__": main() diff --git a/TrackToLearn/datasets/processing.py b/TrackToLearn/datasets/processing.py deleted file mode 100644 index 69b4712..0000000 --- a/TrackToLearn/datasets/processing.py +++ /dev/null @@ -1,43 +0,0 @@ -import numpy as np - -from typing import Optional - - -def min_max_normalize_data_volume( - data: np.ndarray, - normalization_mask: Optional[np.ndarray] = None -) -> np.ndarray: - """ Apply zero-centering and variance normalization to a data volume along each - modality 
in the last axis (for voxels inside a given mask) - - Parameters: - ----------- - data_sh : ndarray of shape (X, Y, Z, #modalities) - Volume to normalize along each modality - normalization_mask : binary ndarray of shape (X, Y, Z) - 3D mask defining which voxels should be used for normalization. - If None, all non-zero voxels will be used. - - Returns - ------- - normalized_data : ndarray of shape (X, Y, Z, #modalities) - Normalized data volume, with zero-mean and unit variance along each - axis of the last dimension - """ - # Normalization in each direction (zero mean and unit variance) - if normalization_mask is None: - # If no mask is given, use non-zero data voxels - normalization_mask = np.zeros(data.shape[:3], dtype=np.int32) - nonzero_idx = np.nonzero(data.sum(axis=-1)) - normalization_mask[nonzero_idx] = 1 - else: - # Mask resolution must fit DWI resolution - assert normalization_mask.shape == data.shape[:3], \ - "Normalization mask resolution does not fit data..." - - normalized_data = data.copy() - idx = np.nonzero(normalization_mask) - v = normalized_data[idx] - normalized_data[idx] = (v - v.min()) / (v.max() - v.min()) - - return normalized_data diff --git a/TrackToLearn/datasets/utils.py b/TrackToLearn/datasets/utils.py index a964dde..016e6f0 100644 --- a/TrackToLearn/datasets/utils.py +++ b/TrackToLearn/datasets/utils.py @@ -1,8 +1,8 @@ +import nibabel as nib import numpy as np from dipy.data import get_sphere from dipy.reconst.csdeconv import sph_harm_ind_list -from dwi_ml.data.dataset.streamline_containers import LazySFTData from scilpy.reconst.utils import get_sh_order_and_fullness from scilpy.reconst.multi_processes import convert_sh_basis @@ -14,12 +14,10 @@ class MRIDataVolume(object): """ def __init__( - self, data=None, affine_vox2rasmm=None, subject_id=None, filename=None + self, data=None, affine_vox2rasmm=None ): self._data = data self.affine_vox2rasmm = affine_vox2rasmm - self.subject_id = subject_id - self.filename = filename @classmethod 
def from_hdf_group(cls, hdf, group, default=None): @@ -29,9 +27,7 @@ def from_hdf_group(cls, hdf, group, default=None): affine_vox2rasmm = np.array( hdf[group].attrs['vox2rasmm'], dtype=np.float32) except KeyError: - print( - "{} is absent from {}, replacing it with empty volume.".format( - group, hdf)) + print('Missing {} from dataset'.format(group)) data = np.zeros_like(hdf[default]['data'], dtype=np.float32) affine_vox2rasmm = np.array( hdf[default].attrs['vox2rasmm'], dtype=np.float32) @@ -58,28 +54,16 @@ def __init__( subject_id: str, input_dv=None, peaks=None, - wm=None, - gm=None, - csf=None, - include=None, - exclude=None, - interface=None, - sft=None, - rewards=None, - states=None + tracking=None, + seeding=None, + reference=None, ): self.subject_id = subject_id self.input_dv = input_dv self.peaks = peaks - self.wm = wm - self.gm = gm - self.csf = csf - self.include = include - self.exclude = exclude - self.interface = interface - self.rewards = rewards - self.states = states - self.sft = sft + self.tracking = tracking + self.seeding = seeding + self.reference = reference @classmethod def from_hdf_subject(cls, hdf_file, subject_id): @@ -88,29 +72,17 @@ def from_hdf_subject(cls, hdf_file, subject_id): input_dv = MRIDataVolume.from_hdf_group(hdf_subject, 'input_volume') peaks = MRIDataVolume.from_hdf_group(hdf_subject, 'peaks_volume') - wm = MRIDataVolume.from_hdf_group(hdf_subject, 'wm_volume') - gm = MRIDataVolume.from_hdf_group(hdf_subject, 'gm_volume') - csf = MRIDataVolume.from_hdf_group( - hdf_subject, 'csf_volume', 'wm_volume') - include = MRIDataVolume.from_hdf_group( - hdf_subject, 'include_volume', 'wm_volume') - exclude = MRIDataVolume.from_hdf_group( - hdf_subject, 'exclude_volume', 'wm_volume') - interface = MRIDataVolume.from_hdf_group( - hdf_subject, 'interface_volume', 'wm_volume') - - states = None - sft = None - rewards = None - if 'streamlines' in hdf_subject: - sft = LazySFTData.init_from_hdf_info( - hdf_subject['streamlines']) - rewards 
= np.array(hdf_subject['streamlines']['rewards']) + tracking = MRIDataVolume.from_hdf_group(hdf_subject, 'tracking_volume') + seeding = MRIDataVolume.from_hdf_group( + hdf_subject, 'seeding_volume', 'tracking_volume') + anatomy = MRIDataVolume.from_hdf_group( + hdf_subject, 'anat_volume', 'tracking_volume') + + reference = nib.Nifti1Image(anatomy.data, anatomy.affine_vox2rasmm) return cls( - subject_id, input_dv=input_dv, wm=wm, gm=gm, csf=csf, - include=include, exclude=exclude, interface=interface, - peaks=peaks, sft=sft, rewards=rewards, states=states) + subject_id, input_dv=input_dv, tracking=tracking, + seeding=seeding, reference=reference, peaks=peaks) def convert_length_mm2vox( diff --git a/TrackToLearn/environments/backward_tracking_env.py b/TrackToLearn/environments/backward_tracking_env.py deleted file mode 100644 index 44565c2..0000000 --- a/TrackToLearn/environments/backward_tracking_env.py +++ /dev/null @@ -1,233 +0,0 @@ -import functools -import numpy as np -import torch - -from dipy.io.stateful_tractogram import StatefulTractogram -from nibabel.streamlines import Tractogram - -from TrackToLearn.datasets.utils import ( - convert_length_mm2vox, -) - -from TrackToLearn.environments.reward import Reward - -from TrackToLearn.environments.stopping_criteria import ( - is_flag_set, - BinaryStoppingCriterion, - CmcStoppingCriterion, - StoppingFlags) - -from TrackToLearn.environments.tracking_env import TrackingEnvironment - -from TrackToLearn.environments.utils import ( - get_neighborhood_directions, - is_too_curvy, - is_too_long) - - -class BackwardTrackingEnvironment(TrackingEnvironment): - """ Pre-initialized environment. Tracking will start at the seed from - flipped half-streamlines. 
- """ - - def __init__(self, env: TrackingEnvironment, env_dto: dict): - - # Volumes and masks - self.reference = env.reference - self.affine_vox2rasmm = env.affine_vox2rasmm - self.affine_rasmm2vox = env.affine_rasmm2vox - - self.data_volume = env.data_volume - self.tracking_mask = env.tracking_mask - self.target_mask = env.target_mask - self.include_mask = env.include_mask - self.exclude_mask = env.exclude_mask - self.peaks = env.peaks - - self.normalize_obs = False # env_dto['normalize'] - self.obs_rms = None - - self._state_size = None # to be calculated later - - # Tracking parameters - self.n_signal = env_dto['n_signal'] - self.n_dirs = env_dto['n_dirs'] - self.theta = theta = env_dto['theta'] - self.cmc = env_dto['cmc'] - self.asymmetric = env_dto['asymmetric'] - - step_size_mm = env_dto['step_size'] - min_length_mm = env_dto['min_length'] - max_length_mm = env_dto['max_length'] - add_neighborhood_mm = env_dto['add_neighborhood'] - - # Reward parameters - self.alignment_weighting = env_dto['alignment_weighting'] - self.straightness_weighting = env_dto['straightness_weighting'] - self.length_weighting = env_dto['length_weighting'] - self.target_bonus_factor = env_dto['target_bonus_factor'] - self.exclude_penalty_factor = env_dto['exclude_penalty_factor'] - self.angle_penalty_factor = env_dto['angle_penalty_factor'] - self.compute_reward = env_dto['compute_reward'] - - self.rng = env_dto['rng'] - self.device = env_dto['device'] - - # Stopping criteria is a dictionary that maps `StoppingFlags` - # to functions that indicate whether streamlines should stop or not - self.stopping_criteria = {} - mask_data = env.tracking_mask.data.astype(np.uint8) - - self.step_size = convert_length_mm2vox( - step_size_mm, - self.affine_vox2rasmm) - self.min_length = min_length_mm - self.max_length = max_length_mm - - # Compute maximum length - self.max_nb_steps = int(self.max_length / step_size_mm) - self.min_nb_steps = int(self.min_length / step_size_mm) - - if 
self.compute_reward: - self.reward_function = Reward( - peaks=self.peaks, - exclude=self.exclude_mask, - target=self.target_mask, - max_nb_steps=self.max_nb_steps, - theta=self.theta, - min_nb_steps=self.min_nb_steps, - asymmetric=self.asymmetric, - alignment_weighting=self.alignment_weighting, - straightness_weighting=self.straightness_weighting, - length_weighting=self.length_weighting, - target_bonus_factor=self.target_bonus_factor, - exclude_penalty_factor=self.exclude_penalty_factor, - angle_penalty_factor=self.angle_penalty_factor, - scoring_data=None, # TODO: Add scoring back - reference=env.reference) - - self.stopping_criteria[StoppingFlags.STOPPING_LENGTH] = \ - functools.partial(is_too_long, - max_nb_steps=self.max_nb_steps) - - self.stopping_criteria[ - StoppingFlags.STOPPING_CURVATURE] = \ - functools.partial(is_too_curvy, max_theta=theta) - - if self.cmc: - cmc_criterion = CmcStoppingCriterion( - self.include_mask.data, - self.exclude_mask.data, - self.affine_vox2rasmm, - self.step_size, - self.min_nb_steps) - self.stopping_criteria[StoppingFlags.STOPPING_MASK] = cmc_criterion - else: - binary_criterion = BinaryStoppingCriterion( - mask_data, - 0.5) - self.stopping_criteria[StoppingFlags.STOPPING_MASK] = \ - binary_criterion - - # self.stopping_criteria[ - # StoppingFlags.STOPPING_LOOP] = \ - # functools.partial(is_looping, - # loop_threshold=300) - - # Convert neighborhood to voxel space - self.add_neighborhood_vox = None - if add_neighborhood_mm: - self.add_neighborhood_vox = convert_length_mm2vox( - add_neighborhood_mm, - self.affine_vox2rasmm) - self.neighborhood_directions = torch.tensor( - get_neighborhood_directions( - radius=self.add_neighborhood_vox), - dtype=torch.float16).to(self.device) - - @classmethod - def from_env( - cls, - env_dto: dict, - env: TrackingEnvironment, - ): - """ Initialize the environment from a `forward` environment. 
- """ - return cls(env, env_dto) - - def reset(self, streamlines: np.ndarray) -> np.ndarray: - """ Initialize tracking based on half-streamlines. - - Parameters - ---------- - streamlines : list - Half-streamlines to initialize environment - - Returns - ------- - state: numpy.ndarray - Initial state for RL model - """ - - # Half-streamlines - self.seeding_streamlines = [s[:] for s in streamlines] - N = len(streamlines) - - # Jagged arrays ugh - # This is dirty, clean up asap - self.half_lengths = np.asarray( - [len(s) for s in self.seeding_streamlines]) - max_half_len = max(self.half_lengths) - half_streamlines = np.zeros( - (N, max_half_len, 3), dtype=np.float32) - - for i, s in enumerate(self.seeding_streamlines): - le = self.half_lengths[i] - half_streamlines[i, :le, :] = s - - self.initial_points = np.asarray([s[0] for s in streamlines]) - - # Initialize seeds as streamlines - self.streamlines = np.concatenate((np.zeros( - (N, self.max_nb_steps + 1, 3), - dtype=np.float32), half_streamlines), axis=1) - - self.streamlines = np.flip(self.streamlines, axis=1) - # This means that all streamlines in the batch are limited by the - # longest half-streamline :( - self.lengths = np.ones(N, dtype=np.int32) * max_half_len - - # Done flags for tracking backwards - self.dones = np.full(N, False) - self.max_half_len = max_half_len - self.length = max_half_len - self.continue_idx = np.arange(N) - self.flags = np.zeros(N, dtype=int) - - # Signal - return self._format_state(self.streamlines[:, :self.length]) - - def get_streamlines(self) -> StatefulTractogram: - - tractogram = Tractogram() - # Get both parts of the streamlines. - stopped_streamlines = [self.streamlines[ - i, self.max_half_len - self.half_lengths[i]:self.lengths[i], :] - for i in range(len(self.streamlines))] - - # Remove last point if the resulting segment had an angle too high. 
- flags = is_flag_set( - self.flags, StoppingFlags.STOPPING_CURVATURE) - stopped_streamlines = [ - s[:-1] if f else s for f, s in zip(flags, stopped_streamlines)] - - stopped_seeds = self.initial_points - - # Harvested tractogram - tractogram = Tractogram( - streamlines=stopped_streamlines, - data_per_streamline={"seeds": stopped_seeds, - }, - affine_to_rasmm=self.affine_vox2rasmm) - - return tractogram diff --git a/TrackToLearn/environments/env.py b/TrackToLearn/environments/env.py index 5c0abbb..b007100 100644 --- a/TrackToLearn/environments/env.py +++ b/TrackToLearn/environments/env.py @@ -1,201 +1,275 @@ import functools -import h5py -import numpy as np -import nibabel as nib -import torch - -from gymnasium.wrappers.normalize import RunningMeanStd -from nibabel.streamlines import Tractogram from typing import Callable, Dict, Tuple -from TrackToLearn.datasets.utils import ( - convert_length_mm2vox, - MRIDataVolume, - SubjectData, - set_sh_order_basis -) - -from TrackToLearn.environments.reward import Reward - +import nibabel as nib +import numpy as np +import torch +from dipy.core.sphere import HemiSphere +from dipy.data import get_sphere +from dipy.direction.peaks import reshape_peaks_for_visualization +from dipy.tracking import utils as track_utils +from dwi_ml.data.processing.volume.interpolation import \ + interpolate_volume_in_neighborhood +from dwi_ml.data.processing.space.neighborhood import \ + get_neighborhood_vectors_axes +from scilpy.reconst.utils import (find_order_from_nb_coeff, get_b_matrix, + get_maximas) +from torch.utils.data import DataLoader + +from TrackToLearn.datasets.SubjectDataset import SubjectDataset +from TrackToLearn.datasets.utils import (MRIDataVolume, + convert_length_mm2vox, + set_sh_order_basis) +from TrackToLearn.environments.local_reward import PeaksAlignmentReward +from TrackToLearn.environments.oracle_reward import OracleReward +from TrackToLearn.environments.reward import RewardFunction from 
TrackToLearn.environments.stopping_criteria import ( - BinaryStoppingCriterion, - CmcStoppingCriterion, + BinaryStoppingCriterion, OracleStoppingCriterion, StoppingFlags) +from TrackToLearn.environments.utils import ( # is_looping, + is_too_curvy, is_too_long) +from TrackToLearn.utils.utils import normalize_vectors -from TrackToLearn.environments.utils import ( - get_neighborhood_directions, - get_sh, - is_too_curvy, - is_too_long) +# from dipy.io.utils import get_reference_info class BaseEnv(object): """ - Abstract tracking environment. - TODO: Add more explanations + Abstract tracking environment. This class should not be used directly. + Instead, use `TrackingEnvironment` or `InferenceTrackingEnvironment`. + + Track-to-Learn environments are based on OpenAI Gym environments. They + are used to train reinforcement learning algorithms. They also emulate + "Trackers" in dipy by handling streamline propagation, stopping criteria, + and seeds. + + Since many streamlines are propagated in parallel, the environment is + similar to VectorizedEnvironments in the Gym definition. However, the + environment is not vectorized in the sense that it does not reset + trajectories (streamlines) independently. + + TODO: reset trajectories independently ? + """ def __init__( self, - input_volume: MRIDataVolume, - tracking_mask: MRIDataVolume, - target_mask: MRIDataVolume, - seeding_mask: MRIDataVolume, - peaks: MRIDataVolume, + subject_data: str, + split_id: str, env_dto: dict, - include_mask: MRIDataVolume = None, - exclude_mask: MRIDataVolume = None, ): """ + Initialize the environment. This should not be called directly. + Instead, use `from_dataset` or `from_files`. 
+ Parameters ---------- - input_volume: MRIDataVolume - Volumetric data containing the SH coefficients - tracking_mask: MRIDataVolume - Volumetric mask where tracking is allowed - target_mask: MRIDataVolume - Mask representing the tracking endpoints - seeding_mask: MRIDataVolume - Mask where seeding should be done - peaks: MRIDataVolume - Volume containing the fODFs peaks + subject_data: str or tuple + Path to the HDF5 file containing the dataset, or a tuple + of already-loaded volumes. + split_id: str + Name of the split to load (e.g. 'training', + 'validation', 'testing'). env_dto: dict DTO containing env. parameters - include_mask: MRIDataVolume - Mask representing the tracking go zones. Only useful if - using CMC. - exclude_mask: MRIDataVolume - Mask representing the tracking no-go zones. Only useful if - using CMC. - """ - # Volumes and masks - self.affine_vox2rasmm = input_volume.affine_vox2rasmm - self.affine_rasmm2vox = np.linalg.inv(self.affine_vox2rasmm) + """ - self.data_volume = torch.tensor( - input_volume.data, dtype=torch.float32, device=env_dto['device']) - self.tracking_mask = tracking_mask - self.target_mask = target_mask - self.include_mask = include_mask - self.exclude_mask = exclude_mask - self.peaks = peaks + # If the subject data is a string, it is assumed to be a path to + # an HDF5 file. Otherwise, it is assumed to be a tuple of volumes + if type(subject_data) is str: + self.dataset_file = subject_data + self.split = split_id + + def collate_fn(data): + return data + + self.dataset = SubjectDataset( + self.dataset_file, self.split) + self.loader = DataLoader(self.dataset, 1, shuffle=True, + collate_fn=collate_fn, + num_workers=2) + self.loader_iter = iter(self.loader) + else: + self.subject_data = subject_data + self.split = split_id + # Unused: this is from an attempt to normalize the input data + # as is done by the original PPO impl. + # It does not seem to be necessary here. 
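The comment above refers to PPO-style observation normalization, which the code removed elsewhere in this diff implemented with gymnasium's `RunningMeanStd`. As a point of reference, here is a minimal, hypothetical sketch of that kind of running normalizer; it is not part of Track-to-Learn and the class name is made up:

```python
import numpy as np


class RunningNormalizer:
    """ Minimal running mean/variance observation normalizer, in the
    spirit of the PPO-style normalization mentioned above. Hypothetical
    sketch; not part of Track-to-Learn. """

    def __init__(self, shape):
        self.mean = np.zeros(shape, dtype=np.float64)
        self.var = np.ones(shape, dtype=np.float64)
        self.count = 1e-4  # avoids division by zero on the first update

    def update(self, batch):
        """ Fold a batch of observations into the running statistics
        using the parallel (Chan et al.) mean/variance update. """
        batch_mean = batch.mean(axis=0)
        batch_var = batch.var(axis=0)
        batch_count = batch.shape[0]

        delta = batch_mean - self.mean
        total = self.count + batch_count

        # Combine the running and batch statistics
        self.mean = self.mean + delta * batch_count / total
        m_a = self.var * self.count
        m_b = batch_var * batch_count
        m2 = m_a + m_b + delta ** 2 * self.count * batch_count / total
        self.var = m2 / total
        self.count = total

    def normalize(self, obs):
        return (obs - self.mean) / np.sqrt(self.var + 1e-8)
```

Calling `update` on each batch of states before `normalize` keeps observations roughly zero-mean and unit-variance, which is what the removed `obs_rms` attribute was for.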
self.normalize_obs = False # env_dto['normalize'] self.obs_rms = None self._state_size = None # to be calculated later - self.reference = env_dto['reference'] - # Tracking parameters - self.n_signal = env_dto['n_signal'] self.n_dirs = env_dto['n_dirs'] - self.theta = theta = env_dto['theta'] + self.theta = env_dto['theta'] + # Number of seeds per voxel self.npv = env_dto['npv'] - self.cmc = env_dto['cmc'] - self.asymmetric = env_dto['asymmetric'] + # Threshold for the binary stopping criterion + self.binary_stopping_threshold = env_dto['binary_stopping_threshold'] - step_size_mm = env_dto['step_size'] - min_length_mm = env_dto['min_length'] - max_length_mm = env_dto['max_length'] - add_neighborhood_mm = env_dto['add_neighborhood'] + # Step-size and min/max lengths are typically defined in mm + # by the user, but need to be converted to voxels. + self.step_size_mm = env_dto['step_size'] + self.min_length_mm = env_dto['min_length'] + self.max_length_mm = env_dto['max_length'] + + # Oracle parameters + self.oracle_checkpoint = env_dto['oracle_checkpoint'] + self.oracle_stopping_criterion = env_dto['oracle_stopping_criterion'] + + # Tractometer parameters + self.scoring_data = env_dto['scoring_data'] # Reward parameters - self.alignment_weighting = env_dto['alignment_weighting'] - self.straightness_weighting = env_dto['straightness_weighting'] - self.length_weighting = env_dto['length_weighting'] - self.target_bonus_factor = env_dto['target_bonus_factor'] - self.exclude_penalty_factor = env_dto['exclude_penalty_factor'] - self.angle_penalty_factor = env_dto['angle_penalty_factor'] self.compute_reward = env_dto['compute_reward'] - self.scoring_data = env_dto['scoring_data'] + # "Local" reward parameters + self.alignment_weighting = env_dto['alignment_weighting'] + # "Sparse" reward parameters + self.oracle_bonus = env_dto['oracle_bonus'] + # Other parameters self.rng = env_dto['rng'] self.device = env_dto['device'] - # Stopping criteria is a dictionary that maps 
`StoppingFlags` - # to functions that indicate whether streamlines should stop or not - self.stopping_criteria = {} - mask_data = tracking_mask.data.astype(np.uint8) + # Load one subject as an example + self.load_subject() + + def load_subject( + self, + ): + """ Load a random subject from the dataset. This is used to + initialize the environment. """ + + if hasattr(self, 'dataset_file'): + + if hasattr(self, 'subject_id') and len(self.dataset) == 1: + return + + try: + (sub_id, input_volume, tracking_mask, seeding_mask, + peaks, reference) = next(self.loader_iter)[0] + except StopIteration: + self.loader_iter = iter(self.loader) + (sub_id, input_volume, tracking_mask, seeding_mask, + peaks, reference) = next(self.loader_iter)[0] + + self.subject_id = sub_id + # Affines + self.reference = reference + self.affine_vox2rasmm = input_volume.affine_vox2rasmm + self.affine_rasmm2vox = np.linalg.inv(self.affine_vox2rasmm) + + # Volumes and masks + self.data_volume = torch.from_numpy( + input_volume.data).to(self.device, dtype=torch.float32) + else: + (input_volume, tracking_mask, seeding_mask, peaks, + reference) = self.subject_data + self.affine_vox2rasmm = input_volume.affine_vox2rasmm + self.affine_rasmm2vox = np.linalg.inv(self.affine_vox2rasmm) + + # Volumes and masks + self.data_volume = torch.from_numpy( + input_volume.data).to(self.device, dtype=torch.float32) + + self.reference = reference + + self.tracking_mask = tracking_mask + self.peaks = peaks + mask_data = tracking_mask.data.astype(np.uint8) self.seeding_data = seeding_mask.data.astype(np.uint8) self.step_size = convert_length_mm2vox( - step_size_mm, + self.step_size_mm, self.affine_vox2rasmm) - self.min_length = min_length_mm - self.max_length = max_length_mm + self.min_length = self.min_length_mm + self.max_length = self.max_length_mm # Compute maximum length - self.max_nb_steps = int(self.max_length / step_size_mm) - self.min_nb_steps = int(self.min_length / step_size_mm) + self.max_nb_steps = 
int(self.max_length / self.step_size_mm) + self.min_nb_steps = int(self.min_length / self.step_size_mm) - if self.compute_reward: - self.reward_function = Reward( - peaks=self.peaks, - exclude=self.exclude_mask, - target=self.target_mask, - max_nb_steps=self.max_nb_steps, - theta=self.theta, - min_nb_steps=self.min_nb_steps, - asymmetric=self.asymmetric, - alignment_weighting=self.alignment_weighting, - straightness_weighting=self.straightness_weighting, - length_weighting=self.length_weighting, - target_bonus_factor=self.target_bonus_factor, - exclude_penalty_factor=self.exclude_penalty_factor, - angle_penalty_factor=self.angle_penalty_factor, - scoring_data=self.scoring_data, - reference=self.reference) + # Neighborhood used as part of the state + self.add_neighborhood_vox = convert_length_mm2vox( + self.step_size_mm, + self.affine_vox2rasmm) + self.neighborhood_directions = torch.cat( + (torch.zeros((1, 3)), + get_neighborhood_vectors_axes(1, self.add_neighborhood_vox)) + ).to(self.device) + # Tracking seeds + self.seeds = track_utils.random_seeds_from_mask( + self.seeding_data, + np.eye(4), + seeds_count=self.npv) + # print( + # '{} has {} seeds.'.format(self.__class__.__name__, + # len(self.seeds))) + + # =========================================== + # Stopping criteria + # =========================================== + + # Stopping criteria is a dictionary that maps `StoppingFlags` + # to functions that indicate whether streamlines should stop or not + + # TODO: Make all stopping criteria classes. + # TODO?: Use dipy's stopping criteria instead of custom ones ? 
+ self.stopping_criteria = {} + + # Length criterion self.stopping_criteria[StoppingFlags.STOPPING_LENGTH] = \ functools.partial(is_too_long, max_nb_steps=self.max_nb_steps) + # Angle between segment (curvature criterion) self.stopping_criteria[ StoppingFlags.STOPPING_CURVATURE] = \ - functools.partial(is_too_curvy, max_theta=theta) - - if self.cmc: - cmc_criterion = CmcStoppingCriterion( - self.include_mask.data, - self.exclude_mask.data, + functools.partial(is_too_curvy, max_theta=self.theta) + + # Stopping criterion according to an oracle + if self.oracle_checkpoint and self.oracle_stopping_criterion: + self.stopping_criteria[ + StoppingFlags.STOPPING_ORACLE] = OracleStoppingCriterion( + self.oracle_checkpoint, + self.min_nb_steps * 5, + self.reference, self.affine_vox2rasmm, - self.step_size, - self.min_nb_steps) - self.stopping_criteria[StoppingFlags.STOPPING_MASK] = cmc_criterion - else: - binary_criterion = BinaryStoppingCriterion( - mask_data, - 0.5) - self.stopping_criteria[StoppingFlags.STOPPING_MASK] = \ - binary_criterion - - # self.stopping_criteria[ - # StoppingFlags.STOPPING_LOOP] = \ - # functools.partial(is_looping, - # loop_threshold=300) - - # Convert neighborhood to voxel space - self.add_neighborhood_vox = None - if add_neighborhood_mm: - self.add_neighborhood_vox = convert_length_mm2vox( - add_neighborhood_mm, - self.affine_vox2rasmm) - self.neighborhood_directions = torch.tensor( - get_neighborhood_directions( - radius=self.add_neighborhood_vox), - dtype=torch.float16).to(self.device) + self.device) - # Tracking seeds - self.seeds = self._get_tracking_seeds_from_mask( - self.seeding_data, - self.npv, - self.rng) - print( - '{} has {} seeds.'.format(self.__class__.__name__, - len(self.seeds))) + # Mask criterion (either binary or CMC) + binary_criterion = BinaryStoppingCriterion( + mask_data, + self.binary_stopping_threshold) + self.stopping_criteria[StoppingFlags.STOPPING_MASK] = \ + binary_criterion + + # 
========================================== + # Reward function + # ========================================= + + # Reward function and reward factors + if self.compute_reward: + # Reward streamline according to alignment with local peaks + peaks_reward = PeaksAlignmentReward(self.peaks) + oracle_reward = OracleReward(self.oracle_checkpoint, + self.min_nb_steps, + self.reference, + self.affine_vox2rasmm, + self.device) + + # Combine all reward factors into the reward function + self.reward_function = RewardFunction( + [peaks_reward, + oracle_reward], + [self.alignment_weighting, + self.oracle_bonus]) @classmethod def from_dataset( @@ -203,107 +277,102 @@ def from_dataset( env_dto: dict, split: str, ): + """ Initialize the environment from an HDF5. + + Parameters + ---------- + env_dto: dict + DTO containing env. parameters + split: str + Name of the split to load (e.g. 'training', 'validation', + 'testing'). + + Returns + ------- + env: BaseEnv + Environment initialized from a dataset. + """ + dataset_file = env_dto['dataset_file'] - subject_id = env_dto['subject_id'] - interface_seeding = env_dto['interface_seeding'] - - (input_volume, tracking_mask, include_mask, exclude_mask, target_mask, - seeding_mask, peaks) = \ - BaseEnv._load_dataset( - dataset_file, split, subject_id, interface_seeding - ) - - return cls( - input_volume, - tracking_mask, - target_mask, - seeding_mask, - peaks, - env_dto, - include_mask, - exclude_mask, - ) + + env = cls(dataset_file, split, env_dto) + return env @classmethod def from_files( cls, env_dto: dict, ): + """ Initialize the environment from files. This is useful for + tracking from a trained model. + + Parameters + ---------- + env_dto: dict + DTO containing env. parameters + + Returns + ------- + env: BaseEnv + Environment initialized from files. 
+ """ + in_odf = env_dto['in_odf'] - wm_file = env_dto['wm_file'] in_seed = env_dto['in_seed'] in_mask = env_dto['in_mask'] sh_basis = env_dto['sh_basis'] + reference = env_dto['reference'] - input_volume, tracking_mask, seeding_mask = BaseEnv._load_files( - in_odf, - wm_file, - in_seed, - in_mask, - sh_basis) - - return cls( - input_volume, - tracking_mask, - None, - seeding_mask, - None, - env_dto) + (input_volume, peaks_volume, tracking_mask, seeding_mask) = \ + BaseEnv._load_files( + in_odf, + in_seed, + in_mask, + sh_basis) - @classmethod - def _load_dataset( - cls, dataset_file, split_id, subject_id, interface_seeding=False - ): - """ Load data volumes and masks from the HDF5 + subj_files = (input_volume, tracking_mask, seeding_mask, + peaks_volume, reference) - Should everything be put into `self` ? Should everything be returned - instead ? - """ - - print("Loading {} from the {} set.".format(subject_id, split_id)) - # Load input volume - with h5py.File( - dataset_file, 'r' - ) as hdf_file: - print(list(hdf_file.keys())) - assert split_id in ['training', 'validation', 'testing'] - split_set = hdf_file[split_id] - tracto_data = SubjectData.from_hdf_subject( - split_set, subject_id) - tracto_data.input_dv.subject_id = subject_id - input_volume = tracto_data.input_dv - - # Load peaks for reward - peaks = tracto_data.peaks - - # Load tracking mask - tracking_mask = tracto_data.wm - - # Load target and exclude masks - target_mask = tracto_data.gm - - include_mask = tracto_data.include - exclude_mask = tracto_data.exclude - - if interface_seeding: - print("Seeding from the interface") - seeding = tracto_data.interface - else: - print("Seeding from the WM.") - seeding = tracto_data.wm - - return (input_volume, tracking_mask, include_mask, exclude_mask, - target_mask, seeding, peaks) + return cls(subj_files, 'testing', env_dto) @classmethod def _load_files( cls, signal_file, - wm_file, in_seed, in_mask, - sh_basis + sh_basis, ): + """ Load data volumes and masks 
from files. This is useful for + tracking from a trained model. + + If the signal is not in descoteaux07 basis, it will be converted. The + WM mask will be loaded and concatenated to the signal. Additionally, + peaks will be computed from the signal. + + Parameters + ---------- + signal_file: str + Path to the signal file (e.g. SH coefficients). + in_seed: str + Path to the seeding mask. + in_mask: str + Path to the tracking mask. + sh_basis: str + Basis of the SH coefficients. + + Returns + ------- + signal_volume: MRIDataVolume + Volumetric data containing the SH coefficients + peaks_volume: MRIDataVolume + Volume containing the fODFs peaks + tracking_volume: MRIDataVolume + Volumetric mask where tracking is allowed + seeding_volume: MRIDataVolume + Mask where seeding should be done + """ + signal = nib.load(signal_file) # Assert that the subject has iso voxels, else stuff will get @@ -316,36 +385,73 @@ def _load_files( data = set_sh_order_basis(signal.get_fdata(dtype=np.float32), sh_basis, - target_order=6, + target_order=8, target_basis='descoteaux07') + # Compute peaks from signal + # Does not work if signal is not fODFs + npeaks = 5 + odf_shape_3d = data.shape[:-1] + peak_dirs = np.zeros((odf_shape_3d + (npeaks, 3))) + peak_values = np.zeros((odf_shape_3d + (npeaks, ))) + + sphere = HemiSphere.from_sphere(get_sphere("repulsion724") + ).subdivide(0) + + b_matrix = get_b_matrix( + find_order_from_nb_coeff(data), sphere, "descoteaux07") + + for idx in np.argwhere(np.sum(data, axis=-1)): + idx = tuple(idx) + directions, values, indices = get_maximas(data[idx], + sphere, b_matrix, + 0.1, 0) + if values.shape[0] != 0: + n = min(npeaks, values.shape[0]) + peak_dirs[idx][:n] = directions[:n] + peak_values[idx][:n] = values[:n] + + X, Y, Z, N, P = peak_dirs.shape + peak_values = np.divide(peak_values, peak_values[..., 0, None], + out=np.zeros_like(peak_values), + where=peak_values[..., 0, None] != 0) + peak_dirs[...] 
*= peak_values[..., :, None] + peak_dirs = reshape_peaks_for_visualization(peak_dirs) + + # Load rest of volumes seeding = nib.load(in_seed) tracking = nib.load(in_mask) - wm = nib.load(wm_file) - wm_data = wm.get_fdata() - if len(wm_data.shape) == 3: - wm_data = wm_data[..., None] - - signal_data = np.concatenate( - [data, wm_data], axis=-1) - + signal_data = data signal_volume = MRIDataVolume( - signal_data, signal.affine, filename=signal_file) + signal_data, signal.affine) + + peaks_volume = MRIDataVolume( + peak_dirs, signal.affine) seeding_volume = MRIDataVolume( - seeding.get_fdata(), seeding.affine, filename=in_seed) + seeding.get_fdata(), seeding.affine) tracking_volume = MRIDataVolume( - tracking.get_fdata(), tracking.affine, filename=in_mask) + tracking.get_fdata(), tracking.affine) - return (signal_volume, tracking_volume, seeding_volume) + return (signal_volume, peaks_volume, tracking_volume, seeding_volume) def get_state_size(self): + """ Returns the size of the state space by computing the size of + an example state. + + Returns + ------- + state_size: int + Size of the state space. + """ + example_state = self.reset(0, 1) self._state_size = example_state.shape[1] return self._state_size def get_action_size(self): - """ TODO: Support spherical actions""" + """ Returns the size of the action space. + """ return 3 @@ -364,99 +470,16 @@ def get_voxel_size(self): return voxel_size - def set_step_size(self, step_size_mm): - """ Set a different step size (in voxels) than computed by the - environment. This is necessary when the voxel size between training - and tracking envs is different. 
- """ - - self.step_size = convert_length_mm2vox( - step_size_mm, - self.affine_vox2rasmm) - - if self.add_neighborhood_vox: - self.add_neighborhood_vox = convert_length_mm2vox( - step_size_mm, - self.affine_vox2rasmm) - self.neighborhood_directions = torch.tensor( - get_neighborhood_directions( - radius=self.add_neighborhood_vox), - dtype=torch.float16).to(self.device) - - # Compute maximum length - self.max_nb_steps = int(self.max_length / step_size_mm) - self.min_nb_steps = int(self.min_length / step_size_mm) - - if self.compute_reward: - self.reward_function = Reward( - peaks=self.peaks, - exclude=self.exclude_mask, - target=self.target_mask, - max_nb_steps=self.max_nb_steps, - theta=self.theta, - min_nb_steps=self.min_nb_steps, - asymmetric=self.asymmetric, - alignment_weighting=self.alignment_weighting, - straightness_weighting=self.straightness_weighting, - length_weighting=self.length_weighting, - target_bonus_factor=self.target_bonus_factor, - exclude_penalty_factor=self.exclude_penalty_factor, - angle_penalty_factor=self.angle_penalty_factor, - scoring_data=self.scoring_data, - reference=self.reference) - - self.stopping_criteria[StoppingFlags.STOPPING_LENGTH] = \ - functools.partial(is_too_long, - max_nb_steps=self.max_nb_steps) - - if self.cmc: - cmc_criterion = CmcStoppingCriterion( - self.include_mask.data, - self.exclude_mask.data, - self.affine_vox2rasmm, - self.step_size, - self.min_nb_steps) - self.stopping_criteria[StoppingFlags.STOPPING_MASK] = cmc_criterion - - def _normalize(self, obs): - """Normalises the observation using the running mean and variance of - the observations. 
Taken from Gymnasium.""" - if self.obs_rms is None: - self.obs_rms = RunningMeanStd(shape=(self._state_size,)) - self.obs_rms.update(obs) - return (obs - self.obs_rms.mean) / np.sqrt(self.obs_rms.var + 1e-8) - - def _get_tracking_seeds_from_mask( + def _format_actions( self, - mask: np.ndarray, - npv: int, - rng: np.random.RandomState - ) -> np.ndarray: - """ Given a binary seeding mask, get seeds in DWI voxel - space using the provided affine. TODO: Replace this - with scilpy's SeedGenerator - - Parameters - ---------- - mask : 3D `numpy.ndarray` - Binary seeding mask - npv : int - rng : `numpy.random.RandomState` - - Returns - ------- - seeds : `numpy.ndarray` + actions: np.ndarray, + ): + """ Format actions to be used by the environment. Scaling + actions to the step size. """ - seeds = [] - indices = np.array(np.where(mask)).T - for idx in indices: - seeds_in_seeding_voxel = idx + rng.uniform( - -0.5, - 0.5, - size=(npv, 3)) - seeds.extend(seeds_in_seeding_voxel) - seeds = np.array(seeds, dtype=np.float16) - return seeds + actions = normalize_vectors(actions) * self.step_size + + return actions def _format_state( self, @@ -477,43 +500,51 @@ def _format_state( Observations of the state, incl. previous directions. 
""" N, L, P = streamlines.shape + if N <= 0: return [] + + # Get the last point of each streamline segments = streamlines[:, -1, :][:, None, :] - signal = get_sh( - segments, - self.data_volume, - self.add_neighborhood_vox, - self.neighborhood_directions, - self.n_signal, - self.device - ) + # Reshape to get a list of coordinates + N, H, P = segments.shape + flat_coords = np.reshape(segments, (N * H, P)) + coords = torch.as_tensor(flat_coords).to(self.device) + # Get the SH coefficients at the last point of each streamline + # The neighborhood is used to get the SH coefficients around + # the last point + signal, _ = interpolate_volume_in_neighborhood( + self.data_volume, + coords, + self.neighborhood_directions) N, S = signal.shape + # Placeholder for the final imputs inputs = torch.zeros((N, S + (self.n_dirs * P)), device=self.device) - + # Fill the first part of the inputs with the SH coefficients inputs[:, :S] = signal + # Placeholder for the previous directions previous_dirs = np.zeros((N, self.n_dirs, P), dtype=np.float32) if L > 1: + # Compute directions from the streamlines dirs = np.diff(streamlines, axis=1) + # Fetch the N last directions previous_dirs[:, :min(dirs.shape[1], self.n_dirs), :] = \ dirs[:, :-(self.n_dirs+1):-1, :] + # Flatten the directions to fit in the inputs and send to device dir_inputs = torch.reshape( torch.from_numpy(previous_dirs).to(self.device), (N, self.n_dirs * P)) - + # Fill the second part of the inputs with the previous directions inputs[:, S:] = dir_inputs - # if self.normalize_obs and self._state_size is not None: - # inputs = self._normalize(inputs) - return inputs - def _filter_stopping_streamlines( + def _compute_stopping_flags( self, streamlines: np.ndarray, stopping_criteria: Dict[StoppingFlags, Callable] @@ -557,73 +588,17 @@ def _is_stopping(): """ pass - def reset(): - """ Initialize tracking seeds and streamlines + def reset(self): + """ Reset the environment to its initial state. 
""" - pass + if self.compute_reward: + self.reward_function.reset() def step(): """ - Apply actions and grow streamlines for one step forward - Calculate rewards and if the tracking is done, and compute new - hidden states + Abstract method to be implemented by subclasses which defines + the behavior of the environment when taking a step. This includes + propagating the streamlines, computing the reward, and checking + which streamlines should stop. """ pass - - def render( - self, - tractogram: Tractogram = None, - filename: str = None - ): - """ Render the streamlines, either directly or through a file - Might render from "outside" the environment, like for comet - - Parameters: - ----------- - tractogram: Tractogram, optional - Object containing the streamlines and seeds - path: str, optional - If set, save the image at the specified location instead - of displaying directly - """ - from fury import window, actor - # Might be rendering from outside the environment - if tractogram is None: - tractogram = Tractogram( - streamlines=self.streamlines[:, :self.length], - data_per_streamline={ - 'seeds': self.starting_points - }) - - # Reshape peaks for displaying - X, Y, Z, M = self.peaks.data.shape - peaks = np.reshape(self.peaks.data, (X, Y, Z, 5, M//5)) - - # Setup scene and actors - scene = window.Scene() - - stream_actor = actor.streamtube(tractogram.streamlines) - peak_actor = actor.peak_slicer(peaks, - np.ones((X, Y, Z, M)), - colors=(0.2, 0.2, 1.), - opacity=0.5) - dot_actor = actor.dots(tractogram.data_per_streamline['seeds'], - color=(1, 1, 1), - opacity=1, - dot_size=2.5) - scene.add(stream_actor) - scene.add(peak_actor) - scene.add(dot_actor) - scene.reset_camera_tight(0.95) - - # Save or display scene - if filename is not None: - window.snapshot( - scene, - fname=filename, - offscreen=True, - size=(800, 800)) - else: - showm = window.ShowManager(scene, reset_camera=True) - showm.initialize() - showm.start() diff --git 
a/TrackToLearn/environments/gym/gym_env.py b/TrackToLearn/environments/gym/gym_env.py deleted file mode 100644 index 8de94ab..0000000 --- a/TrackToLearn/environments/gym/gym_env.py +++ /dev/null @@ -1,77 +0,0 @@ -import gym # open ai gym -import numpy as np - -from TrackToLearn.environments.env import BaseEnv - - -class GymWrapper(BaseEnv): - """ - Abstract tracking environment. - TODO: Add more explanations - """ - - def __init__( - self, - env_name: str, - n_envs: int, - device=None, - gamma=0.99, - seed=1337, - **kwargs, - ): - """ - Parameters - ---------- - - """ - self.n_envs = n_envs - self._inner_envs = [] - for i in range(self.n_envs): - env = gym.make(env_name, **kwargs) - env = gym.wrappers.ClipAction(env) - env = gym.wrappers.NormalizeObservation(env) - env = gym.wrappers.TransformObservation( - env, lambda obs: np.clip(obs, -10, 10)) - env = gym.wrappers.NormalizeReward(env, gamma=gamma) - env = gym.wrappers.TransformReward( - env, lambda reward: np.clip(reward, -10, 10)) - # env.seed(seed) - # env.action_space.seed(seed) - # env.observation_space.seed(seed) - self._inner_envs.append(env) - - self.dones = np.asarray([False] * self.n_envs) - - def reset(self): - states = np.asarray([ - self._inner_envs[i].reset()[0] for i in range(self.n_envs)]) - self.dones = np.asarray([False] * self.n_envs) - return states - - def step(self, action): - not_done = [not d for d in self.dones] - indices = np.asarray(range(self.n_envs)) - indices = indices[not_done] - ns, r, d, t, *_ = zip(*[ - self._inner_envs[j].step( - action[i]) for i, j in enumerate(indices)]) - - n_i, r_i, d_i, t_i = ( - np.asarray(ns), np.asarray(r), np.asarray(d), np.asarray(t)) - self.dones[indices[d_i]] = True - self.dones[indices[t_i]] = True - not_dones = [(not d) and (not t) for (d, t) in zip(d_i, t_i)] - self.continue_idx = np.arange(len(d_i))[not_dones] - return n_i, r_i, np.logical_or(d_i, t_i), {} - - def render(self, **kwargs): - self._inner_envs[0].render(**kwargs) - - def 
harvest(self, states, compress=False): - indices = np.asarray(range(self.n_envs)) - indices = indices[self.continue_idx] - states = states[self.continue_idx] - return states, indices - - def get_streamlines(self, compress=False): - return None diff --git a/TrackToLearn/environments/interface_tracking_env.py b/TrackToLearn/environments/interface_tracking_env.py deleted file mode 100644 index 9d6ffac..0000000 --- a/TrackToLearn/environments/interface_tracking_env.py +++ /dev/null @@ -1,107 +0,0 @@ -import numpy as np - -from typing import Tuple - -from TrackToLearn.environments.tracking_env import TrackingEnvironment -from TrackToLearn.environments.noisy_tracker import NoisyTrackingEnvironment -from TrackToLearn.utils.utils import normalize_vectors - - -class InterfaceTrackingEnvironment(TrackingEnvironment): - - def step( - self, - directions: np.ndarray, - ) -> Tuple[np.ndarray, list, bool, dict]: - """ - Apply actions and grow streamlines for one step forward - Calculate rewards and if the tracking is done, and compute new - hidden states - - Parameters - ---------- - directions: np.ndarray - Actions applied to the state - - Returns - ------- - state: np.ndarray - New state - reward: list - Reward for the last step of the streamline - done: bool - Whether the episode is done - info: dict - """ - - # If the streamline goes out the tracking mask at the first - # step, flip it - if self.length == 1: - # Scale directions to step size - directions = normalize_vectors(directions) * self.step_size - - # Grow streamlines one step forward - streamlines = self.streamlines[self.continue_idx].copy() - streamlines[:, self.length, :] = \ - self.streamlines[self.continue_idx, - self.length-1, :] + directions - - # Get stopping and keeping indexes - stopping, flags = \ - self._is_stopping( - streamlines[:, :self.length + 1]) - - # Flip stopping trajectories - directions[stopping] *= -1 - - return super().step(directions) - - -class 
InterfaceNoisyTrackingEnvironment(NoisyTrackingEnvironment): - - def step( - self, - directions: np.ndarray, - ) -> Tuple[np.ndarray, list, bool, dict]: - """ - Apply actions and grow streamlines for one step forward - Calculate rewards and if the tracking is done, and compute new - hidden states - - Parameters - ---------- - directions: np.ndarray - Actions applied to the state - - Returns - ------- - state: np.ndarray - New state - reward: list - Reward for the last step of the streamline - done: bool - Whether the episode is done - info: dict - """ - - # If the streamline goes out the tracking mask at the first - # step, flip it - if self.length == 1: - # Scale directions to step size - directions = normalize_vectors(directions) * self.step_size - - # Grow streamlines one step forward - streamlines = self.streamlines[self.continue_idx].copy() - streamlines[:, self.length, :] = \ - self.streamlines[self.continue_idx, - self.length-1, :] + directions - - # Get stopping and keeping indexes - stopping, flags = \ - self._is_stopping( - streamlines[:, :self.length + 1]) - - # Flip stopping trajectories - directions[stopping] *= -1 - - return super().step(directions) diff --git a/TrackToLearn/environments/interpolation.py b/TrackToLearn/environments/interpolation.py index 7bcf16c..6f11448 100644 --- a/TrackToLearn/environments/interpolation.py +++ b/TrackToLearn/environments/interpolation.py @@ -1,164 +1,26 @@ import numpy as np -import torch -from scipy.ndimage.interpolation import map_coordinates +# from numba import njit -def torch_trilinear_interpolation( - volume: torch.Tensor, - coords: torch.Tensor, -) -> torch.Tensor: - """Evaluates the data volume at given coordinates using trilinear - interpolation on a torch tensor. - - Interpolation is done using the device on which the volume is stored. 
- - Parameters - ---------- - volume : torch.Tensor with 3D or 4D shape - The input volume to interpolate from - coords : torch.Tensor with shape (N,3) - The coordinates where to interpolate - - Returns - ------- - output : torch.Tensor with shape (N, #modalities) - The list of interpolated values - - References - ---------- - [1] https://spie.org/samples/PM159.pdf - """ - # Get device, and make sure volume and coords are using the same one - assert volume.device == coords.device, "volume on device: {}; " \ - "coords on device: {}".format( - volume.device, - coords.device) - coords = coords.type(torch.float32) - volume = volume.type(torch.float32) - - device = volume.device - - B1_torch = torch.tensor([[1, 0, 0, 0, 0, 0, 0, 0], - [-1, 0, 0, 0, 1, 0, 0, 0], - [-1, 0, 1, 0, 0, 0, 0, 0], - [-1, 1, 0, 0, 0, 0, 0, 0], - [1, 0, -1, 0, -1, 0, 1, 0], - [1, -1, -1, 1, 0, 0, 0, 0], - [1, -1, 0, 0, -1, 1, 0, 0], - [-1, 1, 1, -1, 1, -1, -1, 1]], - dtype=torch.float32, device=device) - - idx_torch = torch.tensor([[0, 0, 0], - [0, 0, 1], - [0, 1, 0], - [0, 1, 1], - [1, 0, 0], - [1, 0, 1], - [1, 1, 0], - [1, 1, 1]], dtype=torch.float32, device=device) - - if volume.dim() <= 2 or volume.dim() >= 5: - raise ValueError("Volume must be 3D or 4D!") - - if volume.dim() == 3: - # torch needs indices to be cast to long - indices_unclipped = ( - coords[:, None, :] + idx_torch).reshape((-1, 3)).long() - - # Clip indices to make sure we don't go out-of-bounds - lower = torch.as_tensor([0, 0, 0]).to(device) - upper = (torch.as_tensor(volume.shape) - 1).to(device) - indices = torch.min(torch.max(indices_unclipped, lower), upper) - - # Fetch volume data at indices - P = volume[ - indices[:, 0], indices[:, 1], indices[:, 2] - ].reshape((coords.shape[0], -1)).t() - - d = coords - torch.floor(coords) - dx, dy, dz = d[:, 0], d[:, 1], d[:, 2] - Q1 = torch.stack([ - torch.ones_like(dx), dx, dy, dz, dx * dy, dy * dz, - dx * dz, dx * dy * dz], - dim=0) - output = torch.sum(P * torch.mm(B1_torch.t(), 
Q1), dim=0) - - return output - - if volume.dim() == 4: - # 8 coordinates of the corners of the cube, for each input coordinate - indices_unclipped = torch.floor( - coords[:, None, :] + idx_torch).reshape((-1, 3)).long() - - # Clip indices to make sure we don't go out-of-bounds - lower = torch.as_tensor([0, 0, 0], device=device) - upper = torch.as_tensor(volume.shape[:3], device=device) - 1 - indices = torch.min(torch.max(indices_unclipped, lower), upper) - - # Fetch volume data at indices - P = volume[indices[:, 0], indices[:, 1], indices[:, 2], :].reshape( - (coords.shape[0], 8, volume.shape[-1])) - - # Shift 0.5 because fODFs are centered ? - # coords = coords - 0.5 - d = coords - torch.floor(coords) - dx, dy, dz = d[:, 0], d[:, 1], d[:, 2] - Q1 = torch.stack([ - torch.ones_like(dx), dx, dy, dz, dx * dy, - dy * dz, dx * dz, dx * dy * dz], - dim=0) - output = torch.sum( - P * torch.mm(B1_torch.t(), Q1).t()[:, :, None], dim=1) - - return output.type(torch.float32) - - raise ValueError( - "There was a problem with the volume's number of dimensions!") - - -def interpolate_volume_at_coordinates( - volume: np.ndarray, +# @njit +def nearest_neighbor_interpolation( + volume: np.ndarray, coords: np.ndarray, - mode: str = 'nearest', - order: int = 1, - cval: float = 0.0 ) -> np.ndarray: - """ Evaluates a 3D or 4D volume data at the given coordinates using - trilinear interpolation. - - Parameters - ---------- - volume : 3D array or 4D array - Data volume. - coords : ndarray of shape (N, 3) - 3D coordinates where to evaluate the volume data. - mode : str, optional - Points outside the boundaries of the input are filled according to the - given mode (‘constant’, ‘nearest’, ‘reflect’ or ‘wrap’). - Default is ‘nearest’. - ('constant' uses 0.0 as a points outside the boundary) - - Returns - ------- - output : 2D array - Values from volume. """ - # map_coordinates uses the center of the voxel, so should we shift to - the corner? 
+ """ + coords = coords + volume = volume + + if volume.ndim <= 3 or volume.ndim >= 5: + raise ValueError("Volume must be 4D!") - if volume.ndim <= 2 or volume.ndim >= 5: - raise ValueError("Volume must be 3D or 4D!") + indices_unclipped = np.round(coords).astype(np.int32) - if volume.ndim == 3: - return map_coordinates( - volume, coords.T, order=order, mode=mode, cval=cval) + # Clip indices to make sure we don't go out-of-bounds + upper = (np.asarray(volume.shape[:3]) - 1) + indices = np.clip(indices_unclipped, 0, upper).astype(int).T + output = volume[tuple(indices)] - if volume.ndim == 4: - D = volume.shape[-1] - values_4d = np.zeros((coords.shape[0], D)) - for i in range(volume.shape[-1]): - values_4d[:, i] = map_coordinates( - volume[..., i], coords.T, order=order, - mode=mode, cval=cval) - return values_4d + return output diff --git a/TrackToLearn/environments/local_reward.py b/TrackToLearn/environments/local_reward.py new file mode 100644 index 0000000..45a2136 --- /dev/null +++ b/TrackToLearn/environments/local_reward.py @@ -0,0 +1,107 @@ +import numpy as np + +from TrackToLearn.environments.interpolation import ( + nearest_neighbor_interpolation) +from TrackToLearn.datasets.utils import MRIDataVolume +from TrackToLearn.environments.reward import Reward +from TrackToLearn.utils.utils import normalize_vectors + + +class PeaksAlignmentReward(Reward): + + """ Reward streamlines based on their alignment with local peaks + and their past direction. + + Initially proposed in + Théberge, A., Desrosiers, C., Descoteaux, M., & Jodoin, P. M. (2021). + Track-to-learn: A general framework for tractography with deep + reinforcement learning. Medical Image Analysis, 72, 102093. 
+ """ + + def __init__( + self, + peaks: MRIDataVolume, + ): + self.name = 'peaks_reward' + + self.peaks = peaks.data + + def __call__( + self, + streamlines: np.ndarray, + dones: np.ndarray + ): + """ + Parameters + ---------- + streamlines : `numpy.ndarray` of shape (n_streamlines, n_points, 3) + Streamline coordinates in voxel space + + Returns + ------- + rewards: 1D boolean `numpy.ndarray` of shape (n_streamlines,) + Array containing the reward + """ + N, L, _ = streamlines.shape + + if streamlines.shape[1] < 2: + # Not enough segments to compute curvature + return np.ones(len(streamlines), dtype=np.uint8) + + X, Y, Z, P = self.peaks.shape + idx = streamlines[:, -2].astype(np.int32) + + # Get peaks at streamline end + v = nearest_neighbor_interpolation(self.peaks, idx) + + v = np.reshape(v, (N * 5, P // 5)) + + with np.errstate(divide='ignore', invalid='ignore'): + # # Normalize peaks + v = normalize_vectors(v) + + v = np.reshape(v, (N, 5, P // 5)) + # Zero NaNs + v = np.nan_to_num(v) + + # Get last streamline segments + + dirs = np.diff(streamlines, axis=1) + u = dirs[:, -1] + # Normalize segments + with np.errstate(divide='ignore', invalid='ignore'): + u = normalize_vectors(u) + + # Zero NaNs + u = np.nan_to_num(u) + + # Get do product between all peaks and last streamline segments + dot = np.einsum('ijk,ik->ij', v, u) + + dot = np.abs(dot) + + # Get alignment with the most aligned peak + rewards = np.amax(dot, axis=-1) + # rewards = np.abs(dot) + + factors = np.ones((N)) + + # Weight alignment with peaks with alignment to itself + if streamlines.shape[1] >= 3: + # Get previous to last segment + w = dirs[:, -2] + + # # Normalize segments + with np.errstate(divide='ignore', invalid='ignore'): + w = normalize_vectors(w) + + # # Zero NaNs + w = np.nan_to_num(w) + + # Calculate alignment between two segments + np.einsum('ik,ik->i', u, w, out=factors) + + # Penalize angle with last step + rewards *= factors + + return rewards diff --git 
a/TrackToLearn/environments/noisy_tracker.py b/TrackToLearn/environments/noisy_tracker.py deleted file mode 100644 index 2aadf35..0000000 --- a/TrackToLearn/environments/noisy_tracker.py +++ /dev/null @@ -1,222 +0,0 @@ -import nibabel as nib -import numpy as np - -from typing import Tuple - -from TrackToLearn.environments.backward_tracking_env import ( - BackwardTrackingEnvironment) -from TrackToLearn.environments.retracking_env import RetrackingEnvironment -from TrackToLearn.environments.tracking_env import TrackingEnvironment -from TrackToLearn.environments.utils import interpolate_volume_at_coordinates - - -class NoisyTrackingEnvironment(TrackingEnvironment): - - def __init__( - self, - input_volume, - tracking_mask, - target_mask, - seeding_mask, - peaks, - env_dto, - include_mask=None, - exclude_mask=None, - - ): - """ - Parameters - ---------- - env_dto: dict - Dict containing all arguments - """ - - super().__init__( - input_volume, - tracking_mask, - target_mask, - seeding_mask, - peaks, - env_dto, - include_mask, - exclude_mask) - - self.prob = env_dto['prob'] - self.fa_map = None - if env_dto['fa_map']: - self.fa_map = env_dto['fa_map'].data - self.max_action = 1. - - def step( - self, - directions: np.ndarray, - ) -> Tuple[np.ndarray, list, bool, dict]: - """ - Apply actions and grow streamlines for one step forward - Calculate rewards and if the tracking is done, and compute new - hidden states - - Parameters - ---------- - directions: np.ndarray - Actions applied to the state - - Returns - ------- - state: np.ndarray - New state - reward: list - Reward for the last step of the streamline - done: bool - Whether the episode is done - info: dict - """ - - if self.fa_map is not None and self.prob > 0.: - idx = self.streamlines[self.continue_idx, - self.length-1].astype(np.int32) - - # Get peaks at streamline end - fa = interpolate_volume_at_coordinates( - self.fa_map, idx, mode='constant', order=0) - noise = ((1. 
- fa) * self.prob) - else: - noise = np.asarray([self.prob] * len(directions)) - - directions = ( - directions + self.rng.normal(np.zeros((3, 1)), noise).T) - return super().step(directions) - - -class NoisyRetrackingEnvironment(RetrackingEnvironment): - - def __init__( - self, - env, - env_dto, - ): - """ - Parameters - ---------- - env: BaseEnv - Forward env - env_dto: dict - Dict containing all arguments - """ - - super().__init__(env, env_dto) - - self.prob = env_dto['prob'] - self.fa_map = None - if env_dto['fa_map']: - self.fa_map = env_dto['fa_map'].data - self.max_action = 1. - - def step( - self, - directions: np.ndarray, - ) -> Tuple[np.ndarray, list, bool, dict]: - """ - Apply actions and grow streamlines for one step forward - Calculate rewards and if the tracking is done, and compute new - hidden states - - Parameters - ---------- - directions: np.ndarray - Actions applied to the state - - Returns - ------- - state: np.ndarray - New state - reward: list - Reward for the last step of the streamline - done: bool - Whether the episode is done - info: dict - """ - - if self.fa_map is not None and self.prob > 0.: - idx = self.streamlines[self.continue_idx, - self.length-1].astype(np.int32) - - # Get peaks at streamline end - fa = interpolate_volume_at_coordinates( - self.fa_map, idx, mode='constant', order=0) - noise = ((1. - fa) * self.prob) - else: - noise = np.asarray([self.prob] * len(directions)) - - directions = ( - directions + self.rng.normal(np.zeros((3, 1)), noise).T) - return super().step(directions) - - -class BackwardNoisyTrackingEnvironment(BackwardTrackingEnvironment): - - def __init__( - self, - env, - env_dto, - ): - """ - Parameters - ---------- - env: BaseEnv - Forward env - env_dto: dict - Dict containing all arguments - """ - - super().__init__(env, env_dto) - - self.prob = env_dto['prob'] - self.fa_map = None - if env_dto['fa_map']: - self.fa_map = env_dto['fa_map'].data - self.max_action = 1. 
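The noise model shared by these noisy tracking environments — Gaussian perturbation of the predicted directions, with a standard deviation scaled by `(1 - FA) * prob` so that noise shrinks in strongly anisotropic voxels — can be sketched as follows. The function name and the values are illustrative, not the repository's API:

```python
import numpy as np

rng = np.random.default_rng(0)


def add_tracking_noise(directions, fa, prob):
    """Perturb unit direction vectors with Gaussian noise whose standard
    deviation shrinks in high-FA (strongly anisotropic) voxels.

    `directions` is (N, 3); `fa` is (N,) FA values in [0, 1]; `prob` is
    the base noise level.
    """
    sigma = (1.0 - fa) * prob                         # per-streamline std-dev
    noise = rng.normal(0.0, 1.0, directions.shape) * sigma[:, None]
    noisy = directions + noise
    # Re-normalize so the tracking step size stays constant.
    return noisy / np.linalg.norm(noisy, axis=-1, keepdims=True)
```

With `fa` close to 1 the perturbation vanishes and tracking is deterministic; with `fa` near 0 (e.g. in crossing regions) the full `prob`-level noise is applied.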
- - def step( - self, - directions: np.ndarray, - ) -> Tuple[np.ndarray, list, bool, dict]: - """ - Apply actions and grow streamlines for one step forward - Calculate rewards and if the tracking is done. While tracking - has not surpassed half-streamlines, replace the tracking step - taken with the actual streamline segment - - Parameters - ---------- - directions: np.ndarray - Actions applied to the state - - Returns - ------- - state: np.ndarray - New state - reward: list - Reward for the last step of the streamline - done: bool - Whether the episode is done - info: dict - """ - - if self.fa_map is not None and self.prob > 0.: - idx = self.streamlines[:, self.length-1].astype(np.int32) - - # Use affine to map coordinates in mask space - indices_mask = nib.affines.apply_affine( - np.linalg.inv(self.affine_vox2mask), idx).astype(np.int32) - - # Get peaks at streamline end - fa = interpolate_volume_at_coordinates( - self.fa_map, indices_mask, mode='constant', order=0) - noise = ((1. - fa) * self.prob) - else: - noise = np.asarray([self.prob] * len(directions)) - - directions = ( - directions + self.rng.normal(np.zeros((3, 1)), noise).T) - return super().step(directions) diff --git a/TrackToLearn/environments/noisy_tracking_env.py b/TrackToLearn/environments/noisy_tracking_env.py new file mode 100644 index 0000000..da5b028 --- /dev/null +++ b/TrackToLearn/environments/noisy_tracking_env.py @@ -0,0 +1,77 @@ +import numpy as np + +from scipy.ndimage import map_coordinates, spline_filter +from typing import Tuple + +from TrackToLearn.environments.tracking_env import TrackingEnvironment + + +class NoisyTrackingEnvironment(TrackingEnvironment): + + def __init__( + self, + dataset_file: str, + split_id: str, + env_dto: dict, + ): + """ + Parameters + ---------- + dataset_file: str + Path to the dataset file + split_id: str + Split id + env_dto: dict + Dict containing all arguments + """ + + super().__init__(dataset_file, split_id, 
env_dto) + + self.noise = env_dto['noise'] + self.fa_map = None + if env_dto['fa_map']: + self.fa_map = spline_filter(env_dto['fa_map'].data, order=3) + self.max_action = 1. + + def step( + self, + actions: np.ndarray, + ) -> Tuple[np.ndarray, list, bool, dict]: + """ + Apply actions and grow streamlines for one step forward + Calculate rewards and if the tracking is done, and compute new + hidden states + + Parameters + ---------- + actions: np.ndarray + Actions applied to the state + + Returns + ------- + state: np.ndarray + New state + reward: list + Reward for the last step of the streamline + done: bool + Whether the episode is done + info: dict + """ + + directions = actions + + if self.fa_map is not None and self.noise > 0.: + idx = self.streamlines[self.continue_idx, + self.length-1].astype(np.int32) + + # Get FA at streamline end + fa = map_coordinates( + self.fa_map, idx.T - 0.5, prefilter=False) + noise = ((1. - fa) * self.noise) + else: + noise = self.rng.normal(0., self.noise, size=directions.shape) + directions = ( + directions + noise) + return super().step(directions) diff --git a/TrackToLearn/environments/oracle_reward.py b/TrackToLearn/environments/oracle_reward.py new file mode 100644 index 0000000..8f2aa39 --- /dev/null +++ b/TrackToLearn/environments/oracle_reward.py @@ -0,0 +1,93 @@ +import nibabel as nib +import numpy as np + +from dipy.io.stateful_tractogram import StatefulTractogram, Space, Tractogram + +from TrackToLearn.environments.reward import Reward + +from TrackToLearn.oracles.oracle import OracleSingleton + + +class OracleReward(Reward): + + """ Reward streamlines based on the predicted scores of an "Oracle". + A binary reward is given by the oracle at the end of tracking. 
+    """
+
+    def __init__(
+        self,
+        checkpoint: str,
+        min_nb_steps: int,
+        reference: nib.Nifti1Image,
+        affine_vox2rasmm: np.ndarray,
+        device: str
+    ):
+        # Name for stats
+        self.name = 'oracle_reward'
+        # Minimum number of steps before giving reward
+        # Only useful for 'sparse' reward
+        self.min_nb_steps = min_nb_steps
+        # Checkpoint of the oracle, which contains weights and hyperparams.
+        if checkpoint:
+            self.checkpoint = checkpoint
+            # The oracle is declared as a singleton to prevent loading the
+            # weights in memory multiple times.
+            self.model = OracleSingleton(checkpoint, device)
+        else:
+            self.checkpoint = None
+
+        # Reference anat
+        self.reference = reference
+        self.affine_vox2rasmm = affine_vox2rasmm
+
+        self.device = device
+
+    def reward(self, streamlines, dones):
+        """
+        Parameters
+        ----------
+        streamlines : `numpy.ndarray` of shape (n_streamlines, n_points, 3)
+            Streamline coordinates in voxel space
+
+        Returns
+        -------
+        rewards: 1D boolean `numpy.ndarray` of shape (n_streamlines,)
+            Array containing the reward
+        """
+        if not self.checkpoint:
+            return None
+        N = dones.shape[0]
+        reward = np.zeros((N))
+        predictions = self.model.predict(streamlines)
+        # Double indexing to get the indexes. Don't forget you
+        # can't assign using double indexes as the first indexing
+        # will return a copy of the array.
+        idx = np.arange(N)[dones][predictions > 0.5]
+        # Assign the reward using the precomputed double indexes.
+        reward[idx] = 1.0
+        return reward
+
+    def __call__(
+        self,
+        streamlines: np.ndarray,
+        dones: np.ndarray,
+    ):
+
+        N, L, P = streamlines.shape
+        if L > self.min_nb_steps and sum(dones.astype(int)) > 0:
+
+            # Change ref of streamlines. This is weird on the ISMRM2015
+            # dataset as the diff and anat are not in the same space,
+            # but it should be fine on other datasets.
+            tractogram = Tractogram(
+                streamlines=streamlines.copy()[dones])
+            tractogram.apply_affine(self.affine_vox2rasmm)
+            sft = StatefulTractogram(
+                streamlines=tractogram.streamlines,
+                reference=self.reference,
+                space=Space.RASMM)
+            sft.to_vox()
+            sft.to_corner()
+
+            return self.reward(sft.streamlines, dones)
+        return np.zeros((N))
diff --git a/TrackToLearn/environments/retracking_env.py b/TrackToLearn/environments/retracking_env.py
deleted file mode 100644
index 5000452..0000000
--- a/TrackToLearn/environments/retracking_env.py
+++ /dev/null
@@ -1,320 +0,0 @@
-import functools
-import numpy as np
-import torch
-
-from typing import Tuple
-
-from TrackToLearn.datasets.utils import (
-    convert_length_mm2vox,
-)
-
-from TrackToLearn.environments.reward import Reward
-from TrackToLearn.environments.tracking_env import TrackingEnvironment
-
-from TrackToLearn.environments.stopping_criteria import (
-    BinaryStoppingCriterion,
-    CmcStoppingCriterion,
-    StoppingFlags)
-
-from TrackToLearn.environments.utils import (
-    get_neighborhood_directions,
-    is_too_curvy,
-    is_too_long)
-
-from TrackToLearn.utils.utils import normalize_vectors
-
-
-class RetrackingEnvironment(TrackingEnvironment):
-    """ Pre-initialized environment
-    Tracking will start from the end of streamlines for two reasons:
-        - For computational purposes, it's easier if all streamlines have
-          the same length and are harvested as they end
-        - Tracking back the streamline and computing the alignment allows some
-          sort of "self-supervised" learning for tracking backwards
-    """
-    def __init__(self, env: TrackingEnvironment, env_dto: dict):
-
-        # Volumes and masks
-        self.reference = env.reference
-        self.affine_vox2rasmm = env.affine_vox2rasmm
-        self.affine_rasmm2vox = env.affine_rasmm2vox
-
-        self.data_volume = env.data_volume
-        self.tracking_mask = env.tracking_mask
-        self.target_mask = env.target_mask
-        self.include_mask = env.include_mask
-        self.exclude_mask = env.exclude_mask
-        self.peaks = env.peaks
-
-        self.normalize_obs = False  # env_dto['normalize']
-        self.obs_rms = None
-
-        self._state_size = None  # to be calculated later
-
-        # Tracking parameters
-        self.n_signal = env_dto['n_signal']
-        self.n_dirs = env_dto['n_dirs']
-        self.theta = theta = env_dto['theta']
-        self.cmc = env_dto['cmc']
-        self.asymmetric = env_dto['asymmetric']
-
-        step_size_mm = env_dto['step_size']
-        min_length_mm = env_dto['min_length']
-        max_length_mm = env_dto['max_length']
-        add_neighborhood_mm = env_dto['add_neighborhood']
-
-        # Reward parameters
-        self.alignment_weighting = env_dto['alignment_weighting']
-        self.straightness_weighting = env_dto['straightness_weighting']
-        self.length_weighting = env_dto['length_weighting']
-        self.target_bonus_factor = env_dto['target_bonus_factor']
-        self.exclude_penalty_factor = env_dto['exclude_penalty_factor']
-        self.angle_penalty_factor = env_dto['angle_penalty_factor']
-        self.compute_reward = env_dto['compute_reward']
-        self.scoring_data = env_dto['scoring_data']
-
-        self.rng = env_dto['rng']
-        self.device = env_dto['device']
-
-        # Stopping criteria is a dictionary that maps `StoppingFlags`
-        # to functions that indicate whether streamlines should stop or not
-        self.stopping_criteria = {}
-        mask_data = env.tracking_mask.data.astype(np.uint8)
-
-        self.step_size = convert_length_mm2vox(
-            step_size_mm,
-            self.affine_vox2rasmm)
-        self.min_length = min_length_mm
-        self.max_length = max_length_mm
-
-        # Compute maximum length
-        self.max_nb_steps = int(self.max_length / step_size_mm)
-        self.min_nb_steps = int(self.min_length / step_size_mm)
-
-        if self.compute_reward:
-            self.reward_function = Reward(
-                peaks=self.peaks,
-                exclude=self.exclude_mask,
-                target=self.target_mask,
-                max_nb_steps=self.max_nb_steps,
-                theta=self.theta,
-                min_nb_steps=self.min_nb_steps,
-                asymmetric=self.asymmetric,
-                alignment_weighting=self.alignment_weighting,
-                straightness_weighting=self.straightness_weighting,
-                length_weighting=self.length_weighting,
-                target_bonus_factor=self.target_bonus_factor,
-                exclude_penalty_factor=self.exclude_penalty_factor,
-                angle_penalty_factor=self.angle_penalty_factor,
-                scoring_data=self.scoring_data,
-                reference=env.reference)
-
-        self.stopping_criteria[StoppingFlags.STOPPING_LENGTH] = \
-            functools.partial(is_too_long,
-                              max_nb_steps=self.max_nb_steps)
-
-        self.stopping_criteria[
-            StoppingFlags.STOPPING_CURVATURE] = \
-            functools.partial(is_too_curvy, max_theta=theta)
-
-        if self.cmc:
-            cmc_criterion = CmcStoppingCriterion(
-                self.include_mask.data,
-                self.exclude_mask.data,
-                self.affine_vox2rasmm,
-                self.step_size,
-                self.min_nb_steps)
-            self.stopping_criteria[StoppingFlags.STOPPING_MASK] = cmc_criterion
-        else:
-            binary_criterion = BinaryStoppingCriterion(
-                mask_data,
-                0.5)
-            self.stopping_criteria[StoppingFlags.STOPPING_MASK] = \
-                binary_criterion
-
-        # self.stopping_criteria[
-        #     StoppingFlags.STOPPING_LOOP] = \
-        #     functools.partial(is_looping,
-        #                       loop_threshold=300)
-
-        # Convert neighborhood to voxel space
-        self.add_neighborhood_vox = None
-        if add_neighborhood_mm:
-            self.add_neighborhood_vox = convert_length_mm2vox(
-                add_neighborhood_mm,
-                self.affine_vox2rasmm)
-            self.neighborhood_directions = torch.tensor(
-                get_neighborhood_directions(
-                    radius=self.add_neighborhood_vox),
-                dtype=torch.float16).to(self.device)
-
-    @classmethod
-    def from_env(
-        cls,
-        env_dto: dict,
-        env: TrackingEnvironment,
-    ):
-        """ Initialize the environment from a `forward` environment.
-        """
-        return cls(env, env_dto)
-
-    def _is_stopping(
-        self,
-        streamlines: np.ndarray,
-        is_still_initializing: np.ndarray
-    ) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
-        """ Check which streamlines should stop or not according to the
-        predefined stopping criteria. An additional check is performed
-        to prevent stopping if the retracking process is not over.
-
-        Parameters
-        ----------
-        streamlines : `numpy.ndarray` of shape (n_streamlines, n_points, 3)
-            Streamlines that will be checked
-        is_still_initializing: `numpy.ndarray` of shape (n_streamlines)
-            Mask that indicates which streamlines are still being
-            retracked.
-
-        Returns
-        -------
-        stopping: `numpy.ndarray`
-            Mask of stopping streamlines.
-        stopping_flags : `numpy.ndarray`
-            `StoppingFlags` that triggered stopping for each stopping
-            streamline.
-        """
-        stopping, flags = super()._is_stopping(streamlines)
-
-        # Streamlines that haven't finished initializing should keep going
-        stopping[is_still_initializing[self.continue_idx]] = False
-        flags[is_still_initializing[self.continue_idx]] = 0
-
-        return stopping, flags
-
-    def reset(self, half_streamlines: np.ndarray) -> np.ndarray:
-        """ Initialize tracking from half-streamlines.
-
-        Parameters
-        ----------
-        half_streamlines: np.ndarray
-            Half-streamlines to initialize environment
-
-        Returns
-        -------
-        state: numpy.ndarray
-            Initial state for RL model
-        """
-
-        # Half-streamlines
-        self.initial_points = np.array([s[0] for s in half_streamlines])
-
-        # Number of initialization steps for each streamline
-        self.n_init_steps = np.asarray(list(map(len, half_streamlines)))
-
-        N = len(self.n_init_steps)
-
-        # Get the first point of each seed as the start of the new streamlines
-        self.streamlines = np.zeros(
-            (N, self.max_nb_steps, 3),
-            dtype=np.float32)
-
-        for i, (s, l) in enumerate(zip(half_streamlines, self.n_init_steps)):
-            self.streamlines[i, :l, :] = s[::-1]
-
-        self.seeding_streamlines = self.streamlines.copy()
-
-        self.lengths = np.ones(N, dtype=np.int32)
-        self.length = 1
-
-        # Done flags for tracking backwards
-        self.flags = np.zeros(N, dtype=int)
-        self.dones = np.full(N, False)
-        self.continue_idx = np.arange(N)
-
-        # Signal
-        return self._format_state(
-            self.streamlines[self.continue_idx, :self.length])
-
-    def step(
-        self,
-        directions: np.ndarray,
-    ) -> Tuple[np.ndarray, list, bool, dict]:
-        """
-        Apply actions and grow streamlines for one step forward
-        Calculate rewards and if the tracking is done. While tracking
-        has not surpassed half-streamlines, replace the tracking step
-        taken with the actual streamline segment.
-
-        Parameters
-        ----------
-        directions: np.ndarray
-            Actions applied to the state
-
-        Returns
-        -------
-        state: np.ndarray
-            New state
-        reward: list
-            Reward for the last step of the streamline
-        done: bool
-            Whether the episode is done
-        info: dict
-        """
-
-        # Scale directions to step size
-        directions = normalize_vectors(directions) * self.step_size
-
-        # Grow streamlines one step forward
-        self.streamlines[self.continue_idx, self.length,
-                         :] = self.streamlines[
-            self.continue_idx, self.length-1, :] + directions
-        self.length += 1
-
-        # Check which streamline are still being retracked
-        is_still_initializing = self.n_init_steps > self.length + 1
-
-        # Get stopping and keeping indexes
-        # self._is_stopping is overridden to take into account retracking
-        stopping, new_flags = self._is_stopping(
-            self.streamlines[self.continue_idx, :self.length],
-            is_still_initializing)
-
-        self.new_continue_idx, self.stopping_idx = (
-            self.continue_idx[~stopping],
-            self.continue_idx[stopping])
-
-        mask_continue = np.in1d(
-            self.continue_idx, self.new_continue_idx, assume_unique=True)
-        diff_stopping_idx = np.arange(
-            len(self.continue_idx))[~mask_continue]
-
-        # Set "done" flags for RL
-        self.dones[self.stopping_idx] = 1
-
-        # Store stopping flags
-        self.flags[
-            self.stopping_idx] = new_flags[diff_stopping_idx]
-
-        # Compute reward
-        reward = np.zeros(self.streamlines.shape[0])
-        if self.compute_reward:
-            # Reward streamline step
-            reward = self.reward_function(
-                self.streamlines[self.continue_idx, :self.length, :],
-                self.dones[self.continue_idx])
-
-        # If a streamline is still being retracked
-        if np.any(is_still_initializing):
-            # Replace the last point of the predicted streamlines with
-            # the seeding streamlines at the same position
-
-            self.streamlines[is_still_initializing, self.length - 1] = \
-                self.seeding_streamlines[is_still_initializing,
-                                         self.length - 1]
-
-        # Return relevant infos
-        return (
-            self._format_state(
                self.streamlines[self.continue_idx, :self.length]),
-            reward, self.dones[self.continue_idx],
-            {'continue_idx': self.continue_idx})
diff --git a/TrackToLearn/environments/reward.py b/TrackToLearn/environments/reward.py
index 9efeab5..ac34add 100644
--- a/TrackToLearn/environments/reward.py
+++ b/TrackToLearn/environments/reward.py
@@ -1,500 +1,86 @@
-import nibabel as nib
 import numpy as np
-import functools
-import os
-
-from challenge_scoring.utils.attributes import load_attribs
-from dipy.io.stateful_tractogram import Space, StatefulTractogram
-from nibabel.streamlines import Tractogram
-
-from TrackToLearn.environments.score import (
-    score_tractogram as score, _prepare_gt_bundles_info)
-from TrackToLearn.environments.utils import (
-    interpolate_volume_at_coordinates,
-    is_inside_mask,
-    is_too_curvy)
-from TrackToLearn.datasets.utils import (
-    MRIDataVolume)
-from TrackToLearn.utils.utils import (
-    normalize_vectors)
 
 
 class Reward(object):
 
-    def __init__(
+    """ Abstract function that all "rewards" must implement.
+    """
+
+    def __call__(
         self,
-        peaks: MRIDataVolume = None,
-        exclude: MRIDataVolume = None,
-        target: MRIDataVolume = None,
-        max_nb_steps: float = 200,
-        theta: float = 60,
-        min_nb_steps: float = 10,
-        asymmetric: bool = False,
-        alignment_weighting: float = 1.0,
-        straightness_weighting: float = 0.0,
-        length_weighting: float = 0.0,
-        target_bonus_factor: float = 0.0,
-        exclude_penalty_factor: float = 0.0,
-        angle_penalty_factor: float = 0.0,
-        scoring_data: str = None,
-        reference: str = None
+        streamlines: np.ndarray,
+        dones: np.ndarray
     ):
-        """
-        peaks: `MRIDataVolume`
-            Volume containing the fODFs peaks
-        target_mask: MRIDataVolume
-            Mask representing the tracking endpoints
-        exclude_mask: MRIDataVolume
-            Mask representing the tracking no-go zones
-        max_len: `float`
-            Maximum lengths for the streamlines (in terms of points)
-        theta: `float`
-            Maximum degrees between two streamline segments
-        alignment_weighting: `float`
-            Coefficient for how much reward to give to the alignment
-            with peaks
-        straightness_weighting: `float`
-            Coefficient for how much reward to give to the alignment
-            with the last streamline segment
-        length_weighting: `float`
-            Coefficient for how much to reward the streamline for being
-            long
-        target_bonus_factor: `float`
-            Bonus for streamlines reaching the target mask
-        exclude_penalty_factor: `float`
-            Penalty for streamlines reaching the exclusion mask
-        angle_penalty_factor: `float`
-            Penalty for looping or too-curvy streamlines
-        """
+        self.name = "Undefined"
 
-        print('Initializing reward with factors')
-        print({'alignment': alignment_weighting,
-               'straightness': straightness_weighting,
-               'length': length_weighting,
-               'target': target_bonus_factor,
-               'exclude_penalty_factor': exclude_penalty_factor,
-               'angle_penalty_factor': angle_penalty_factor})
+        assert False, "Not implemented"
 
-        self.peaks = peaks
-        self.exclude = exclude
-        self.target = target
-        self.max_nb_steps = max_nb_steps
-        self.theta = theta
-        self.min_nb_steps = min_nb_steps
-        self.asymmetric = asymmetric
-        self.alignment_weighting = alignment_weighting
-        self.straightness_weighting = straightness_weighting
-        self.length_weighting = length_weighting
-        self.target_bonus_factor = target_bonus_factor
-        self.exclude_penalty_factor = exclude_penalty_factor
-        self.angle_penalty_factor = angle_penalty_factor
-        self.scoring_data = scoring_data
-        self.reference = reference
+    def reset(self):
+        """ Most reward factors do not need to be reset.
+        """
+        pass
 
-        # if self.scoring_data:
-        #     print('WARNING: Rewarding from the Tractometer is not currently '
-        #           'officially supported and may not work. If you do want to '
-        #           'improve Track-to-Learn and make it work, I can happily '
-        #           'help !')
-        #     gt_bundles_attribs_path = os.path.join(
-        #         self.scoring_data,
-        #         'gt_bundles_attributes.json')
 
+class RewardFunction():
 
-        #     basic_bundles_attribs = load_attribs(gt_bundles_attribs_path)
+    """ Compute the reward function as the sum of its weighted factors.
+    Each factor may reward streamlines "densely" (i.e. at every step) or
+    "sparsely" (i.e. once per streamline).
 
-        #     # Prepare needed scoring data
-        #     masks_dir = os.path.join(self.scoring_data, "masks")
-        #     rois_dir = os.path.join(masks_dir, "rois")
-        #     bundles_dir = os.path.join(self.scoring_data, "bundles")
-        #     bundles_masks_dir = os.path.join(masks_dir, "bundles")
-        #     ref_anat_fname = os.path.join(masks_dir, "wm.nii.gz")
+    """
 
-        #     ROIs = [nib.load(os.path.join(rois_dir, f))
-        #             for f in sorted(os.listdir(rois_dir))]
+    def __init__(
+        self,
+        factors,
+        weights,
+    ):
+        """
+        """
+        assert len(factors) == len(weights)
 
-        #     # Get the dict with 'name', 'threshold', 'streamlines',
-        #     # 'cluster_map' and 'mask' for each bundle.
-        #     ref_bundles = _prepare_gt_bundles_info(bundles_dir,
-        #                                            bundles_masks_dir,
-        #                                            basic_bundles_attribs,
-        #                                            ref_anat_fname)
+        self.factors = factors
+        self.weights = weights
 
-        #     self.scoring_function = functools.partial(
-        #         score,
-        #         ref_bundles=ref_bundles,
-        #         ROIs=ROIs,
-        #         compute_ic_ib=False)
+        self.F = len(self.factors)
 
     def __call__(self, streamlines, dones):
         """
-        Compute rewards for the last step of the streamlines
         Each reward component is weighted according to a
-        coefficient
+        coefficient and then summed.
 
         Parameters
         ----------
         streamlines : `numpy.ndarray` of shape (n_streamlines, n_points, 3)
             Streamline coordinates in voxel space
+        dones: `numpy.ndarray` of shape (n_streamlines)
+            Whether tracking is over for each streamline or not.
 
         Returns
         -------
-        rewards: `float`
+        rewards: np.ndarray of floats
             Reward components weighted by their
             coefficients as well as the penalties
         """
         N = len(streamlines)
 
-        length = reward_length(streamlines, self.max_nb_steps) \
-            if self.length_weighting > 0. else np.zeros((N), dtype=np.uint8)
-        alignment = reward_alignment_with_peaks(
-            streamlines, self.peaks.data, self.asymmetric) \
-            if self.alignment_weighting > 0 else np.zeros((N), dtype=np.uint8)
-        straightness = reward_straightness(streamlines) \
-            if self.straightness_weighting > 0 else \
-            np.zeros((N), dtype=np.uint8)
-
-        weights = np.asarray([
-            self.alignment_weighting, self.straightness_weighting,
-            self.length_weighting])
-        params = np.stack((alignment, straightness, length))
-        rewards = np.dot(params.T, weights)
+        rewards_factors = np.zeros((self.F, N))
 
-        # Penalize sharp turns
-        if self.angle_penalty_factor > 0.:
-            rewards += penalize_sharp_turns(
-                streamlines, self.theta, self.angle_penalty_factor)
+        for i, (w, f) in enumerate(zip(self.weights, self.factors)):
+            if w > 0:
+                rewards_factors[i] = w * f(streamlines, dones)
 
-        # Penalize streamlines ending in exclusion mask
-        if self.exclude_penalty_factor > 0.:
-            rewards += penalize_exclude(
-                streamlines,
-                self.exclude.data,
-                self.exclude_penalty_factor)
+        info = {}
+        for i, f in enumerate(self.factors):
+            info[f.name] = np.mean(rewards_factors[i])
 
-        # Reward streamlines ending in target mask
-        if self.target_bonus_factor > 0.:
-            rewards += self.reward_target(
-                streamlines,
-                dones)
+        reward = np.sum(rewards_factors, axis=0)
 
-        return rewards
+        return reward, info
 
-    def reward_target(
-        self,
-        streamlines: np.ndarray,
-        dones: np.ndarray,
-    ):
-        """ Reward streamlines if they end up in the GM
-
-        Parameters
-        ----------
-        streamlines : `numpy.ndarray` of shape (n_streamlines, n_points, 3)
-            Streamline coordinates in voxel space
-        target: np.ndarray
-            Grey matter mask
-        penalty_factor: `float`
-            Penalty for streamlines ending in target mask
-            Should be >= 0
-
-        Returns
-        -------
-        rewards: 1D boolean `numpy.ndarray` of shape (n_streamlines,)
-            Array containing the reward
+    def reset(self):
         """
-        target_streamlines = is_inside_mask(
-            streamlines, self.target.data, 0.5
-        ) * self.target_bonus_factor
-
-        reward = target_streamlines * dones * int(
-            streamlines.shape[1] > self.min_nb_steps)
-
-        return reward
-
-    def reward_tractometer(
-        self,
-        streamlines: np.ndarray,
-        dones: np.ndarray,
-    ):
-        """ Reward streamlines if the Tractometer marks them as valid.
-
-        **WARNING**: This function is not supported and may not work. I
-        wrote it as part of some experimentation and I forgot to remove it
-        when releasing the code. Let me know if you want help making this
-        work.
-
-        Parameters
-        ----------
-        streamlines : `numpy.ndarray` of shape (n_streamlines, n_points, 3)
-            Streamline coordinates in voxel space
-        target: np.ndarray
-            Grey matter mask
-        penalty_factor: `float`
-            Penalty for streamlines ending in target mask
-            Should be >= 0
-
-        Returns
-        -------
-        rewards: 1D boolean `numpy.ndarray` of shape (n_streamlines,)
-            Array containing the reward
-        """
-        # Get boolean array of streamlines ending in mask * penalty
-        if streamlines.shape[1] >= self.min_nb_steps and np.any(dones):
-            # Should the SFT be moved to RASMM space for scoring ? To corner
-            # or to center ?
-            sft = StatefulTractogram(streamlines, self.reference, Space.VOX)
-            to_score = np.arange(len(sft))[dones]
-            sub_sft = sft[to_score]
-            VC, IC, NC = self.scoring_function(sub_sft)
-
-            # The factors for positively and negatively rewarding streamlines
-            # as well as which to apply positive, negative or no reward is
-            # open for improvements. I have not thoroughly tested anything.
-
-            reward = np.zeros((streamlines.shape[0]))
-            if len(VC) > 0:
-                reward[to_score[VC]] += self.target_bonus_factor
-                # Display which streamlines are positively rewarded
-                # self.render(self.peaks, streamlines[to_score[VC]],
-                #             reward[to_score[VC]])
-            if len(IC) > 0:
-                reward[to_score[IC]] -= self.target_bonus_factor
-            if len(NC) > 0:
-                reward[to_score[NC]] -= self.target_bonus_factor
-        else:
-            reward = np.zeros((streamlines.shape[0]))
-        return reward
-
-    def render(
-        self,
-        peaks,
-        streamlines,
-        rewards
-    ):
-        """ Debug function
-
-        Parameters:
-        -----------
-        tractogram: Tractogram, optional
-            Object containing the streamlines and seeds
-        path: str, optional
-            If set, save the image at the specified location instead
-            of displaying directly
-        """
-        from fury import window, actor
-        # Might be rendering from outside the environment
-        tractogram = Tractogram(
-            streamlines=streamlines,
-            data_per_streamline={
-                'seeds': streamlines[:, 0, :]
-            })
-
-        # Reshape peaks for displaying
-        X, Y, Z, M = peaks.data.shape
-        peaks = np.reshape(peaks.data, (X, Y, Z, 5, M//5))
-
-        # Setup scene and actors
-        scene = window.Scene()
-
-        stream_actor = actor.streamtube(tractogram.streamlines, rewards)
-        peak_actor = actor.peak_slicer(peaks,
-                                       np.ones((X, Y, Z, M)),
-                                       colors=(0.2, 0.2, 1.),
-                                       opacity=0.5)
-        mask_actor = actor.contour_from_roi(
-            self.target.data)
-
-        dot_actor = actor.dots(tractogram.data_per_streamline['seeds'],
-                               color=(1, 1, 1),
-                               opacity=1,
-                               dot_size=2.5)
-        scene.add(stream_actor)
-        scene.add(peak_actor)
-        scene.add(dot_actor)
-        scene.add(mask_actor)
-        scene.reset_camera_tight(0.95)
-
-        showm = window.ShowManager(scene, reset_camera=True)
-        showm.initialize()
-        showm.start()
-
-
-def penalize_exclude(streamlines, exclude, penalty_factor):
-    """ Penalize streamlines if they loop
-
-    Parameters
-    ----------
-    streamlines : `numpy.ndarray` of shape (n_streamlines, n_points, 3)
-        Streamline coordinates in voxel space
-    exclude: np.ndarray
-        CSF matter mask
-    penalty_factor: `float`
-        Penalty for streamlines ending in exclusion mask
-        Should be <= 0
-
-    Returns
-    -------
-    rewards: 1D boolean `numpy.ndarray` of shape (n_streamlines,)
-        Array containing the reward
-    """
-    return \
-        is_inside_mask(
-            streamlines, exclude, 0.5) * -penalty_factor
-
-
-def penalize_sharp_turns(streamlines, theta, penalty_factor):
-    """ Penalize streamlines if they curve too much
-
-    Parameters
-    ----------
-    streamlines : `numpy.ndarray` of shape (n_streamlines, n_points, 3)
-        Streamline coordinates in voxel space
-    theta: `float`
-        Maximum angle between streamline steps
-    penalty_factor: `float`
-        Penalty for looping or too-curvy streamlines
-
-    Returns
-    -------
-    rewards: 1D boolean `numpy.ndarray` of shape (n_streamlines,)
-        Array containing the reward
-    """
-    return is_too_curvy(streamlines, theta) * -penalty_factor
-
-
-def reward_length(streamlines, max_length):
-    """ Reward streamlines according to their length w.r.t the maximum length
-
-    Parameters
-    ----------
-    streamlines : `numpy.ndarray` of shape (n_streamlines, n_points, 3)
-        Streamline coordinates in voxel space
-
-    Returns
-    -------
-    rewards: 1D boolean `numpy.ndarray` of shape (n_streamlines,)
-        Array containing the reward
-    """
-    N, S, _ = streamlines.shape
-
-    rewards = np.asarray([S] * N) / max_length
-
-    return rewards
-
-
-def reward_alignment_with_peaks(
-    streamlines, peaks, asymmetric
-):
-    """ Reward streamlines according to the alignment to their corresponding
-    fODFs peaks
-
-    Parameters
-    ----------
-    streamlines : `numpy.ndarray` of shape (n_streamlines, n_points, 3)
-        Streamline coordinates in voxel space
-
-    Returns
-    -------
-    rewards: 1D boolean `numpy.ndarray` of shape (n_streamlines,)
-        Array containing the reward
-    """
-    N, L, _ = streamlines.shape
-
-    if streamlines.shape[1] < 2:
-        # Not enough segments to compute curvature
-        return np.ones(len(streamlines), dtype=np.uint8)
-
-    X, Y, Z, P = peaks.shape
-    idx = streamlines[:, -2].astype(np.int32)
-
-    # Get peaks at streamline end
-    v = interpolate_volume_at_coordinates(
-        peaks, idx, mode='nearest', order=0)
-
-    # Presume 5 peaks (per hemisphere if asymmetric)
-    if asymmetric:
-        v = np.reshape(v, (N, 5 * 2, P // (5 * 2)))
-    else:
-        v = np.reshape(v, (N, 5, P // 5))
-
-    with np.errstate(divide='ignore', invalid='ignore'):
-        # Normalize peaks
-        v = normalize_vectors(v)
-
-    # Zero NaNs
-    v = np.nan_to_num(v)
-
-    # Get last streamline segments
-
-    dirs = np.diff(streamlines, axis=1)
-    u = dirs[:, -1]
-    # Normalize segments
-    with np.errstate(divide='ignore', invalid='ignore'):
-        u = normalize_vectors(u)
-
-    # Zero NaNs
-    u = np.nan_to_num(u)
-
-    # Get dot product between all peaks and last streamline segments
-    dot = np.einsum('ijk,ik->ij', v, u)
-
-    if not asymmetric:
-        dot = np.abs(dot)
-
-    # Get alignment with the most aligned peak
-    rewards = np.amax(dot, axis=-1)
-    # rewards = np.abs(dot)
-
-    factors = np.ones((N))
-
-    # Weight alignment with peaks with alignment to itself
-    if streamlines.shape[1] >= 3:
-        # Get previous to last segment
-        w = dirs[:, -2]
-
-        # Normalize segments
-        with np.errstate(divide='ignore', invalid='ignore'):
-            w = normalize_vectors(w)
-
-        # Zero NaNs
-        w = np.nan_to_num(w)
-
-        # Calculate alignment between two segments
-        np.einsum('ik,ik->i', u, w, out=factors)
-
-    # Penalize angle with last step
-    rewards *= factors
-
-    return rewards
-
-
-def reward_straightness(streamlines):
-    """ Reward streamlines according to their sinuosity
-
-    Distance between start and end of streamline / length
-
-    A perfectly straight line has 1.
-    A circle would have 0.
-
-    Parameters
-    ----------
-    streamlines : `numpy.ndarray` of shape (n_streamlines, n_points, 3)
-        Streamline coordinates in voxel space
-
-    Returns
-    -------
-    rewards: 1D boolean `numpy.ndarray` of shape (n_streamlines,)
-        Array containing the angle between the last two segments
-    """
+        """
+        for f in self.factors:
+            f.reset()
-
-    N, S, _ = streamlines.shape
-
-    start = streamlines[:, 0]
-    end = streamlines[:, -1]
-
-    step_size = 1.
-    reward = np.linalg.norm(end - start, axis=1) / (S * step_size)
-
-    return np.clip(reward + 0.5, 0, 1)
diff --git a/TrackToLearn/environments/score.py b/TrackToLearn/environments/score.py
deleted file mode 100644
index 0333f0c..0000000
--- a/TrackToLearn/environments/score.py
+++ /dev/null
@@ -1,176 +0,0 @@
-#!/usr/bin/env python
-# -*- coding: utf-8 -*-
-
-from __future__ import division
-
-import os
-
-import nibabel as nib
-import numpy as np
-
-from dipy.io.streamline import load_tractogram
-from dipy.tracking.streamline import set_number_of_points
-from dipy.segment.clustering import QuickBundles
-from dipy.segment.metric import AveragePointwiseEuclideanMetric
-from dipy.tracking.metrics import length as slength
-
-from challenge_scoring import NB_POINTS_RESAMPLE
-from challenge_scoring.metrics.invalid_connections import group_and_assign_ibs
-from challenge_scoring.metrics.valid_connections import auto_extract_VCs
-
-
-def _prepare_gt_bundles_info(bundles_dir, bundles_masks_dir,
-                             gt_bundles_attribs, ref_anat_fname):
-    """
-    Returns
-    -------
-    ref_bundles: list[dict]
-        Each dict will contain {'name': 'name_of_the_bundle',
-                                'threshold': thres_value,
-                                'streamlines': list_of_streamlines},
-                                'cluster_map': the qb cluster map,
-                                'mask': the loaded bundle mask (nifti).}
-    """
-    qb = QuickBundles(20, metric=AveragePointwiseEuclideanMetric())
-
-    ref_bundles = []
-
-    for bundle_idx, bundle_f in enumerate(sorted(os.listdir(bundles_dir))):
-        bundle_name = os.path.splitext(os.path.basename(bundle_f))[0]
-
-        bundle_attribs = \
-            gt_bundles_attribs.get(os.path.basename(bundle_f))
-        if bundle_attribs is None:
-            raise ValueError(
-                "Missing basic bundle attribs for {0}".format(bundle_f))
-
-        orig_sft = load_tractogram(
-            os.path.join(bundles_dir, bundle_f), ref_anat_fname,
-            bbox_valid_check=False, trk_header_check=False)
-        orig_sft.to_vox()
-        orig_sft.to_center()
-
-        # Already resample to avoid doing it for each iteration of chunking
-        orig_strl = orig_sft.streamlines
-
-        resamp_bundle = set_number_of_points(orig_strl, NB_POINTS_RESAMPLE)
-        resamp_bundle = [s.astype(np.float32) for s in resamp_bundle]
-
-        bundle_cluster_map = qb.cluster(resamp_bundle)
-        bundle_cluster_map.refdata = resamp_bundle
-
-        bundle_mask = nib.load(os.path.join(bundles_masks_dir,
-                                            bundle_name + '.nii.gz'))
-
-        ref_bundles.append({'name': bundle_name,
-                            'threshold': bundle_attribs['cluster_threshold'],
-                            'cluster_map': bundle_cluster_map,
-                            'mask': bundle_mask})
-
-    return ref_bundles
-
-
-def score_tractogram(sft,
-                     ref_bundles=None,
-                     ROIs=None,
-                     compute_ic_ib: bool = False):
-    """
-    Score a submission, using the following algorithm:
-        1: extract all streamlines that are valid, which are classified as
-           Valid Connections (VC) making up Valid Bundles (VB).
-        2: remove streamlines shorter than a threshold based on the GT dataset
-        3: cluster the remaining streamlines
-        4: remove singletons
-        5: assign each cluster to the closest ROIs pair. Those make up the
-           Invalid Connections (IC), grouped as Invalid Bundles (IB).
-        6: streamlines that are neither in VC nor IC are classified as
-           No Connection (NC).
-
-
-    Parameters
-    ------------
-    streamlines_fname : string
-        path to the file containing the streamlines.
-    base_data_dir : string
-        path to the directory containing the scoring data.
-    basic_bundles_attribs : dictionary
-        contains the attributes of the basic bundles
-        (name, list of streamlines, segmentation threshold)
-    save_full_vc : bool
-        indicates if the full set of VC will be saved in an individual file.
-    save_full_ic : bool
-        indicates if the full set of IC will be saved in an individual file.
-    save_full_nc : bool
-        indicates if the full set of NC will be saved in an individual file.
-    compute_ic_ib:
-        segment IC results into IB.
-    save_IBs : bool
-        indicates if the invalid bundles will be saved in individual file for
-        each IB.
-    save_VBs : bool
-        indicates if the valid bundles will be saved in individual file for
-        each VB.
-    segmented_out_dir : string
-        the path to the directory where segmented files will be saved.
-    segmented_base_name : string
-        the base name to use for saving segmented files.
-    out_tract_type: str
-        extension for the output tractograms.
-    verbose : bool
-        indicates if the algorithm needs to be verbose when logging messages.
-
-    Returns
-    ---------
-    scores : dict
-        dictionary containing a score for each metric
-    """
-
-    sft.to_vox()
-    sft.to_center()
-    total_strl_count = len(sft.streamlines)
-
-    # Extract VCs and VBs, compute OL, OR, f1 for each.
-    VC_indices, found_vbs_info = auto_extract_VCs(sft, ref_bundles)
-    VC = np.asarray(VC_indices, dtype=np.int32)
-
-    candidate_ic_strl_indices = np.setdiff1d(range(total_strl_count),
-                                             VC_indices)
-    if compute_ic_ib:
-
-        candidate_ic_indices = []
-        rejected_indices = []
-
-        # Chosen from GT dataset
-        length_thres = 35.
-
-        # Filter streamlines that are too short, consider them as NC
-        for idx in candidate_ic_strl_indices:
-            if slength(sft.streamlines[idx]) >= length_thres:
-                candidate_ic_indices.append(idx)
-            else:
-                rejected_indices.append(idx)
-
-        ic_counts = 0
-        nb_ib = 0
-
-        if len(candidate_ic_indices):
-            additional_rejected_indices, ic_counts, nb_ib = \
-                group_and_assign_ibs(sft, candidate_ic_indices, ROIs,
-                                     False, False, '',
-                                     '', '',
-                                     'tck')
-
-            rejected_indices.extend(additional_rejected_indices)
-
-        if ic_counts != len(candidate_ic_strl_indices) - len(rejected_indices):
-            raise ValueError("Some streamlines were not correctly assigned to "
-                             "NC")
-
-        IC = candidate_ic_strl_indices
-    else:
-        IC = []
-        rejected_indices = candidate_ic_strl_indices
-
-    # Converting np.float to floats for json dumps
-    NC = rejected_indices
-
-    return VC, IC, NC
diff --git a/TrackToLearn/environments/stopping_criteria.py b/TrackToLearn/environments/stopping_criteria.py
index 07f1470..d34636d 100644
--- a/TrackToLearn/environments/stopping_criteria.py
+++ b/TrackToLearn/environments/stopping_criteria.py
@@ -1,11 +1,12 @@
-import numpy as np
-
 from enum import Enum
 
-from TrackToLearn.environments.utils import interpolate_volume_at_coordinates
+import numpy as np
+from dipy.io.stateful_tractogram import Space, StatefulTractogram, Tractogram
+from scipy.ndimage import map_coordinates, spline_filter
+
+from TrackToLearn.oracles.oracle import OracleSingleton
 
 
-# Flags enum
 class StoppingFlags(Enum):
     """ Predefined stopping flags to use when checking which streamlines
     should stop
@@ -15,6 +16,8 @@ class StoppingFlags(Enum):
     STOPPING_CURVATURE = int('00000100', 2)
     STOPPING_TARGET = int('00001000', 2)
     STOPPING_LOOP = int('00010000', 2)
+    STOPPING_ANGULAR_ERROR = int('00100000', 2)
+    STOPPING_ORACLE = int('01000000', 2)
 
 
 def is_flag_set(flags, ref_flag):
@@ -52,7 +55,8 @@ def __init__(
         Voxels with a value higher or equal than this threshold are
         considered as part of the interior of the mask.
""" - self.mask = mask + self.mask = spline_filter( + np.ascontiguousarray(mask, dtype=float), order=3) self.threshold = threshold def __call__( @@ -72,54 +76,39 @@ def __call__( Array telling whether a streamline's last coordinate is outside the mask or not. """ + coords = streamlines[:, -1, :].T - 0.5 + return map_coordinates( + self.mask, coords, prefilter=False + ) < self.threshold + + +class OracleStoppingCriterion(object): + """ + Defines if a streamline should stop according to the oracle. - # Get last streamlines coordinates - return interpolate_volume_at_coordinates( - self.mask, streamlines[:, -1, :], mode='constant', - order=0) < self.threshold - - -class CmcStoppingCriterion(object): - """ Checks which streamlines should stop according to Continuous map - criteria. - Ref: - Girard, G., Whittingstall, K., Deriche, R., & Descoteaux, M. (2014) - Towards quantitative connectivity analysis: reducing tractography - biases. - Neuroimage, 98, 266-278. - - This is only in the partial-spirit of CMC. A good improvement (#TODO) - would be to include or exclude streamlines from the resulting - tractogram as well. Let me know if you need help in adding this - functionnality. """ def __init__( self, - include_mask: np.ndarray, - exclude_mask: np.ndarray, - affine: np.ndarray, - step_size: float, + checkpoint: str, min_nb_steps: int, + reference: str, + affine_vox2rasmm: np.ndarray, + device: str ): - """ - Parameters - ---------- - mask : 3D `numpy.ndarray` - 3D image defining a stopping mask. The interior of the mask is - defined by values higher or equal than `threshold` . - affine_vox2rasmm: `numpy.ndarray` with shape (4,4) (optional) - Tranformation that aligns brings streamlines to rasmm from vox. - threshold : float - Voxels with a value higher or equal than this threshold are - considered as part of the interior of the mask. 
- """ - self.include_mask = include_mask - self.exclude_mask = exclude_mask - self.affine = affine - vox_size = np.mean(np.abs(np.diag(affine)[:3])) - self.correction_factor = step_size / vox_size + + self.name = 'oracle_reward' + + if checkpoint: + self.checkpoint = checkpoint + self.model = OracleSingleton(checkpoint, device) + else: + self.checkpoint = None + + self.affine_vox2rasmm = affine_vox2rasmm + self.reference = reference self.min_nb_steps = min_nb_steps + self.device = device def __call__( self, @@ -130,40 +119,36 @@ def __call__( ---------- streamlines : `numpy.ndarray` of shape (n_streamlines, n_points, 3) Streamline coordinates in voxel space + Returns ------- - outside : 1D boolean `numpy.ndarray` of shape (n_streamlines,) - Array telling whether a streamline's last coordinate is outside the - mask or not. + dones: 1D boolean `numpy.ndarray` of shape (n_streamlines,) + Array indicating if streamlines are done. """ + if not self.checkpoint: + return None - include_result = interpolate_volume_at_coordinates( - self.include_mask, streamlines[:, -1, :], mode='constant', - order=1) - if streamlines.shape[1] < self.min_nb_steps: - include_result[:] = 0. - - exclude_result = interpolate_volume_at_coordinates( - self.exclude_mask, streamlines[:, -1, :], mode='constant', - order=1, cval=1.0) + # Resample streamlines to a fixed number of points. This should be + # set by the model ? TODO? 
+ N, L, P = streamlines.shape + if L > self.min_nb_steps: - # If streamlines are still in 100% WM, don't exit - wm_points = include_result + exclude_result <= 0 + tractogram = Tractogram( + streamlines=streamlines.copy()) - # Compute continue probability - num = np.maximum(0, (1 - include_result - exclude_result)) - den = num + include_result + exclude_result - p = (num / den) ** self.correction_factor + tractogram.apply_affine(self.affine_vox2rasmm) - # p >= continue prob -> not continue - not_continue_points = np.random.random(streamlines.shape[0]) >= p + sft = StatefulTractogram( + streamlines=tractogram.streamlines, + reference=self.reference, + space=Space.RASMM) - # if by some magic some wm point don't continue, make them continue - not_continue_points[wm_points] = False + sft.to_vox() + sft.to_corner() + predictions = self.model.predict(sft.streamlines) - # if the point is in the include map, it has potentially reached GM - p = (include_result / (include_result + exclude_result)) - stop_include = np.random.random(streamlines.shape[0]) < p - not_continue_points[stop_include] = True + scores = np.zeros_like(predictions) + scores[predictions < 0.5] = 1 + return scores.astype(bool) - return not_continue_points + return np.array([False] * N) diff --git a/TrackToLearn/environments/tracking_env.py b/TrackToLearn/environments/tracking_env.py index 08c3b48..7988768 100644 --- a/TrackToLearn/environments/tracking_env.py +++ b/TrackToLearn/environments/tracking_env.py @@ -7,13 +7,14 @@ from TrackToLearn.environments.env import BaseEnv from TrackToLearn.environments.stopping_criteria import ( - is_flag_set, - StoppingFlags) -from TrackToLearn.utils.utils import normalize_vectors + is_flag_set, StoppingFlags) class TrackingEnvironment(BaseEnv): - """ Tracking environment. + """ Tracking environment. This environment is used to track + streamlines using a given model. Like the `BaseEnv`, it is + used as both an environment and a "tracker". 
+ TODO: Clean up "_private functions" and public functions. Some could go into BaseEnv. """ @@ -39,37 +40,16 @@ def _is_stopping( streamline. """ stopping, flags = \ - self._filter_stopping_streamlines( + self._compute_stopping_flags( streamlines, self.stopping_criteria) return stopping, flags - def _keep( - self, - idx: np.ndarray, - state: np.ndarray, - ) -> np.ndarray: - """ Keep only states that correspond to continuing streamlines. - - Parameters - ---------- - idx : `np.ndarray` - Indices of the streamlines/states to keep - state: np.ndarray - Batch of states. - - Returns: - -------- - state: np.ndarray - Continuing states. - """ - state = state[idx] - - return state - def nreset(self, n_seeds: int) -> np.ndarray: """ Initialize tracking seeds and streamlines. Will chose N random seeds among all seeds. + TODO: Uniformize with `reset` function. + Parameters ---------- n_seeds: int @@ -81,6 +61,8 @@ def nreset(self, n_seeds: int) -> np.ndarray: Initial state for RL model """ + super().reset() + # Heuristic to avoid duplicating seeds if fewer seeds than actors. replace = n_seeds > len(self.seeds) seeds = np.random.choice( @@ -100,15 +82,18 @@ def nreset(self, n_seeds: int) -> np.ndarray: # Initialize rewards and done flags self.dones = np.full(n_seeds, False) self.continue_idx = np.arange(n_seeds) + self.state = self._format_state( + self.streamlines[self.continue_idx, :self.length]) # Setup input signal - return self._format_state( - self.streamlines[self.continue_idx, :self.length]) + return self.state[self.continue_idx] def reset(self, start: int, end: int) -> np.ndarray: """ Initialize tracking seeds and streamlines. Will select a given batch of seeds. + TODO: Uniformize with `nreset` function. 
+ Parameters ---------- start: int @@ -121,6 +106,9 @@ def reset(self, start: int, end: int) -> np.ndarray: state: numpy.ndarray Initial state for RL model """ + + super().reset() + # Initialize seeds as streamlines self.initial_points = self.seeds[start:end] N = self.initial_points.shape[0] @@ -138,21 +126,25 @@ def reset(self, start: int, end: int) -> np.ndarray: self.dones = np.full(N, False) self.continue_idx = np.arange(N) - # Setup input signal - return self._format_state( + self.state = self._format_state( self.streamlines[self.continue_idx, :self.length]) + # Setup input signal + return self.state[self.continue_idx] + def step( self, - directions: np.ndarray, + actions: np.ndarray, ) -> Tuple[np.ndarray, list, bool, dict]: """ Apply actions, rescale actions to step size and grow streamlines for one step forward. Calculate rewards and stop streamlines. + TODO: Split into smaller functions. + Parameters ---------- - directions: np.ndarray + actions: np.ndarray Actions applied to the state Returns @@ -166,58 +158,73 @@ def step( info: dict """ - # Scale directions to step size - directions = normalize_vectors(directions) * self.step_size + directions = self._format_actions(actions) + + # If the streamline goes out the tracking mask at the first + # step, flip it + if self.length == 1: + # Grow streamlines one step forward + streamlines = np.array(self.streamlines[self.continue_idx]) + streamlines[:, self.length, :] = \ + self.streamlines[self.continue_idx, + self.length-1, :] + directions + + # Get stopping and keeping indexes + stopping, flags = \ + self._is_stopping( + streamlines[:, :self.length + 1]) + + # Flip stopping trajectories + directions[stopping] *= -1 # Grow streamlines one step forward self.streamlines[self.continue_idx, self.length, :] = \ self.streamlines[self.continue_idx, self.length-1, :] + directions self.length += 1 - # Get stopping and keeping indexes + # Get stopping and keeping indexes. 
        stopping, new_flags = \
            self._is_stopping(
                self.streamlines[self.continue_idx, :self.length])

+        # See which trajectory is stopping or continuing.
+        # TODO: investigate the use of `not_stopping`.
+        self.not_stopping = np.logical_not(stopping)
         self.new_continue_idx, self.stopping_idx = \
             (self.continue_idx[~stopping],
              self.continue_idx[stopping])

-        mask_continue = np.in1d(
-            self.continue_idx, self.new_continue_idx, assume_unique=True)
-        diff_stopping_idx = np.arange(
-            len(self.continue_idx))[~mask_continue]
-
+        # Keep the reason why tracking stopped
         self.flags[
-            self.stopping_idx] = new_flags[diff_stopping_idx]
+            self.stopping_idx] = new_flags[stopping]

+        # Keep which trajectory is over
         self.dones[self.stopping_idx] = 1

         reward = np.zeros(self.streamlines.shape[0])
+        reward_info = {}
         # Compute reward if wanted. At valid time, no need
         # to compute it and slow down the tracking process
         if self.compute_reward:
-            reward = self.reward_function(
+            reward, reward_info = self.reward_function(
                 self.streamlines[self.continue_idx, :self.length],
                 self.dones[self.continue_idx])

+        # Compute the state
+        self.state[self.continue_idx] = self._format_state(
+            self.streamlines[self.continue_idx, :self.length])
+
         return (
-            self._format_state(
-                self.streamlines[self.continue_idx, :self.length]),
+            self.state[self.continue_idx],
             reward,
             self.dones[self.continue_idx],
-            {'continue_idx': self.continue_idx})
+            {'continue_idx': self.continue_idx,
+             'reward_info': reward_info})

     def harvest(
         self,
-        states: np.ndarray,
     ) -> Tuple[StatefulTractogram, np.ndarray]:
-        """Internally keep only the streamlines and corresponding env. states
-        that haven't stopped yet, and return the states that continue.
-
-        Parameters
-        ----------
-        states: torch.Tensor
-            States before "pruning" or "harvesting".
+        """Internally keep track of which trajectories are still going
+        and which aren't. Return the states accordingly.
Returns ------- @@ -229,46 +236,52 @@ def harvest( # Register the length of the streamlines that have stopped. self.lengths[self.stopping_idx] = self.length - - mask_continue = np.in1d( - self.continue_idx, self.new_continue_idx, assume_unique=True) - diff_continue_idx = np.arange( - len(self.continue_idx))[mask_continue] + # Set new "continue idx" based on the old idxes. This is to keep + # the idxes "global". self.continue_idx = self.new_continue_idx + # Return the state corresponding to streamlines that are actually + # still being tracked. + # TODO: investigate why `not_stopping` is returned. + return self.state[self.continue_idx], self.not_stopping - # Keep only streamlines that should continue - states = self._keep( - diff_continue_idx, - states) - - return states, diff_continue_idx + def get_streamlines(self): + """ Obtain tracked streamlines from the environment. + The last point will be removed if it raised a curvature or mask + stopping criterion. - def get_streamlines(self) -> StatefulTractogram: - """ Obtain tracked streamlines fromm the environment. - The last point will be removed if it raised a curvature stopping - criteria (i.e. the angle was too high). Otherwise, other last points - are kept. - - TODO: remove them also ? + Parameters + ---------- Returns ------- tractogram: Tractogram - Tracked streamlines. + Tracked streamlines in voxel space. """ - - tractogram = Tractogram() # Harvest stopped streamlines and associated data # stopped_seeds = self.first_points[self.stopping_idx] - # Exclude last point as it triggered a stopping criteria. stopped_streamlines = [self.streamlines[i, :self.lengths[i], :] for i in range(len(self.streamlines))] - flags = is_flag_set( + # If the last point triggered a stopping criterion based on + # angle, remove it so as not to produce ugly kinked streamlines. 
+        curvature_flags = is_flag_set(
             self.flags, StoppingFlags.STOPPING_CURVATURE)
+
+        # Reduce overreach by removing the last point if it triggered
+        # a mask-based stopping criterion.
+        mask_flags = is_flag_set(
+            self.flags, StoppingFlags.STOPPING_MASK)
+
+        # Remove the last point if it triggered one of these two flags.
+        flags = np.logical_or(curvature_flags, mask_flags)
+
+        # IMPORTANT: If the last point is not included, the oracle will
+        # wildly overestimate the quality of the tractogram, since the
+        # last point (and segment) is what made tracking stop.
+        # **Therefore** the last point should be included as much as
+        # possible.
         stopped_streamlines = [
-            s[:-1] if f else s for f, s in zip(flags, stopped_streamlines)]
+            s[:-1] if f else s for s, f in zip(stopped_streamlines, flags)]

         stopped_seeds = self.initial_points

@@ -276,7 +289,6 @@
         tractogram = Tractogram(
             streamlines=stopped_streamlines,
             data_per_streamline={"seeds": stopped_seeds,
-                                 },
-            affine_to_rasmm=self.affine_vox2rasmm)
+                                 "flags": self.flags})

         return tractogram
diff --git a/TrackToLearn/environments/utils.py b/TrackToLearn/environments/utils.py
index f5c5fe4..89e90a0 100644
--- a/TrackToLearn/environments/utils.py
+++ b/TrackToLearn/environments/utils.py
@@ -1,59 +1,11 @@
 import numpy as np
-import torch
+from dipy.tracking import metrics as tm
+from multiprocessing import Pool
+from scipy.ndimage import map_coordinates

-from TrackToLearn.environments.interpolation import (
-    interpolate_volume_at_coordinates,
-    torch_trilinear_interpolation)
 from TrackToLearn.utils.utils import normalize_vectors


-def get_sh(
-    segments,
-    data_volume,
-    add_neighborhood_vox,
-    neighborhood_directions,
-    history,
-    device
-) -> np.ndarray:
-    """ Get the sh coefficients at the end of streamlines
-    """
-
-    N, H, P = segments.shape
-    flat_coords = np.reshape(segments, (N * H, P))
-
-    coords = torch.as_tensor(flat_coords).to(device)
-    n_coords = coords.shape[0]
-
-    if 
add_neighborhood_vox: - # Extend the coords array with the neighborhood coordinates - coords = torch.repeat_interleave( - coords, - neighborhood_directions.size()[0], - axis=0) - - coords[:, :3] += \ - neighborhood_directions.repeat(n_coords, 1) - - # Evaluate signal as if all coords were independent - partial_signal = torch_trilinear_interpolation( - data_volume, coords) - - # Reshape signal into (n_coords, new_feature_size) - new_feature_size = partial_signal.size()[-1] * \ - neighborhood_directions.size()[0] - else: - partial_signal = torch_trilinear_interpolation( - data_volume, - coords).type(torch.float32) - new_feature_size = partial_signal.size()[-1] - - signal = torch.reshape(partial_signal, (N, history * new_feature_size)) - - assert len(signal.size()) == 2, signal.size() - - return signal - - def get_neighborhood_directions( radius: float ) -> np.ndarray: @@ -136,8 +88,9 @@ def is_inside_mask( or not. """ # Get last streamlines coordinates - return interpolate_volume_at_coordinates( - mask, streamlines[:, -1, :], mode='constant', order=0) >= threshold + return map_coordinates( + mask, streamlines[:, -1, :].T - 0.5, + mode='constant', order=0) >= threshold def is_outside_mask( @@ -166,8 +119,9 @@ def is_outside_mask( """ # Get last streamlines coordinates - return interpolate_volume_at_coordinates( - mask, streamlines[:, -1, :], mode='constant', order=0) < threshold + return map_coordinates( + mask, streamlines[:, -1, :].T - 0.5, mode='constant', order=0 + ) < threshold def is_too_long(streamlines: np.ndarray, max_nb_steps: int): @@ -208,15 +162,14 @@ def is_too_curvy(streamlines: np.ndarray, max_theta: float): max_theta_rad = np.deg2rad(max_theta) # Internally use radian if streamlines.shape[1] < 3: # Not enough segments to compute curvature - return np.zeros(streamlines.shape[0], dtype=np.uint8) + return np.zeros(streamlines.shape[0], dtype=bool) # Compute vectors for the last and before last streamline segments u = normalize_vectors(streamlines[:, -1] 
- streamlines[:, -2])
     v = normalize_vectors(streamlines[:, -2] - streamlines[:, -3])

     # Compute angles
-    angles = np.arccos(np.sum(u * v, axis=1).clip(-1., 1.))
-
+    angles = np.arccos(np.einsum('ij,ij->i', u, v).clip(-1., 1.))
     return angles > max_theta_rad
@@ -286,6 +239,44 @@ def is_looping(streamlines: np.ndarray, loop_threshold: float):
         Array telling whether a streamline is too curvy or not
     """

-    angles = winding(streamlines)
+    clean_ids = remove_loops_and_sharp_turns(
+        streamlines, loop_threshold, num_processes=8)
+    mask = np.full(streamlines.shape[0], True)
+    mask[clean_ids] = False
+    return mask
+
+
+def remove_loops_and_sharp_turns(streamlines,
+                                 max_angle,
+                                 num_processes=1):
+    """
+    Remove loops and sharp turns from a list of streamlines.
+
+    Parameters
+    ----------
+    streamlines: list of ndarray
+        The list of streamlines from which to remove loops and sharp turns.
+    max_angle: float
+        Maximal winding angle a streamline can have before
+        being classified as a loop.
+    num_processes: int
+        Number of processes to use to compute the winding angles.
+
+    Returns
+    -------
+    list: the ids of clean streamlines
+        Only the ids are returned so proper filtering can be done afterwards
+    """
+
+    pool = Pool(num_processes)
+    windings = pool.map(tm.winding, streamlines)
+    pool.close()
+    ids = list(np.where(np.array(windings) < max_angle)[0])

-    return angles > loop_threshold
+    return ids
diff --git a/TrackToLearn/experiment/experiment.py b/TrackToLearn/experiment/experiment.py
index c3aae1a..6d8204d 100644
--- a/TrackToLearn/experiment/experiment.py
+++ b/TrackToLearn/experiment/experiment.py
@@ -1,7 +1,24 @@
+import nibabel as nib
+import numpy as np
+
+from argparse import ArgumentParser
+from os.path import join as pjoin
 from typing import Tuple

-from TrackToLearn.algorithms.rl import RLAlgorithm
+from dipy.io.stateful_tractogram import Origin, Space, StatefulTractogram
+from dipy.io.streamline import save_tractogram
+
+from nibabel.streamlines import Tractogram
+
 from TrackToLearn.environments.env import BaseEnv
+from TrackToLearn.environments.tracking_env import (
+    TrackingEnvironment)
+from TrackToLearn.environments.noisy_tracking_env import (
+    NoisyTrackingEnvironment)
+from TrackToLearn.environments.stopping_criteria import (
+    is_flag_set, StoppingFlags)
+from TrackToLearn.utils.utils import LossHistory
+from TrackToLearn.utils.comet_monitor import CometMonitor


 class Experiment(object):
@@ -14,155 +31,395 @@ def run(self):
         """
         pass

-    def get_envs(self) -> Tuple[BaseEnv, BaseEnv]:
+    def setup_monitors(self):
+        # RL monitors
+        self.train_reward_monitor = LossHistory(
+            "Train Reward", "train_reward", self.experiment_path)
+        self.train_length_monitor = LossHistory(
+            "Train Length", "length_reward", self.experiment_path)
+        self.reward_monitor = LossHistory(
+            "Reward - Alignment", "reward", self.experiment_path)
+        self.actor_loss_monitor = LossHistory(
+            "Loss - Actor Policy Loss", "actor_loss", self.experiment_path)
+        
self.critic_loss_monitor = LossHistory( + "Loss - Critic MSE Loss", "critic_loss", self.experiment_path) + self.len_monitor = LossHistory( + "Length", "length", self.experiment_path) + + # Tractometer monitors + # TODO: Infer the number of bundles from the GT + if self.tractometer_validator: + self.vc_monitor = LossHistory( + "Valid Connections", "vc", self.experiment_path) + self.ic_monitor = LossHistory( + "Invalid Connections", "ic", self.experiment_path) + self.nc_monitor = LossHistory( + "Non-Connections", "nc", self.experiment_path) + self.vb_monitor = LossHistory( + "Valid Bundles", "VB", self.experiment_path) + self.ib_monitor = LossHistory( + "Invalid Bundles", "IB", self.experiment_path) + self.ol_monitor = LossHistory( + "Overlap monitor", "ol", self.experiment_path) + + else: + self.vc_monitor = None + self.ic_monitor = None + self.nc_monitor = None + self.vb_monitor = None + self.ib_monitor = None + self.ol_monitor = None + + # Initialize monitors here as the first pass won't include losses + self.actor_loss_monitor.update(0) + self.actor_loss_monitor.end_epoch(0) + self.critic_loss_monitor.update(0) + self.critic_loss_monitor.end_epoch(0) + + def setup_comet(self, prefix=''): + """ Setup comet environment + """ + # The comet object that will handle monitors + self.comet_monitor = CometMonitor( + self.comet_experiment, self.name, self.experiment_path, + prefix) + print(self.hyperparameters) + self.comet_monitor.log_parameters(self.hyperparameters) + + def _get_env_dict_and_dto( + self, noisy + ) -> Tuple[dict, dict]: + """ Get the environment class and the environment DTO. + + Parameters + ---------- + noisy: bool + Whether to use the noisy environment or not. + + Returns + ------- + class_dict: dict + Dictionary of environment classes. + env_dto: dict + Dictionary of environment parameters. 
+ """ + + env_dto = { + 'dataset_file': self.dataset_file, + 'fa_map': self.fa_map, + 'n_dirs': self.n_dirs, + 'step_size': self.step_size, + 'theta': self.theta, + 'min_length': self.min_length, + 'max_length': self.max_length, + 'noise': self.noise, + 'npv': self.npv, + 'rng': self.rng, + 'alignment_weighting': self.alignment_weighting, + 'oracle_bonus': self.oracle_bonus, + 'oracle_validator': self.oracle_validator, + 'oracle_stopping_criterion': self.oracle_stopping_criterion, + 'oracle_checkpoint': self.oracle_checkpoint, + 'scoring_data': self.scoring_data, + 'tractometer_validator': self.tractometer_validator, + 'binary_stopping_threshold': self.binary_stopping_threshold, + 'compute_reward': self.compute_reward, + 'device': self.device + } + + if noisy: + class_dict = { + 'tracking_env': NoisyTrackingEnvironment + } + else: + class_dict = { + 'tracking_env': TrackingEnvironment + } + return class_dict, env_dto + + def get_env(self) -> Tuple[BaseEnv, BaseEnv]: """ Build environments Returns: -------- - back_env: BaseEnv - Backward environment that will be pre-initialized - with half-streamlines env: BaseEnv "Forward" environment only initialized with seeds """ - pass - def get_valid_envs(self) -> Tuple[BaseEnv, BaseEnv]: + class_dict, env_dto = self._get_env_dict_and_dto(False) + + # Someone with better knowledge of design patterns could probably + # clean this + env = class_dict['tracking_env'].from_dataset( + env_dto, 'training') + + return env + + def get_valid_env(self) -> Tuple[BaseEnv, BaseEnv]: """ Build environments Returns: -------- - back_env: BaseEnv - Backward environment that will be pre-initialized - with half-streamlines env: BaseEnv "Forward" environment only initialized with seeds """ - # Not sure if parameters should come from `self` of actual - # function parameters. 
It feels a bit dirty to have everything - # in `self`, but then there's a crapload of parameters + class_dict, env_dto = self._get_env_dict_and_dto(True) - pass + # Someone with better knowledge of design patterns could probably + # clean this + env = class_dict['tracking_env'].from_dataset( + env_dto, 'training') + + return env + + def get_tracking_env(self): + """ Generate environments according to tracking parameters. + + Returns: + -------- + env: BaseEnv + "Forward" environment only initialized with seeds + """ + + class_dict, env_dto = self._get_env_dict_and_dto(True) + + # Update DTO to include indiv. files instead of hdf5 + env_dto.update({ + 'in_odf': self.in_odf, + 'wm_file': self.wm_file, + 'in_seed': self.in_seed, + 'in_mask': self.in_mask, + 'sh_basis': self.sh_basis, + 'input_wm': self.input_wm, + 'reference': self.reference_file, + # file instead of being passed directly. + }) + + # Someone with better knowledge of design patterns could probably + # clean this + env = class_dict['tracking_env'].from_files(env_dto) + + return env + + def stopping_stats(self, tractogram): + """ Compute stopping statistics for a tractogram. + + Parameters + ---------- + tractogram: Tractogram + Tractogram to compute statistics on. - def valid( + Returns + ------- + stats: dict + Dictionary of stopping statistics. + """ + # Compute stopping statistics + if tractogram is None: + return {} + # Stopping statistics are stored in the data_per_streamline + # dictionary + flags = tractogram.data_per_streamline['flags'] + stats = {} + # Compute the percentage of streamlines that have a given flag set + # for each flag + for f in StoppingFlags: + if len(flags) > 0: + set_pct = np.mean(is_flag_set(flags, f)) + else: + set_pct = 0 + stats.update({f.name: set_pct}) + return stats + + def score_tractogram(self, filename, env): + """ Score a tractogram using the tractometer or the oracle. + + Parameters + ---------- + filename: str + Filename of the tractogram to score. 
+ + """ + # Dict of scores + all_scores = {} + + # Compute scores for the tractogram according + # to each validator. + for scorer in self.validators: + scores = scorer(filename, env) + all_scores.update(scores) + + return all_scores + + def save_rasmm_tractogram( self, - alg: RLAlgorithm, - env: BaseEnv, - back_env: BaseEnv, - save_model: bool = True, - ) -> Tuple[float]: + tractogram, + subject_id: str, + affine: np.ndarray, + reference: nib.Nifti1Image + ) -> str: """ - Run the tracking algorithm without noise to see how it performs + Saves a non-stateful tractogram from the training/validation + trackers. Parameters ---------- - alg: RLAlgorithm - Tracking algorithm that contains the being-trained policy - env: BaseEnv - Forward environment - back_env: BaseEnv - Backward environment - save_model: bool - Save the model or not + tractogram: Tractogram + Tractogram generated at validation time. Returns: -------- - tractogram: Tractogram - validation tractogram - reward: float - Reward obtained during validation + filename: str + Filename of the saved tractogram. """ - pass + # Save tractogram so it can be looked at, used by the tractometer + # and more + filename = pjoin( + self.experiment_path, "tractogram_{}_{}_{}.trk".format( + self.experiment, self.name, subject_id)) + + # Prune empty streamlines, keep only streamlines that have more + # than the seed. 
+ indices = [i for (i, s) in enumerate(tractogram.streamlines) + if len(s) > 1] + + tractogram.apply_affine(affine) + + streamlines = tractogram.streamlines[indices] + data_per_streamline = tractogram.data_per_streamline[indices] + data_per_point = tractogram.data_per_point[indices] + + sft = StatefulTractogram( + streamlines, + reference, + Space.RASMM, + origin=Origin.TRACKVIS, + data_per_streamline=data_per_streamline, + data_per_point=data_per_point) + + sft.to_rasmm() + + save_tractogram(sft, filename, bbox_valid_check=False) - def display( + return filename + + def log( self, - env: BaseEnv, + valid_tractogram: Tractogram, valid_reward: float = 0, i_episode: int = 0, - run_tractometer: bool = False, + scores: dict = None, ): - pass + """ Print training infos and log metrics to Comet, if + activated. + + Parameters + ---------- + valid_tractogram: Tractogram + Tractogram generated at validation time. + valid_reward: float + Sum of rewards obtained during validation. + i_episode: int + ith training episode. + scores: dict + Scores as computed by the tractometer. 
+        """
+        if valid_tractogram:
+            lens = [len(s) for s in valid_tractogram.streamlines]
+        else:
+            lens = [0]
+        avg_valid_reward = valid_reward / len(lens)
+        avg_length = np.mean(lens)  # Average streamline length, in points

+        print('---------------------------------------------------')
+        print(self.experiment_path)
+        print('Episode {} \t avg length: {} \t total reward: {}'.format(
+            i_episode,
+            avg_length,
+            avg_valid_reward))
+        print('---------------------------------------------------')

+        if scores is not None:
+            self.vc_monitor.update(scores['VC'])
+            self.ic_monitor.update(scores['IC'])
+            self.nc_monitor.update(scores['NC'])
+            self.vb_monitor.update(scores['VB'])
+            self.ib_monitor.update(scores['IB'])
+            self.ol_monitor.update(scores['mean_OL'])

+            self.vc_monitor.end_epoch(i_episode)
+            self.ic_monitor.end_epoch(i_episode)
+            self.nc_monitor.end_epoch(i_episode)
+            self.vb_monitor.end_epoch(i_episode)
+            self.ib_monitor.end_epoch(i_episode)
+            self.ol_monitor.end_epoch(i_episode)

+        # Update monitors
+        self.len_monitor.update(avg_length)
+        self.len_monitor.end_epoch(i_episode)
+        self.reward_monitor.update(avg_valid_reward)
+        self.reward_monitor.end_epoch(i_episode)

-def add_experiment_args(parser):
+        if self.use_comet and self.comet_experiment is not None:
+            # Update comet
+            self.comet_monitor.update(
+                self.reward_monitor,
+                self.len_monitor,
+                self.vc_monitor,
+                self.ic_monitor,
+                self.nc_monitor,
+                self.vb_monitor,
+                self.ib_monitor,
+                self.ol_monitor,
+                i_episode=i_episode)
+
+
+def add_experiment_args(parser: ArgumentParser):
     parser.add_argument('path', type=str,
                         help='Path to experiment')
     parser.add_argument('experiment',
                         help='Name of experiment.')
     parser.add_argument('id', type=str,
                         help='ID of experiment.')
-    parser.add_argument('--use_gpu', action='store_true',
-                        help='Use gpu or not')
+    parser.add_argument('--workspace', type=str, default='TractOracle',
+                        help='Comet.ml workspace')

     parser.add_argument('--rng_seed', default=1337, type=int,
                         help='Seed to fix general 
randomness') parser.add_argument('--use_comet', action='store_true', help='Use comet to display training or not') - parser.add_argument('--run_tractometer', action='store_true', - help='Run tractometer during validation to monitor' + - ' how the training is doing w.r.t. ground truth') - parser.add_argument('--render', action='store_true', - help='Save screenshots of tracking as it goes along.' + - 'Preferably disabled on non-graphical environments') -def add_data_args(parser): +def add_data_args(parser: ArgumentParser): parser.add_argument('dataset_file', help='Path to preprocessed dataset file (.hdf5)') - parser.add_argument('subject_id', - help='Subject id to fetch from the dataset file') - parser.add_argument('valid_dataset_file', - help='Path to preprocessed dataset file (.hdf5)') - parser.add_argument('valid_subject_id', - help='Subject id to fetch from the dataset file') - parser.add_argument('reference_file', - help='Path to reference anatomy (.nii.gz).') - parser.add_argument('scoring_data', - help='Path to Tractometer files.') -def add_environment_args(parser): - parser.add_argument('--n_signal', default=1, type=int, - help='Signal at the last n positions') +def add_environment_args(parser: ArgumentParser): parser.add_argument('--n_dirs', default=4, type=int, help='Last n steps taken') - parser.add_argument('--add_neighborhood', default=0.75, type=float, - help='Add neighborhood to model input') - parser.add_argument('--cmc', action='store_true', - help='If set, use Continuous Mask Criteria to stop' - 'tracking.') - parser.add_argument('--asymmetric', action='store_true', - help='If set, presume asymmetric fODFs when ' - 'computing reward.') + parser.add_argument( + '--binary_stopping_threshold', + type=float, default=0.1, + help='Lower limit for interpolation of tracking mask value.\n' + 'Tracking will stop below this threshold.') -def add_reward_args(parser): +def add_reward_args(parser: ArgumentParser): parser.add_argument('--alignment_weighting', 
default=1, type=float, help='Alignment weighting for reward') - parser.add_argument('--straightness_weighting', default=0, type=float, - help='Straightness weighting for reward') - parser.add_argument('--length_weighting', default=0, type=float, - help='Length weighting for reward') - parser.add_argument('--target_bonus_factor', default=0, type=float, - help='Bonus for streamlines reaching the target mask') - parser.add_argument('--exclude_penalty_factor', default=0, type=float, - help='Penalty for streamlines reaching the exclusion ' - 'mask') - parser.add_argument('--angle_penalty_factor', default=0, type=float, - help='Penalty for looping or too-curvy streamlines') - - -def add_model_args(parser): + + +def add_model_args(parser: ArgumentParser): parser.add_argument('--n_actor', default=4096, type=int, help='Number of learners') - parser.add_argument('--hidden_dims', default='1024-1024', type=str, + parser.add_argument('--hidden_dims', default='1024-1024-1024', type=str, help='Hidden layers of the model') - parser.add_argument('--load_policy', default=None, type=str, - help='Path to pretrained model') -def add_tracking_args(parser): +def add_tracking_args(parser: ArgumentParser): parser.add_argument('--npv', default=2, type=int, help='Number of random seeds per seeding mask voxel.') parser.add_argument('--theta', default=30, type=int, @@ -177,12 +434,36 @@ def add_tracking_args(parser): '[%(default)s]') parser.add_argument('--step_size', default=0.75, type=float, help='Step size for tracking') - parser.add_argument('--prob', default=0.0, type=float, metavar='sigma', - help='Add noise ~ N (0, `prob`) to the agent\'s\n' + parser.add_argument('--noise', default=0.0, type=float, metavar='sigma', + help='Add noise ~ N (0, `noise`) to the agent\'s\n' 'output to make tracking more probabilistic.\n' 'Should be between 0.0 and 0.1.' 
'[%(default)s]') - parser.add_argument('--interface_seeding', action='store_true', - help='If set, don\'t track "backwards"') - parser.add_argument('--no_retrack', action='store_true', - help='If set, don\'t retrack backwards') + + +def add_tractometer_args(parser: ArgumentParser): + tractom = parser.add_argument_group('Tractometer') + tractom.add_argument('--scoring_data', type=str, default=None, + help='Location of the tractometer scoring data.') + tractom.add_argument('--tractometer_reference', type=str, default=None, + help='Reference anatomy for the Tractometer.') + tractom.add_argument('--tractometer_validator', action='store_true', + help='Run tractometer during validation to monitor' + + ' how the training is doing w.r.t. ground truth.') + tractom.add_argument('--tractometer_dilate', default=1, type=int, + help='Dilation factor for the ROIs of the ' + 'Tractometer.') + + +def add_oracle_args(parser: ArgumentParser): + oracle = parser.add_argument_group('Oracle') + oracle.add_argument('--oracle_checkpoint', type=str, + default='models/tractoracle.ckpt', + help='Checkpoint file (.ckpt) of the Oracle') + oracle.add_argument('--oracle_validator', action='store_true', + help='Run a TractOracle model during validation to ' + 'monitor how the training is doing.') + oracle.add_argument('--oracle_stopping_criterion', action='store_true', + help='Stop streamlines according to the Oracle.') + oracle.add_argument('--oracle_bonus', default=10, type=float, + help='Sparse oracle weighting for reward.') diff --git a/TrackToLearn/experiment/oracle_validator.py b/TrackToLearn/experiment/oracle_validator.py new file mode 100644 index 0000000..41d3e94 --- /dev/null +++ b/TrackToLearn/experiment/oracle_validator.py @@ -0,0 +1,56 @@ +import numpy as np +from dipy.io.streamline import load_tractogram +from scilpy.tractanalysis.streamlines_metrics import compute_tract_counts_map + +from TrackToLearn.experiment.validators import Validator +from TrackToLearn.oracles.oracle import 
OracleSingleton + + +class OracleValidator(Validator): + + def __init__(self, checkpoint, device): + + self.name = 'Oracle' + + if checkpoint: + self.checkpoint = checkpoint + self.model = OracleSingleton(checkpoint, device) + else: + self.checkpoint = None + + self.device = device + + def __call__(self, filename, env): + + # Bbox check=False, TractoInferno volume may be cropped really tight + sft = load_tractogram(filename, env.reference, + bbox_valid_check=False, trk_header_check=True) + _, dimensions, _, _ = sft.space_attributes + wm_mask = env.tracking_mask.data + count = np.count_nonzero(wm_mask) + + sft.to_vox() + sft.to_corner() + + streamlines = sft.streamlines + + if len(streamlines) == 0: + return {} + + batch_size = 4096 + N = len(streamlines) + predictions = np.zeros((N)) + for i in range(0, N, batch_size): + + j = i + batch_size + scores = self.model.predict(streamlines[i:j]) + predictions[i:j] = scores + accuracy = (predictions > 0.5).astype(float) + + streamline_count = compute_tract_counts_map( + sft.streamlines[predictions > 0.5], dimensions) + + streamline_count[streamline_count > 0] = 1 + coverage = np.count_nonzero(streamline_count) + return {'Oracle': float(np.mean(accuracy)), + 'Coverage': float(coverage / count)} diff --git a/TrackToLearn/experiment/tracker.py b/TrackToLearn/experiment/tracker.py deleted file mode 100644 index d087b02..0000000 --- a/TrackToLearn/experiment/tracker.py +++ /dev/null @@ -1,250 +0,0 @@ -import numpy as np - -from collections import defaultdict -from tqdm import tqdm -from typing import Tuple - -from dipy.tracking.streamlinespeed import compress_streamlines -from nibabel.streamlines import Tractogram -from nibabel.streamlines.tractogram import LazyTractogram -from nibabel.streamlines.tractogram import TractogramItem - -from TrackToLearn.algorithms.rl import RLAlgorithm -from TrackToLearn.algorithms.shared.utils import add_to_means -from TrackToLearn.environments.env import BaseEnv - - -class Tracker(object): - """ 
Tracking class similar to scilpy's or dwi_ml's. This class is - responsible for generating streamlines, as well as giving back training - or RL-associated metrics if applicable. - """ - - def __init__( - self, - alg: RLAlgorithm, - env: BaseEnv, - back_env: BaseEnv, - n_actor: int, - interface_seeding: bool, - no_retrack: bool, - compress: float = 0.0, - save_seeds: bool = False - ): - """ - - Parameters - ---------- - alg: RLAlgorithm - Tracking agent. - env: BaseEnv - Forward environment to track. - back_env: BaseEnv - Backward environment to track. - compress: float - Compression factor when saving streamlines. - - """ - - self.alg = alg - self.env = env - self.back_env = back_env - self.n_actor = n_actor - self.interface_seeding = interface_seeding - self.no_retrack = no_retrack - self.compress = compress - self.save_seeds = save_seeds - - def track( - self, - ): - """ Actual tracking function. Use this if you just want streamlines. - - Track with a generator to save streamlines to file - as they are tracked. Used at tracking (test) time. No - reward should be computed. - - Returns: - -------- - tractogram: Tractogram - Tractogram in a generator format. 
- - """ - - # Presume iso vox - vox_size = abs(self.env.affine_vox2rasmm[0][0]) - - compress_th_vox = self.compress / vox_size - - batch_size = self.n_actor - - # Shuffle seeds so that massive tractograms wont load "sequentially" - # when partially displayed - np.random.shuffle(self.env.seeds) - - def tracking_generator(): - # Switch policy to eval mode so no gradients are computed - self.alg.policy.eval() - # Track for every seed in the environment - for i, start in enumerate( - tqdm(range(0, len(self.env.seeds), batch_size)) - ): - - # Last batch might not be "full" - end = min(start + batch_size, len(self.env.seeds)) - - state = self.env.reset(start, end) - - # Track forward - self.alg.validation_episode( - state, self.env) - batch_tractogram = self.env.get_streamlines() - - if not self.interface_seeding: - state = self.back_env.reset(batch_tractogram.streamlines) - - # Track backwards - self.alg.validation_episode( - state, self.back_env) - batch_tractogram = self.back_env.get_streamlines() - - for item in batch_tractogram: - - streamline_length = len(item) - - streamline = item.streamline - streamline += 0.5 - streamline *= vox_size - - seed_dict = {} - if self.save_seeds: - seed = item.data_for_streamline['seeds'] - seed_dict = {'seeds': seed-0.5} - - if self.compress: - streamline = compress_streamlines( - streamline, compress_th_vox) - - if (self.env.min_nb_steps < streamline_length < - self.env.max_nb_steps): - yield TractogramItem( - streamline, seed_dict, {}) - - tractogram = LazyTractogram.from_data_func(tracking_generator) - - return tractogram - - def track_and_train( - self, - ) -> Tuple[Tractogram, float, float, float]: - """ - Call the main training loop forward then backward. - This can be considered an "epoch". Note that N=self.n_actor - streamlines will be tracked instead of one streamline per seed. 
- - Returns - ------- - streamlines: Tractogram - Tractogram containing the tracked streamline - losses: dict - Dictionary containing various losses and metrics - w.r.t the agent's training. - running_reward: float - Cummulative training steps reward - """ - - self.alg.policy.train() - - mean_losses = defaultdict(list) - - # Fetch n=n_actor seeds - state = self.env.nreset(self.n_actor) - - # Track and train forward - reward, losses, length = \ - self.alg._episode(state, self.env) - # Get the streamlines generated from forward training - train_tractogram = self.env.get_streamlines() - - mean_losses = add_to_means(mean_losses, losses) - - if not self.interface_seeding: - # Flip streamlines to initialize backwards tracking - state = self.back_env.reset(train_tractogram.streamlines) - - # Track and train backwards - back_reward, losses, length = \ - self.alg._episode(state, self.back_env) - # Get the streamlines generated from backward training - train_tractogram = self.back_env.get_streamlines() - - mean_losses = add_to_means(mean_losses, losses) - - # Retracking also rewards the agents - if self.no_retrack: - reward += back_reward - else: - reward = back_reward - - return ( - train_tractogram, - mean_losses, - reward) - - def track_and_validate( - self, - ) -> Tuple[Tractogram, float, dict]: - """ - Run the tracking algorithm without training to see how it performs, but - still compute the reward. - - Returns: - -------- - tractogram: Tractogram - Validation tractogram. - reward: float - Reward obtained during validation. 
- """ - # Switch policy to eval mode so no gradients are computed - self.alg.policy.eval() - - # Initialize tractogram - tractogram = None - - # Reward gotten during validation - cummulative_reward = 0 - - def _generate_streamlines_and_rewards(): - - # Track for every seed in the environment - for i, start in enumerate( - tqdm(range(0, len(self.env.seeds), self.n_actor))): - - # Last batch might not be "full" - end = min(start + self.n_actor, len(self.env.seeds)) - - state = self.env.reset(start, end) - - # Track forward - reward = self.alg.validation_episode(state, self.env) - batch_tractogram = self.env.get_streamlines() - - if not self.interface_seeding: - # Initialize backwards tracking - state = self.back_env.reset(batch_tractogram.streamlines) - - # Track backwards - reward = self.alg.validation_episode( - state, self.back_env) - batch_tractogram = self.back_env.get_streamlines() - - yield batch_tractogram, reward - - for t, r in _generate_streamlines_and_rewards(): - if tractogram is None: - tractogram = t - else: - tractogram += t - cummulative_reward += r - - return tractogram, cummulative_reward diff --git a/TrackToLearn/experiment/tractometer_validator.py b/TrackToLearn/experiment/tractometer_validator.py new file mode 100644 index 0000000..47a1e08 --- /dev/null +++ b/TrackToLearn/experiment/tractometer_validator.py @@ -0,0 +1,401 @@ +import itertools +import json +import logging +import os +import tempfile +from collections import namedtuple + +import nibabel as nib +import numpy as np +from dipy.io.streamline import load_tractogram +from scilpy.io.image import get_data_as_mask +from scilpy.segment.tractogram_from_roi import segment_tractogram_from_roi +from scilpy.tractanalysis.scoring import compute_tractometry +from scilpy.tractanalysis.streamlines_metrics import compute_tract_counts_map +from scilpy.utils.filenames import split_name_with_nii + +from TrackToLearn.experiment.validators import Validator + +def_len = [0, np.inf] + + +def 
load_and_verify_everything( + reference, + gt_config, + gt_dir, + use_gt_masks_as_all_masks +): + """ + - Reads the config file + - Loads the masks / sft + - If endpoints were given instead of head + tail, separate into two + sub-rois. + - Verifies compatibility + """ + + # Read the config file + (bundle_names, gt_masks_files, all_masks_files, any_masks_files, + roi_options, lengths, angles, orientation_lengths, + abs_orientation_lengths) = read_config_file( + gt_config, gt_dir, use_gt_masks_as_all_masks) + + # Find every mandatory mask to be loaded + list_masks_files_r = list(itertools.chain( + *[list(roi_option.values()) for roi_option in roi_options])) + list_masks_files_o = gt_masks_files + all_masks_files + any_masks_files + # (This removes duplicates:) + list_masks_files_r = list(dict.fromkeys(list_masks_files_r)) + list_masks_files_o = list(dict.fromkeys(list_masks_files_o)) + + logging.info("Loading and/or computing ground-truth masks, limits " + "masks and any_masks.") + gt_masks = compute_masks_from_bundles(gt_masks_files, reference) + inv_all_masks = compute_masks_from_bundles(all_masks_files, reference, + inverse_mask=True) + any_masks = compute_masks_from_bundles(any_masks_files, reference) + + logging.info("Extracting ground-truth head and tail masks.") + gt_tails, gt_heads = compute_endpoint_masks(roi_options) + + # Update the list of every ROI, remove duplicates + list_rois = gt_tails + gt_heads + list_rois = list(dict.fromkeys(list_rois)) # Removes duplicates + + return (gt_tails, gt_heads, bundle_names, list_rois, + lengths, angles, orientation_lengths, abs_orientation_lengths, + inv_all_masks, gt_masks, any_masks) + + +def read_config_file( + gt_config, gt_dir='', use_gt_masks_as_all_masks=False +): + """ + Reads the gt_config file and returns: + + Returns + ------- + bundles: List + The names of each bundle. + gt_masks: List + The gt_mask filenames per bundle (None if not set) (used for + tractometry statistics). 
+ all_masks: List + The all_masks filenames per bundle (None if not set). + any_masks: List + The any_masks filenames per bundle (None if not set). + roi_options: List + The roi_option dict per bundle. Keys are 'gt_head', 'gt_tail' if + they are set, else 'gt_endpoints'. + angles: List + The maximum angles per bundle (None if not set). + lengths: List + The [min max] lengths per bundle (None if not set). + orientation_lengths: List + The [[min_x, max_x], [min_y, max_y], [min_z, max_z]] per bundle. + (None if they are all not set). + abs_orientation_lengths: List + Same as orientation_lengths, for the length_*_abs options. + (None if they are all not set). + """ + angles = [] + lengths = [] + orientation_lengths = [] + abs_orientation_lengths = [] + gt_masks = [] + all_masks = [] + any_masks = [] + roi_options = [] + show_warning_gt = False + + with open(gt_config, "r") as json_file: + config = json.load(json_file) + + bundles = list(config.keys()) + for bundle in bundles: + bundle_config = config[bundle] + + if 'gt_mask' not in bundle_config: + show_warning_gt = True + if 'endpoints' not in bundle_config and \ + 'head' not in bundle_config: + raise ValueError( + "Bundle configuration for bundle {} is missing 'endpoints' " + "or 'head'/'tail'".format(bundle)) + + angle = length = None + length_x = length_y = length_z = None + length_x_abs = length_y_abs = length_z_abs = None + gt_mask = all_mask = any_mask = roi_option = None + + for key in bundle_config.keys(): + if key == 'angle': + angle = bundle_config['angle'] + elif key == 'length': + length = bundle_config['length'] + elif key == 'length_x': + length_x = bundle_config['length_x'] + elif key == 'length_y': + length_y = bundle_config['length_y'] + elif key == 'length_z': + length_z = bundle_config['length_z'] + elif key == 'length_x_abs': + length_x_abs = bundle_config['length_x_abs'] + elif key == 'length_y_abs': + length_y_abs = bundle_config['length_y_abs'] + elif key == 'length_z_abs': + length_z_abs = bundle_config['length_z_abs'] + elif key == 'gt_mask': + if gt_dir: + gt_mask = os.path.join(gt_dir, + bundle_config['gt_mask']) + else: 
+ gt_mask = bundle_config['gt_mask'] + + if use_gt_masks_as_all_masks: + all_mask = gt_mask + elif key == 'all_mask': + if use_gt_masks_as_all_masks: + raise ValueError( + "With the option --use_gt_masks_as_all_masks, " + "you should not add any all_mask in the config " + "file.") + if gt_dir: + all_mask = os.path.join(gt_dir, + bundle_config['all_mask']) + else: + all_mask = bundle_config['all_mask'] + elif key == 'endpoints': + if 'head' in bundle_config or 'tail' in bundle_config: + raise ValueError( + "Bundle {} has confusing keywords in the config " + "file. Please choose either endpoints OR " + "head/tail.".format(bundle)) + if gt_dir: + endpoints = os.path.join(gt_dir, + bundle_config['endpoints']) + else: + endpoints = bundle_config['endpoints'] + roi_option = {'gt_endpoints': endpoints} + elif key == 'head': + if 'tail' not in bundle_config: + raise ValueError( + "You have provided the head for bundle {}, but " + "not the tail".format(bundle)) + if gt_dir: + head = os.path.join(gt_dir, bundle_config['head']) + tail = os.path.join(gt_dir, bundle_config['tail']) + else: + head = bundle_config['head'] + tail = bundle_config['tail'] + roi_option = {'gt_head': head, 'gt_tail': tail} + elif key == 'tail': + pass # dealt with at head + elif key == 'any_mask': + if gt_dir: + any_mask = os.path.join( + gt_dir, bundle_config['any_mask']) + else: + any_mask = bundle_config['any_mask'] + else: + raise ValueError("Unrecognized value {} in the config " + "file for bundle {}".format(key, bundle)) + + angles.append(angle) + lengths.append(length) + if length_x is None and length_y is None and length_z is None: + orientation_lengths.append(None) + else: + orientation_lengths.append( + [length_x if length_x is not None else def_len, + length_y if length_y is not None else def_len, + length_z if length_z is not None else def_len]) + + if length_x_abs is None and length_y_abs is None and \ + length_z_abs is None: + abs_orientation_lengths.append(None) + else: + 
abs_orientation_lengths.append( + [length_x_abs if length_x_abs is not None else def_len, + length_y_abs if length_y_abs is not None else def_len, + length_z_abs if length_z_abs is not None else def_len]) + gt_masks.append(gt_mask) + all_masks.append(all_mask) + any_masks.append(any_mask) + roi_options.append(roi_option) + + if show_warning_gt: + logging.info( + "At least one bundle had no gt_mask. Some tractometry metrics " + "won't be computed (OR, OL) for these bundles.") + + return (bundles, gt_masks, all_masks, any_masks, roi_options, + lengths, angles, orientation_lengths, abs_orientation_lengths) + + +def compute_endpoint_masks(roi_options): + """ + Collect the head and tail ROI filenames for each bundle. Q/C of the + output is important. Compatibility between files should already be + verified. + + Parameters + ---------- + roi_options: list + One dict per bundle, with keys 'gt_tail' and 'gt_head' (the names + of the respective files). + + Returns + ------- + tails, heads: lists of filenames with length the number of bundles. + """ + tails = [] + heads = [] + for bundle_options in roi_options: + tail = bundle_options['gt_tail'] + head = bundle_options['gt_head'] + + tails.append(tail) + heads.append(head) + + return tails, heads + + +def compute_masks_from_bundles(gt_files, reference, inverse_mask=False): + """ + Compute ground-truth masks. If the file is already a mask, load it. + If it is a bundle, compute the mask. If the filename is None, None is + appended to the list of masks. Compatibility between files should + already be verified. + + Parameters + ---------- + gt_files: list + List of either StatefulTractograms or niftis. + reference: str
+ Reference anatomy used to load the bundles. + inverse_mask: bool + If True, returns the list of inverted masks instead. + + Returns + ------- + mask: list[numpy.ndarray] + The loaded masks. + """ + save_ref = reference + + gt_bundle_masks = [] + + for gt_bundle in gt_files: + if gt_bundle is not None: + # Support ground truth as streamlines or masks + # Will be converted to binary masks immediately + _, ext = split_name_with_nii(gt_bundle) + if ext in ['.gz', '.nii.gz']: + gt_img = nib.load(gt_bundle) + gt_mask = get_data_as_mask(gt_img) + dimensions = gt_mask.shape + else: + # Use 'same' as reference for .trk files to avoid a flood + # of warnings when loading many of them (the reference may + # only apply to some of these files) + if ext == '.trk': + reference = 'same' + else: + reference = save_ref + gt_sft = load_tractogram( + gt_bundle, reference) + gt_sft.to_vox() + gt_sft.to_corner() + _, dimensions, _, _ = gt_sft.space_attributes + gt_mask = compute_tract_counts_map(gt_sft.streamlines, + dimensions).astype(np.int16) + gt_mask[gt_mask > 0] = 1 + + if inverse_mask: + gt_inv_mask = np.zeros(dimensions, dtype=np.int16) + gt_inv_mask[gt_mask == 0] = 1 + gt_mask = gt_inv_mask + else: + gt_mask = None + + gt_bundle_masks.append(gt_mask) + + return gt_bundle_masks + + +class TractometerValidator(Validator): + + def __init__( + self, + base_dir, + reference, + dilate_endpoints=1, + ): + + self.name = 'Tractometer' + + self.gt_config = os.path.join(base_dir, 'scil_scoring_config.json') + + self.gt_dir = base_dir + self.reference = reference + self.dilation_factor = dilate_endpoints + + # Load + (self.gt_tails, self.gt_heads, self.bundle_names, self.list_rois, + self.bundle_lengths, self.angles, self.orientation_lengths, + self.abs_orientation_lengths, self.inv_all_masks, self.gt_masks, + self.any_masks) = \ + load_and_verify_everything( + reference, + self.gt_config, + 
self.gt_dir, + False) + + def __call__(self, filename, env): + + logging.info("Loading tractogram.") + sft = load_tractogram(filename, env.reference, + bbox_valid_check=True, trk_header_check=True) + if len(sft.streamlines) == 0: + return {} + + _, dimensions, _, _ = sft.space_attributes + + args_mocker = namedtuple('args', [ + 'compute_ic', 'save_wpc_separately', 'unique', 'reference', + 'bbox_check', 'out_dir', 'dilate_endpoints', 'no_empty']) + + temp = tempfile.mkdtemp() + args = args_mocker( + False, False, True, self.reference, False, temp, + self.dilation_factor, False) + + # Segment VB, WPC, IB + (vb_sft_list, wpc_sft_list, ib_sft_list, nc_sft, + ib_names, bundle_stats) = segment_tractogram_from_roi( + sft, self.gt_tails, self.gt_heads, self.bundle_names, + self.bundle_lengths, self.angles, self.orientation_lengths, + self.abs_orientation_lengths, self.inv_all_masks, self.any_masks, + self.list_rois, args) + + # TODO: return bundle_stats + + # Tractometry on bundles + final_results = compute_tractometry( + vb_sft_list, wpc_sft_list, ib_sft_list, nc_sft, + args, self.bundle_names, self.gt_masks, dimensions, ib_names) + + relevant_results = {'VC': final_results['VS_ratio'], + 'IC': final_results.get('IC_ratio', 0), + 'IS': final_results.get('IS_ratio', 0), + 'NC': final_results.get('NC_ratio', 0), + 'mean_OL': final_results.get('mean_OL', 0), + 'VB': final_results['VB'], + 'IB': final_results.get('IB', 0)} + + return relevant_results diff --git a/TrackToLearn/experiment/train.py b/TrackToLearn/experiment/train.py deleted file mode 100644 index 04066ca..0000000 --- a/TrackToLearn/experiment/train.py +++ /dev/null @@ -1,333 +0,0 @@ -import json -import numpy as np -import random -import os -import torch - -from dipy.tracking.metrics import length as slength -from os.path import join as pjoin - -from TrackToLearn.algorithms.rl import RLAlgorithm -from TrackToLearn.environments.env import BaseEnv -from TrackToLearn.experiment.tracker import Tracker -from 
TrackToLearn.experiment.ttl import TrackToLearnExperiment -from TrackToLearn.experiment.experiment import add_reward_args - -assert torch.cuda.is_available(), "Training is only possible on CUDA devices." - - -class TrackToLearnTraining(TrackToLearnExperiment): - """ - Main RL tracking experiment - """ - - def __init__( - self, - train_dto: dict, - comet_experiment, - ): - """ - Parameters - ---------- - train_dto: dict - Dictionnary containing the training parameters. - Put into a dictionnary to prevent parameter errors if modified. - """ - self.experiment_path = train_dto['path'] - self.experiment = train_dto['experiment'] - self.id = train_dto['id'] - - # RL parameters - self.max_ep = train_dto['max_ep'] - self.log_interval = train_dto['log_interval'] - self.prob = train_dto['prob'] - self.lr = train_dto['lr'] - self.gamma = train_dto['gamma'] - - # Tracking parameters - self.add_neighborhood = train_dto['add_neighborhood'] - self.step_size = train_dto['step_size'] - self.dataset_file = train_dto['dataset_file'] - self.subject_id = train_dto['subject_id'] - self.valid_dataset_file = train_dto['valid_dataset_file'] - self.valid_subject_id = train_dto['valid_subject_id'] - self.reference_file = train_dto['reference_file'] - self.scoring_data = train_dto['scoring_data'] - self.rng_seed = train_dto['rng_seed'] - self.npv = train_dto['npv'] - self.theta = train_dto['theta'] - self.min_length = train_dto['min_length'] - self.max_length = train_dto['max_length'] - self.interface_seeding = train_dto['interface_seeding'] - self.cmc = train_dto['cmc'] - self.asymmetric = train_dto['asymmetric'] - - # Reward parameters - self.alignment_weighting = train_dto['alignment_weighting'] - self.straightness_weighting = train_dto['straightness_weighting'] - self.length_weighting = train_dto['length_weighting'] - self.target_bonus_factor = train_dto['target_bonus_factor'] - self.exclude_penalty_factor = train_dto['exclude_penalty_factor'] - self.angle_penalty_factor = 
train_dto['angle_penalty_factor'] - - # Model parameters - self.hidden_dims = train_dto['hidden_dims'] - self.load_policy = train_dto['load_policy'] - self.comet_experiment = comet_experiment - self.render = train_dto['render'] - self.run_tractometer = train_dto['run_tractometer'] - self.last_episode = 0 - self.n_actor = train_dto['n_actor'] - self.n_signal = train_dto['n_signal'] - self.n_dirs = train_dto['n_dirs'] - - self.compute_reward = True # Always compute reward during training - self.fa_map = None - self.no_retrack = train_dto['no_retrack'] - - self.device = torch.device( - "cuda" if torch.cuda.is_available() else "cpu") - - self.use_comet = train_dto['use_comet'] - - # RNG - torch.manual_seed(self.rng_seed) - np.random.seed(self.rng_seed) - self.rng = np.random.RandomState(seed=self.rng_seed) - random.seed(self.rng_seed) - - directory = pjoin(self.experiment_path, 'model') - if not os.path.exists(directory): - os.makedirs(directory) - - self.hyperparameters = { - # RL parameters - 'id': self.id, - 'experiment': self.experiment, - 'max_ep': self.max_ep, - 'log_interval': self.log_interval, - 'lr': self.lr, - 'gamma': self.gamma, - # Data parameters - 'add_neighborhood': self.add_neighborhood, - 'step_size': self.step_size, - 'random_seed': self.rng_seed, - 'dataset_file': self.dataset_file, - 'subject_id': self.subject_id, - 'n_seeds_per_voxel': self.npv, - 'max_angle': self.theta, - 'min_length': self.min_length, - 'max_length': self.max_length, - 'cmc': self.cmc, - 'asymmetric': self.asymmetric, - # Model parameters - 'experiment_path': self.experiment_path, - 'hidden_dims': self.hidden_dims, - 'last_episode': self.last_episode, - 'n_actor': self.n_actor, - 'n_signal': self.n_signal, - 'n_dirs': self.n_dirs, - 'interface_seeding': self.interface_seeding, - 'no_retrack': self.no_retrack, - # Reward parameters - 'alignment_weighting': self.alignment_weighting, - 'straightness_weighting': self.straightness_weighting, - 'length_weighting': 
self.length_weighting, - 'target_bonus_factor': self.target_bonus_factor, - 'exclude_penalty_factor': self.exclude_penalty_factor, - 'angle_penalty_factor': self.angle_penalty_factor, - } - - def save_hyperparameters(self): - - self.hyperparameters.update({'input_size': self.input_size, - 'action_size': self.action_size}) - directory = pjoin(self.experiment_path, "model") - with open( - pjoin(directory, "hyperparameters.json"), - 'w' - ) as json_file: - json_file.write( - json.dumps( - self.hyperparameters, - indent=4, - separators=(',', ': '))) - - def save_model(self, alg): - - directory = pjoin(self.experiment_path, "model") - if not os.path.exists(directory): - os.makedirs(directory) - alg.policy.save(directory, "last_model_state") - - def rl_train( - self, - alg: RLAlgorithm, - env: BaseEnv, - back_env: BaseEnv, - valid_env: BaseEnv, - back_valid_env: BaseEnv, - ): - """ Train the RL algorithm for N epochs. An epoch here corresponds to - running tracking on the training set until all streamlines are done. - This loop should be algorithm-agnostic. Between epochs, report stats - so they can be monitored during training - - Parameters: - ----------- - alg: RLAlgorithm - The RL algorithm, either TD3, PPO or any others - env: BaseEnv - The tracking environment - back_env: BaseEnv - The backward tracking environment. 
Should be more or less - the same as the "forward" tracking environment but initalized - with half-streamlines - """ - def mean_losses(dic): - return {k: np.mean(dic[k]) for k in dic.keys()} - - # Current epoch - i_episode = 0 - # Transition counter - t = 0 - - # Initialize Trackers, which will handle streamline generation and - # trainnig - train_tracker = Tracker( - alg, env, back_env, self.n_actor, self.interface_seeding, - self.no_retrack, compress=0.0) - - valid_tracker = Tracker( - alg, valid_env, back_valid_env, self.n_actor, - self.interface_seeding, self.no_retrack, - compress=0.0) - - # Run tracking before training to see what an untrained network does - valid_tractogram, valid_reward = valid_tracker.track_and_validate() - scores = self.score_tractogram(valid_tractogram) - self.save_model(alg) - - # Display the results of the untrained network - self.log( - valid_tractogram, env, valid_reward, i_episode, - scores=scores) - - # Main training loop - while i_episode < self.max_ep: - - # Last episode/epoch. Was initially for resuming experiments but - # since they take so little time I just restart them from scratch - # Not sure what to do with this - self.last_episode = i_episode - - # Train for an episode - tractogram, losses, reward = train_tracker.track_and_train() - - lens = [slength(s) for s in tractogram.streamlines] - - avg_length = np.mean(lens) # Euclidian length - - lengths = [len(s) for s in tractogram] - # Keep track of how many transitions were gathered - t += sum(lengths) - avg_reward = reward / self.n_actor - - print( - f"Total T: {t+1} Episode Num: {i_episode+1} " - f"Avg len: {avg_length:.3f} Avg. 
reward: " - f"{avg_reward:.3f}") - - # Update monitors - self.train_reward_monitor.update(avg_reward) - self.train_reward_monitor.end_epoch(i_episode) - - i_episode += 1 - if self.use_comet and self.comet_experiment is not None: - self.comet_monitor.update_train( - self.train_reward_monitor, i_episode) - mean_ep_losses = mean_losses(losses) - self.comet_monitor.log_losses(mean_ep_losses, i_episode) - - # Time to do a valid run and display stats - if i_episode % self.log_interval == 0: - - # Validation run - valid_tractogram, valid_reward = \ - valid_tracker.track_and_validate() - scores = self.score_tractogram(valid_tractogram) - - # Display what the network is capable-of "now" - self.log( - valid_tractogram, env, valid_reward, i_episode, - scores=scores) - - self.save_model(alg) - - # Validation run - valid_tractogram, valid_reward = valid_tracker.track_and_validate() - scores = self.score_tractogram(valid_tractogram) - - # Display what the network is capable-of "now" - self.log( - valid_tractogram, env, valid_reward, i_episode, - scores=scores) - - self.save_model(alg) - - def run(self): - """ - Main method where the magic happens - """ - - # Instantiate environment. Actions will be fed to it and new - # states will be returned. 
The environment updates the streamline - # internally - back_env, env = self.get_envs() - back_valid_env, valid_env = self.get_valid_envs() - - # Get example state to define NN input size - self.input_size = env.get_state_size() - self.action_size = env.get_action_size() - - # Voxel size - self.voxel_size = env.get_voxel_size() - - max_traj_length = env.max_nb_steps - - # The RL training algorithm - alg = self.get_alg(max_traj_length) - - # Save hyperparameters to differentiate experiments later - self.save_hyperparameters() - - self.setup_monitors() - - # Setup comet monitors to monitor experiment as it goes along - if self.use_comet: - self.setup_comet() - - # If included, load pretrained policies - if self.load_policy: - alg.policy.load(self.load_policy) - alg.target.load(self.load_policy) - - # Start training ! - self.rl_train(alg, env, back_env, valid_env, back_valid_env) - - torch.cuda.empty_cache() - - -def add_rl_args(parser): - parser.add_argument('--max_ep', default=200000, type=int, - help='Number of episodes to run the training ' - 'algorithm') - parser.add_argument('--log_interval', default=20, type=int, - help='Log statistics, update comet, save the model ' - 'and hyperparameters at n steps') - parser.add_argument('--lr', default=1e-6, type=float, - help='Learning rate') - parser.add_argument('--gamma', default=0.925, type=float, - help='Gamma param for reward discounting') - - add_reward_args(parser) diff --git a/TrackToLearn/experiment/ttl.py b/TrackToLearn/experiment/ttl.py deleted file mode 100644 index eadbb3e..0000000 --- a/TrackToLearn/experiment/ttl.py +++ /dev/null @@ -1,401 +0,0 @@ -import os -import numpy as np - -from os.path import join as pjoin -from typing import Tuple - -from challenge_scoring.metrics.scoring import score_submission -from challenge_scoring.utils.attributes import load_attribs -from dipy.io.stateful_tractogram import Space, StatefulTractogram -from dipy.io.streamline import save_tractogram -from dipy.tracking.metrics 
import length as slength -from nibabel.streamlines import Tractogram - -from TrackToLearn.environments.backward_tracking_env import \ - BackwardTrackingEnvironment -from TrackToLearn.environments.env import BaseEnv -from TrackToLearn.environments.interface_tracking_env import ( - InterfaceNoisyTrackingEnvironment, - InterfaceTrackingEnvironment) -from TrackToLearn.environments.noisy_tracker import ( - BackwardNoisyTrackingEnvironment, - NoisyRetrackingEnvironment, - NoisyTrackingEnvironment) -from TrackToLearn.environments.retracking_env import RetrackingEnvironment -from TrackToLearn.environments.tracking_env import TrackingEnvironment -from TrackToLearn.experiment.experiment import Experiment -from TrackToLearn.utils.utils import LossHistory -from TrackToLearn.utils.comet_monitor import CometMonitor - - -class TrackToLearnExperiment(Experiment): - """ Base class for TrackToLearn experiments, even if they're not actually - RL (such as supervised learning). This "abstract" class provides helper - methods for loading data, displaying stats and everything that is common - to all TrackToLearn experiments - """ - - def run(self): - """ Main method where data is loaded, classes are instantiated, - everything is set up. 
- """ - pass - - def setup_monitors(self): - # RL monitors - self.train_reward_monitor = LossHistory( - "Train Reward - Alignment", "train_reward", self.experiment_path) - self.reward_monitor = LossHistory( - "Reward - Alignment", "reward", self.experiment_path) - self.actor_loss_monitor = LossHistory( - "Loss - Actor Policy Loss", "actor_loss", self.experiment_path) - self.critic_loss_monitor = LossHistory( - "Loss - Critic MSE Loss", "critic_loss", self.experiment_path) - self.len_monitor = LossHistory( - "Length", "length", self.experiment_path) - - # Tractometer monitors - # TODO: Infer the number of bundles from the GT - if self.run_tractometer: - self.vc_monitor = LossHistory( - "Valid Connections", "vc", self.experiment_path) - self.ic_monitor = LossHistory( - "Invalid Connections", "ic", self.experiment_path) - self.nc_monitor = LossHistory( - "Non-Connections", "nc", self.experiment_path) - self.vb_monitor = LossHistory( - "Valid Bundles", "VB", self.experiment_path) - self.ib_monitor = LossHistory( - "Invalid Bundles", "IB", self.experiment_path) - self.ol_monitor = LossHistory( - "Overlap monitor", "ol", self.experiment_path) - - else: - self.vc_monitor = None - self.ic_monitor = None - self.nc_monitor = None - self.vb_monitor = None - self.ib_monitor = None - self.ol_monitor = None - - # Initialize monitors here as the first pass won't include losses - self.actor_loss_monitor.update(0) - self.actor_loss_monitor.end_epoch(0) - self.critic_loss_monitor.update(0) - self.critic_loss_monitor.end_epoch(0) - - def setup_comet(self, prefix=''): - """ Setup comet environment - """ - # The comet object that will handle monitors - self.comet_monitor = CometMonitor( - self.comet_experiment, self.id, self.experiment_path, - prefix, self.render) - - self.comet_monitor.log_parameters(self.hyperparameters) - - def _get_env_dict_and_dto( - self, interface_tracking_env, no_retrack, noisy - ) -> Tuple[dict, dict]: - - env_dto = { - 'dataset_file': self.dataset_file, - 
'subject_id': self.subject_id, - 'interface_seeding': self.interface_seeding, - 'fa_map': self.fa_map, - 'n_signal': self.n_signal, - 'n_dirs': self.n_dirs, - 'step_size': self.step_size, - 'theta': self.theta, - 'min_length': self.min_length, - 'max_length': self.max_length, - 'cmc': self.cmc, - 'asymmetric': self.asymmetric, - 'prob': self.prob, - 'npv': self.npv, - 'rng': self.rng, - 'scoring_data': self.scoring_data, - 'reference': self.reference_file, - 'alignment_weighting': self.alignment_weighting, - 'straightness_weighting': self.straightness_weighting, - 'length_weighting': self.length_weighting, - 'target_bonus_factor': self.target_bonus_factor, - 'exclude_penalty_factor': self.exclude_penalty_factor, - 'angle_penalty_factor': self.angle_penalty_factor, - 'add_neighborhood': self.add_neighborhood, - 'compute_reward': self.compute_reward, - 'device': self.device - } - - if noisy: - class_dict = { - 'tracker': NoisyTrackingEnvironment, - 'back_tracker': BackwardNoisyTrackingEnvironment, - 'retracker': NoisyRetrackingEnvironment, - 'interface_tracking_env': InterfaceNoisyTrackingEnvironment - } - else: - class_dict = { - 'tracker': TrackingEnvironment, - 'back_tracker': BackwardTrackingEnvironment, - 'retracker': RetrackingEnvironment, - 'interface_tracking_env': InterfaceTrackingEnvironment - } - return class_dict, env_dto - - def get_envs(self) -> Tuple[BaseEnv, BaseEnv]: - """ Build environments - - Returns: - -------- - back_env: BaseEnv - Backward environment that will be pre-initialized - with half-streamlines - env: BaseEnv - "Forward" environment only initialized with seeds - """ - - class_dict, env_dto = self._get_env_dict_and_dto( - self.interface_seeding, self.no_retrack, False) - - # Someone with better knowledge of design patterns could probably - # clean this - if self.interface_seeding: - env = class_dict['interface_tracking_env'].from_dataset( - env_dto, 'training') - back_env = None - else: - if self.no_retrack: - env = 
class_dict['tracker'].from_dataset(env_dto, 'training') - back_env = class_dict['back_tracker'].from_env( - env_dto, env) - else: - env = class_dict['tracker'].from_dataset(env_dto, 'training') - back_env = class_dict['retracker'].from_env( - env_dto, env) - - return back_env, env - - def get_valid_envs(self) -> Tuple[BaseEnv, BaseEnv]: - """ Build environments - - Returns: - -------- - back_env: BaseEnv - Backward environment that will be pre-initialized - with half-streamlines - env: BaseEnv - "Forward" environment only initialized with seeds - """ - - class_dict, env_dto = self._get_env_dict_and_dto( - self.interface_seeding, self.no_retrack, True) - - # Someone with better knowledge of design patterns could probably - # clean this - if self.interface_seeding: - env = class_dict['interface_tracking_env'].from_dataset( - env_dto, 'validation') - back_env = None - else: - if self.no_retrack: - env = class_dict['tracker'].from_dataset(env_dto, 'validation') - back_env = class_dict['back_tracker'].from_env( - env_dto, env) - else: - env = class_dict['tracker'].from_dataset(env_dto, 'validation') - back_env = class_dict['retracker'].from_env( - env_dto, env) - - return back_env, env - - def get_tracking_envs(self): - """ Generate environments according to tracking parameters. - - Returns: - -------- - back_env: BaseEnv - Backward environment that will be pre-initialized - with half-streamlines - env: BaseEnv - "Forward" environment only initialized with seeds - """ - - class_dict, env_dto = self._get_env_dict_and_dto( - self.interface_seeding, self.no_retrack, True) - - # Update DTO to include indiv. files instead of hdf5 - env_dto.update({ - 'in_odf': self.in_odf, - 'wm_file': self.wm_file, - 'in_seed': self.in_seed, - 'in_mask': self.in_mask, - 'sh_basis': self.sh_basis, - 'reference': self.in_odf, # reference is inferred from the fODF - # file instead of being passed directly. 
- }) - - # Someone with better knowledge of design patterns could probably - # clean this - if self.interface_seeding: - env = class_dict['interface_tracking_env'].from_files(env_dto) - back_env = None - else: - if self.no_retrack: - env = class_dict['tracker'].from_files(env_dto) - back_env = class_dict['back_tracker'].from_env(env_dto, env) - else: - env = class_dict['tracker'].from_files(env_dto) - back_env = class_dict['retracker'].from_env(env_dto, env) - - return back_env, env - - def score_tractogram(self, tractogram): - - # Load bundle attributes for tractometer - # TODO: No need to load this every time, should only be loaded - # once - gt_bundles_attribs_path = pjoin( - self.scoring_data, 'gt_bundles_attributes.json') - basic_bundles_attribs = load_attribs(gt_bundles_attribs_path) - - filename = self.save_vox_tractogram(tractogram) - - # Score tractogram - scores = score_submission( - filename, - self.scoring_data, - basic_bundles_attribs, - compute_ic_ib=True) - - return scores - - def save_vox_tractogram( - self, - tractogram, - ) -> str: - """ - Saves a tractogram into vox space as generated by Track-to-Learn. - - Parameters - ---------- - tractogram: Tractogram - Tractogram generated at validation time. - - Returns: - -------- - filename: str - Filename of the saved tractogram. - """ - - # Save tractogram so it can be looked at, used by the tractometer - # and more - filename = pjoin( - self.experiment_path, "tractogram_{}_{}_{}.trk".format( - self.experiment, self.id, self.valid_subject_id)) - - # Prune empty streamlines, keep only streamlines that have more - # than the seed. 
- indices = [i for (i, s) in enumerate(tractogram.streamlines) - if len(s) > 1] - streamlines = tractogram.streamlines[indices] - data_per_streamline = tractogram.data_per_streamline[indices] - data_per_point = tractogram.data_per_point[indices] - - sft = StatefulTractogram( - streamlines, - self.reference_file, - Space.VOX, - data_per_streamline=data_per_streamline, - data_per_point=data_per_point) - - save_tractogram(sft, filename, bbox_valid_check=False) - - return filename - - def log( - self, - valid_tractogram: Tractogram, - env: BaseEnv, - valid_reward: float = 0, - i_episode: int = 0, - scores: dict = None, - filename: str = None - ): - """ Print training infos and log metrics to Comet, if - activated. - - Parameters - ---------- - valid_tractogram: Tractogram - Tractogram generated at validation time. - env: BaseEnv - Environment to render the streamlines - valid_reward: float - Sum of rewards obtained during validation. - i_episode: int - ith training episode. - scores: dict - Scores as computed by the tractometer. - filename: - Filename to save a screenshot of the rendered environment. 
- """ - - lens = [slength(s) for s in valid_tractogram.streamlines] - avg_valid_reward = valid_reward / len(lens) - avg_length = np.mean(lens) # Euclidian length - - print('---------------------------------------------------') - print(self.experiment_path) - print('Episode {} \t avg length: {} \t total reward: {}'.format( - i_episode, - avg_length, - avg_valid_reward)) - print('---------------------------------------------------') - - if self.render: - # Save image of tractogram to be displayed in comet - directory = pjoin(self.experiment_path, 'render') - if not os.path.exists(directory): - os.makedirs(directory) - - filename = pjoin( - directory, '{}.png'.format(i_episode)) - env.render( - valid_tractogram, - filename) - - if scores is not None: - self.vc_monitor.update(scores['VC']) - self.ic_monitor.update(scores['IC']) - self.nc_monitor.update(scores['NC']) - self.vb_monitor.update(scores['VB']) - self.ib_monitor.update(scores['IB']) - self.ol_monitor.update(scores['mean_OL']) - - self.vc_monitor.end_epoch(i_episode) - self.ic_monitor.end_epoch(i_episode) - self.nc_monitor.end_epoch(i_episode) - self.vb_monitor.end_epoch(i_episode) - self.ib_monitor.end_epoch(i_episode) - self.ol_monitor.end_epoch(i_episode) - - # Update monitors - self.len_monitor.update(avg_length) - self.len_monitor.end_epoch(i_episode) - - self.reward_monitor.update(avg_valid_reward) - self.reward_monitor.end_epoch(i_episode) - - if self.use_comet and self.comet_experiment is not None: - # Update comet - self.comet_monitor.update( - self.reward_monitor, - self.len_monitor, - self.vc_monitor, - self.ic_monitor, - self.nc_monitor, - self.vb_monitor, - self.ib_monitor, - self.ol_monitor, - i_episode=i_episode) diff --git a/TrackToLearn/experiment/validators.py b/TrackToLearn/experiment/validators.py new file mode 100644 index 0000000..0599a2b --- /dev/null +++ b/TrackToLearn/experiment/validators.py @@ -0,0 +1,9 @@ +class Validator(object): + + def __init__(self): + + self.name = '' + + def 
__call__(self, filename): + + raise NotImplementedError diff --git a/TrackToLearn/oracles/oracle.py b/TrackToLearn/oracles/oracle.py new file mode 100644 index 0000000..f33d1ef --- /dev/null +++ b/TrackToLearn/oracles/oracle.py @@ -0,0 +1,86 @@ +import numpy as np +import torch +from dipy.tracking.streamline import set_number_of_points + +from TrackToLearn.oracles.transformer_oracle import TransformerOracle + + +class OracleSingleton: + _self = None + + def __new__(cls, *args, **kwargs): + if cls._self is None: + print('Instantiating new Oracle, should only happen once.') + cls._self = super().__new__(cls) + return cls._self + + def __init__(self, checkpoint: str, device: str, batch_size=4096): + self.checkpoint = torch.load(checkpoint) + + hyper_parameters = self.checkpoint["hyper_parameters"] + # The model's class is saved in hparams + models = { + 'TransformerOracle': TransformerOracle + } + + # Load it from the checkpoint + self.model = models[hyper_parameters[ + 'name']].load_from_checkpoint(self.checkpoint).to(device) + + self.model.eval() + self.batch_size = batch_size + + self.device = device + + def predict(self, streamlines): + # Total number of predictions to return + N = len(streamlines) + # Placeholders for input and output data + placeholder = torch.zeros( + (self.batch_size, 127, 3), pin_memory=True) + result = torch.zeros((N), dtype=torch.float, device=self.device) + + # Get the first batch + batch = streamlines[:self.batch_size] + N_batch = len(batch) + # Resample streamlines to a fixed number of points to set all + # sequences to the same length + data = set_number_of_points(batch, 128) + # Compute streamline features as the directions between points + dirs = np.diff(data, axis=1) + # Send the directions to pinned memory + placeholder[:N_batch] = torch.from_numpy(dirs) + # Send the pinned memory to GPU asynchronously + input_data = placeholder[:N_batch].to( + self.device, non_blocking=True, 
dtype=torch.float) + i = 0 + + while i <= N // self.batch_size: + start = (i+1) * self.batch_size + end = min(start + self.batch_size, N) + # Prefetch the next batch + if start < end: + batch = streamlines[start:end] + # Resample streamlines to a fixed number of points to set all + # sequences to the same length + data = set_number_of_points(batch, 128) + # Compute streamline features as the directions between points + dirs = np.diff(data, axis=1) + # Put the directions in pinned memory + placeholder[:end-start] = torch.from_numpy(dirs) + + with torch.cuda.amp.autocast(): + with torch.no_grad(): + predictions = self.model(input_data) + result[ + i * self.batch_size: + (i * self.batch_size) + self.batch_size] = predictions + i += 1 + if i >= N // self.batch_size: + break + # Send the pinned memory to GPU asynchronously + input_data = placeholder[:end-start].to( + self.device, non_blocking=True, dtype=torch.float) + + return result.cpu().numpy() diff --git a/TrackToLearn/oracles/transformer_oracle.py b/TrackToLearn/oracles/transformer_oracle.py new file mode 100644 index 0000000..28945de --- /dev/null +++ b/TrackToLearn/oracles/transformer_oracle.py @@ -0,0 +1,186 @@ +import math +import torch + +from torch import nn, Tensor + + +class PositionalEncoding(nn.Module): + """ From + https://pytorch.org/tutorials/beginner/transformer_tutorial.html # noqa E501 + """ + + def __init__( + self, d_model: int, dropout: float = 0.1, max_len: int = 5000 + ): + super().__init__() + self.dropout = nn.Dropout(p=dropout) + + position = torch.arange(max_len).unsqueeze(1) + div_term = torch.exp(torch.arange(0, d_model, 2) + * (-math.log(10000.0) / d_model)) + pe = torch.zeros(max_len, 1, d_model) + pe[:, 0, 0::2] = torch.sin(position * div_term) + pe[:, 0, 1::2] = torch.cos(position * div_term) + self.register_buffer('pe', pe) + + def forward(self, x: Tensor) -> Tensor: + """ + Arguments: + x: Tensor, shape ``[seq_len, batch_size, 
embedding_dim]`` + """ + x = x.permute(1, 0, 2) + x = x + self.pe[:x.size(0)] + x = self.dropout(x) + x = x.permute(1, 0, 2) + return x + + +class TransformerOracle(nn.Module): + + def __init__(self, input_size, output_size, n_head, n_layers, lr): + super(TransformerOracle, self).__init__() + + self.input_size = input_size + self.output_size = output_size + self.lr = lr + self.n_head = n_head + self.n_layers = n_layers + + self.embedding_size = 32 + + self.cls_token = nn.Parameter(torch.randn((3))) + + layer = nn.TransformerEncoderLayer( + self.embedding_size, n_head, batch_first=True) + + self.embedding = nn.Sequential( + *(nn.Linear(3, self.embedding_size), + nn.ReLU())) + + self.pos_encoding = PositionalEncoding( + self.embedding_size, max_len=(input_size//3) + 1) + self.bert = nn.TransformerEncoder(layer, self.n_layers) + self.head = nn.Linear(self.embedding_size, output_size) + + self.sig = nn.Sigmoid() + + def configure_optimizers(self): + optimizer = torch.optim.AdamW(self.parameters(), lr=self.lr) + scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau( + optimizer, patience=2, threshold=0.01, verbose=True) + return { + "optimizer": optimizer, + "lr_scheduler": scheduler, + "monitor": "pred_train_loss" + } + + def forward(self, x): + + N, L, D = x.shape # Batch size, length of sequence, nb. 
of dims + cls_tokens = self.cls_token.repeat(N, 1, 1) + x = torch.cat((cls_tokens, x), dim=1) + x = self.embedding(x) * math.sqrt(self.embedding_size) + + encoding = self.pos_encoding(x) + + hidden = self.bert(encoding) + + y = self.head(hidden[:, 0]) + + y = self.sig(y) + + return y.squeeze(-1) + + @classmethod + def load_from_checkpoint(cls, checkpoint: dict): + + hyper_parameters = checkpoint["hyper_parameters"] + + input_size = hyper_parameters['input_size'] + output_size = hyper_parameters['output_size'] + lr = hyper_parameters['lr'] + n_head = hyper_parameters['n_head'] + n_layers = hyper_parameters['n_layers'] + + model = TransformerOracle( + input_size, output_size, n_head, n_layers, lr) + + model_weights = checkpoint["state_dict"] + + # update keys by dropping `auto_encoder.` + for key in list(model_weights): + model_weights[key] = \ + model_weights.pop(key) + + model.load_state_dict(model_weights) + model.eval() + + return model + +# class TransformerOracle(nn.Module): +# +# def __init__(self, input_size, output_size, n_head, n_layers, lr): +# super(TransformerOracle, self).__init__() +# +# self.input_size = input_size +# self.output_size = output_size +# self.lr = lr +# self.n_head = n_head +# self.n_layers = n_layers +# +# self.embedding_size = 32 +# +# layer = nn.TransformerEncoderLayer( +# self.embedding_size, n_head, batch_first=True) +# +# self.embedding = nn.Sequential( +# *(nn.Linear(3, self.embedding_size), +# nn.ReLU())) +# +# self.pos_encoding = PositionalEncoding( +# self.embedding_size, max_len=input_size//3) +# self.bert = nn.TransformerEncoder(layer, self.n_layers) +# self.head = nn.Linear(self.embedding_size, output_size) +# +# self.sig = nn.Sigmoid() +# +# def forward(self, x): +# x = self.embedding(x) * math.sqrt(self.embedding_size) +# +# encoding = self.pos_encoding(x) +# +# hidden = self.bert(encoding) +# +# pooled = hidden.mean(dim=1) +# +# y = self.head(pooled) +# +# y = self.sig(y) +# +# return y.squeeze(-1) +# +# @classmethod +# 
def load_from_checkpoint(cls, checkpoint: dict): +# +# hyper_parameters = checkpoint["hyper_parameters"] +# +# input_size = hyper_parameters['input_size'] +# output_size = hyper_parameters['output_size'] +# lr = hyper_parameters['lr'] +# n_head = hyper_parameters['n_head'] +# n_layers = hyper_parameters['n_layers'] +# +# model = TransformerOracle( +# input_size, output_size, n_head, n_layers, lr) +# +# model_weights = checkpoint["state_dict"] +# +# # update keys by dropping `auto_encoder.` +# for key in list(model_weights): +# model_weights[key] = \ +# model_weights.pop(key) +# +# model.load_state_dict(model_weights) +# model.eval() +# +# return model diff --git a/TrackToLearn/runners/ttl_track.py b/TrackToLearn/runners/ttl_track.py index 67bd584..387b7c1 100755 --- a/TrackToLearn/runners/ttl_track.py +++ b/TrackToLearn/runners/ttl_track.py @@ -1,4 +1,4 @@ -#! /usr/bin/env python3 +#!/usr/bin/env python3 import argparse import json import nibabel as nib @@ -11,37 +11,29 @@ from os.path import join from dipy.io.utils import get_reference_info, create_tractogram_header +from nibabel.streamlines import detect_format from scilpy.io.utils import (add_overwrite_arg, add_sh_basis_args, assert_inputs_exist, assert_outputs_exist, verify_compression_th) from scilpy.tracking.utils import verify_streamline_length_options -from TrackToLearn.algorithms.a2c import A2C -from TrackToLearn.algorithms.acktr import ACKTR -from TrackToLearn.algorithms.ddpg import DDPG -from TrackToLearn.algorithms.ppo import PPO -from TrackToLearn.algorithms.trpo import TRPO -from TrackToLearn.algorithms.td3 import TD3 -from TrackToLearn.algorithms.sac import SAC from TrackToLearn.algorithms.sac_auto import SACAuto -from TrackToLearn.algorithms.vpg import VPG from TrackToLearn.datasets.utils import MRIDataVolume -from TrackToLearn.experiment.tracker import Tracker -from TrackToLearn.experiment.ttl import TrackToLearnExperiment + +from TrackToLearn.experiment.experiment import Experiment +from 
TrackToLearn.tracking.tracker import Tracker # Define the example model paths from the install folder. # Hackish ? I'm not aware of a better solution but I'm # open to suggestions. _ROOT = os.sep.join(os.path.normpath( os.path.dirname(__file__)).split(os.sep)[:-2]) -DEFAULT_WM_MODEL = os.path.join( - _ROOT, 'example_models', 'SAC_Auto_ISMRM2015_WM') -DEFAULT_INTERFACE_MODEL = os.path.join( - _ROOT, 'example_models', 'SAC_Auto_ISMRM2015_interface') +DEFAULT_MODEL = os.path.join( + _ROOT, 'models') -class TrackToLearnTrack(TrackToLearnExperiment): +class TrackToLearnTrack(Experiment): """ TrackToLearn testing script. Should work on any model trained with a TrackToLearn experiment """ @@ -58,6 +50,7 @@ def __init__( self.in_seed = track_dto['in_seed'] self.in_mask = track_dto['in_mask'] + self.input_wm = track_dto['input_wm'] self.dataset_file = None self.subject_id = None @@ -65,7 +58,11 @@ def __init__( self.reference_file = track_dto['in_mask'] self.out_tractogram = track_dto['out_tractogram'] - self.prob = track_dto['prob'] + self.noise = track_dto['noise'] + + self.binary_stopping_threshold = \ + track_dto['binary_stopping_threshold'] + self.n_actor = track_dto['n_actor'] self.npv = track_dto['npv'] self.min_length = track_dto['min_length'] @@ -75,16 +72,15 @@ def __init__( self.sh_basis = track_dto['sh_basis'] self.save_seeds = track_dto['save_seeds'] - self.run_tractometer = False - self.compute_reward = False + # Tractometer parameters + self.tractometer_validator = False self.scoring_data = None - self.render = False - if not track_dto['cpu'] and not torch.cuda.is_available(): - print('No CUDA installation found. 
Defaulting to CPU tracking.') + self.compute_reward = False + self.render = False self.device = torch.device( - "cuda" if torch.cuda.is_available() and not track_dto['cpu'] + "cuda" if torch.cuda.is_available() else "cpu") self.fa_map = None @@ -95,7 +91,7 @@ def __init__( data=fa_image.get_fdata(), affine_vox2rasmm=fa_image.affine) - self.policy = track_dto['policy'] + self.agent = track_dto['agent'] self.hyperparameters = track_dto['hyperparameters'] with open(self.hyperparameters, 'r') as json_file: @@ -103,23 +99,18 @@ def __init__( self.algorithm = hyperparams['algorithm'] self.step_size = float(hyperparams['step_size']) self.add_neighborhood = hyperparams['add_neighborhood'] - self.voxel_size = float(hyperparams['voxel_size']) + self.voxel_size = hyperparams.get('voxel_size', 2.0) self.theta = hyperparams['max_angle'] - self.alignment_weighting = hyperparams['alignment_weighting'] - self.straightness_weighting = hyperparams['straightness_weighting'] - self.length_weighting = hyperparams['length_weighting'] - self.target_bonus_factor = hyperparams['target_bonus_factor'] - self.exclude_penalty_factor = hyperparams['exclude_penalty_factor'] - self.angle_penalty_factor = hyperparams['angle_penalty_factor'] self.hidden_dims = hyperparams['hidden_dims'] - self.n_signal = hyperparams['n_signal'] self.n_dirs = hyperparams['n_dirs'] - self.interface_seeding = track_dto['interface'] or \ - hyperparams['interface_seeding'] - self.no_retrack = hyperparams.get('no_retrack', False) + self.interface_seeding = hyperparams['interface_seeding'] - self.cmc = hyperparams['cmc'] - self.asymmetric = hyperparams['asymmetric'] + self.alignment_weighting = 0.0 + # Oracle parameters + self.oracle_checkpoint = None + self.oracle_bonus = 0.0 + self.oracle_validator = False + self.oracle_stopping_criterion = False self.random_seed = track_dto['rng_seed'] torch.manual_seed(self.random_seed) @@ -133,42 +124,36 @@ def run(self): """ Main method where the magic happens """ + # Presume iso 
vox + ref_img = nib.load(self.reference_file) + tracking_voxel_size = ref_img.header.get_zooms()[0] + + # # Set the voxel size so the agent traverses the same "quantity" of + # # voxels per step as during training. + step_size_mm = self.step_size + if abs(float(tracking_voxel_size) - float(self.voxel_size)) >= 0.1: + step_size_mm = ( + float(tracking_voxel_size) / float(self.voxel_size)) * \ + self.step_size + + print("Agent was trained on a voxel size of {}mm and a " + "step size of {}mm.".format(self.voxel_size, self.step_size)) + + print("Subject has a voxel size of {}mm, setting step size to " + "{}mm.".format(tracking_voxel_size, step_size_mm)) + # Instanciate environment. Actions will be fed to it and new # states will be returned. The environment updates the streamline - print('Loading environment.') - back_env, env = self.get_tracking_envs() + env = self.get_tracking_env() + env.step_size_mm = step_size_mm # Get example state to define NN input size example_state = env.reset(0, 1) self.input_size = example_state.shape[1] self.action_size = env.get_action_size() - # Set the voxel size so the agent traverses the same "quantity" of - # voxels per step as during training. 
- tracking_voxel_size = env.get_voxel_size() - step_size_mm = (tracking_voxel_size / self.voxel_size) * \ - self.step_size - - print("Agent was trained on a voxel size of {}mm and a " - "step size of {}mm.".format(self.voxel_size, self.step_size)) - - print("Subject has a voxel size of {}mm, setting step size to " - "{}mm.".format(tracking_voxel_size, step_size_mm)) - - if back_env: - back_env.set_step_size(step_size_mm) - env.set_step_size(step_size_mm) - # Load agent - algs = {'VPG': VPG, - 'A2C': A2C, - 'ACKTR': ACKTR, - 'PPO': PPO, - 'TRPO': TRPO, - 'DDPG': DDPG, - 'TD3': TD3, - 'SAC': SAC, - 'SACAuto': SACAuto} + algs = {'SACAuto': SACAuto} rl_alg = algs[self.algorithm] print('Tracking with {} agent.'.format(self.algorithm)) @@ -182,27 +167,25 @@ def run(self): device=self.device) # Load pretrained policies - alg.policy.load(self.policy, 'last_model_state') + alg.agent.load(self.agent, 'last_model_state') # Initialize Tracker, which will handle streamline generation tracker = Tracker( - alg, env, back_env, self.n_actor, self.interface_seeding, - self.no_retrack, compress=self.compress, + alg, self.n_actor, compress=self.compress, + min_length=self.min_length, max_length=self.max_length, save_seeds=self.save_seeds) # Run tracking - tractogram = tracker.track() + env.load_subject() + filetype = detect_format(self.out_tractogram) + tractogram = tracker.track(env, filetype) - tractogram.affine_to_rasmm = env.affine_vox2rasmm - - filetype = nib.streamlines.detect_format(self.out_tractogram) - reference = get_reference_info(self.wm_file) + reference = get_reference_info(self.reference_file) header = create_tractogram_header(filetype, *reference) # Use generator to save the streamlines on-the-fly nib.streamlines.save(tractogram, self.out_tractogram, header=header) - # print('Saved {} streamlines'.format(len(tractogram))) def add_mandatory_options_tracking(p): @@ -212,12 +195,16 @@ def add_mandatory_options_tracking(p): 'fODF.\nCan be of any order and basis (including 
"full' '" bases for\nasymmetric ODFs). See also --sh_basis.') p.add_argument('in_seed', - help='Seeding mask (.nii.gz).') + help='Seeding mask (.nii.gz). Must be represent the WM/GM ' + 'interface.') p.add_argument('in_mask', help='Tracking mask (.nii.gz).\n' 'Tracking will stop outside this mask.') p.add_argument('out_tractogram', help='Tractogram output file (must be .trk or .tck).') + p.add_argument('--input_wm', action='store_true', + help='If set, append the WM mask to the input signal. The ' + 'agent must have been trained accordingly.') def add_out_options(p): @@ -246,37 +233,26 @@ def add_track_args(parser): add_out_options(parser) agent_group = parser.add_argument_group('Tracking agent options') - agent_group.add_argument('--policy', type=str, + agent_group.add_argument('--agent', type=str, help='Path to the folder containing .pth files.\n' 'If not set, will default to the example ' 'models.\n' - '[{}]'.format(DEFAULT_WM_MODEL)) + '[{}]'.format(DEFAULT_MODEL)) agent_group.add_argument( '--hyperparameters', type=str, help='Path to the .json file containing the ' 'hyperparameters of your tracking agent. \n' 'If not set, will default to the example models.\n' - '[{}]'.format(DEFAULT_INTERFACE_MODEL)) + '[{}]'.format(DEFAULT_MODEL)) agent_group.add_argument('--n_actor', type=int, default=10000, metavar='N', help='Number of streamlines to track simultaneous' 'ly.\nLimited by the size of your GPU and RAM. 
A ' 'higher value\nwill speed up tracking up to a ' 'point [%(default)s].') - agent_group.add_argument('--cpu', action='store_true', - help='Use CPU for tracking.\n' - 'Defaults to tracking on GPU without this ' - 'flag.') - seed_group = parser.add_argument_group('Seeding options') seed_group.add_argument('--npv', type=int, default=1, help='Number of seeds per voxel [%(default)s].') - seed_group.add_argument('--interface', action='store_true', - help='If set, tracking will be presumed to be ' - 'initialized at the WM/GM\ninterface and omits ' - 'the "retracking" phase".\n**Be mindful to ' - 'provide the proper seeding mask.**\n' - 'Defaults to WM seeding without this flag.') track_g = parser.add_argument_group('Tracking options') track_g.add_argument('--min_length', type=float, default=10., metavar='m', @@ -286,33 +262,34 @@ def add_track_args(parser): metavar='M', help='Maximum length of a streamline in mm. ' '[%(default)s]') - track_g.add_argument('--prob', type=float, default=0.0, metavar='sigma', - help='Add noise ~ N (0, `prob`) to the agent\'s\n' + track_g.add_argument('--noise', default=0.0, type=float, metavar='sigma', + help='Add noise ~ N (0, `noise`) to the agent\'s\n' 'output to make tracking more probabilistic.\n' - 'Around 0.1 generally gives good results ' - '[%(default)s].') + 'Should be between 0.0 and 0.1. ' + '[%(default)s]') track_g.add_argument('--fa_map', type=str, default=None, - help='Scale the added noise (see `--prob`) according' + help='Scale the added noise (see `--noise`) according' '\nto the provided FA map (.nii.gz). 
Optional.') + track_g.add_argument( + '--binary_stopping_threshold', + type=float, default=0.1, + help='Lower limit for interpolation of tracking mask value.\n' + 'Tracking will stop below this threshold.') parser.add_argument('--rng_seed', default=1337, type=int, help='Random number generator seed [%(default)s].') -def verify_policy_option(parser, args): +def verify_agent_option(parser, args): - if (args.policy is not None and args.hyperparameters is None) or \ - (args.policy is None and args.hyperparameters is not None): - parser.error('You must specify both --policy and --hyperparameters ' - 'arguments.') + if (args.agent is not None and args.hyperparameters is None) or \ + (args.agent is None and args.hyperparameters is not None): + parser.error('You must specify both --agent and --hyperparameters ' + 'arguments or use the default model.') - if args.interface and args.policy is None: - args.policy = DEFAULT_INTERFACE_MODEL - args.hyperparameters = join( - DEFAULT_INTERFACE_MODEL, 'hyperparameters.json') - elif args.policy is None: - args.policy = DEFAULT_WM_MODEL + if args.agent is None: + args.agent = DEFAULT_MODEL args.hyperparameters = join( - DEFAULT_WM_MODEL, 'hyperparameters.json') + DEFAULT_MODEL, 'hyperparameters.json') def parse_args(): @@ -333,7 +310,7 @@ def parse_args(): verify_streamline_length_options(parser, args) verify_compression_th(args.compress) - verify_policy_option(parser, args) + verify_agent_option(parser, args) return args diff --git a/TrackToLearn/runners/ttl_track_from_hdf5.py b/TrackToLearn/runners/ttl_track_from_hdf5.py new file mode 100755 index 0000000..435e998 --- /dev/null +++ b/TrackToLearn/runners/ttl_track_from_hdf5.py @@ -0,0 +1,226 @@ +#!/usr/bin/env python +import argparse +import json +import nibabel as nib +import numpy as np +import random +import torch + +from argparse import RawTextHelpFormatter +from os.path import join + +from dipy.io.utils import get_reference_info, create_tractogram_header +from 
nibabel.streamlines import detect_format + +from TrackToLearn.algorithms.sac_auto import SACAuto +from TrackToLearn.datasets.utils import MRIDataVolume +from TrackToLearn.experiment.experiment import ( + add_experiment_args, + add_model_args, + add_oracle_args, + add_reward_args, + add_tracking_args, + add_tractometer_args) +from TrackToLearn.tracking.tracker import Tracker +from TrackToLearn.experiment.experiment import Experiment + + +class TrackToLearnValidation(Experiment): + """ TrackToLearn validation script. Should work on any model trained with a + TrackToLearn experiment. This runs tracking on a dataset (hdf5). + + TODO: Make this script as robust as the tracking. + """ + + def __init__( + self, + # Dataset params + valid_dto, + ): + """ + """ + self.experiment_path = valid_dto['path'] + self.experiment = valid_dto['experiment'] + self.id = valid_dto['id'] + self.render = False + + self.valid_dataset_file = self.dataset_file = valid_dto['dataset_file'] + + self.prob = valid_dto['prob'] + self.noise = valid_dto['noise'] + self.agent = valid_dto['agent'] + self.n_actor = valid_dto['n_actor'] + self.npv = valid_dto['npv'] + self.min_length = valid_dto['min_length'] + self.max_length = valid_dto['max_length'] + + self.alignment_weighting = valid_dto['alignment_weighting'] + # Oracle parameters + self.oracle_checkpoint = valid_dto['oracle_checkpoint'] + self.oracle_bonus = valid_dto['oracle_bonus'] + self.oracle_validator = valid_dto['oracle_validator'] + self.oracle_stopping_criterion = \ + valid_dto['oracle_stopping_criterion'] + + # Tractometer parameters + self.tractometer_validator = valid_dto['tractometer_validator'] + self.tractometer_dilate = valid_dto['tractometer_dilate'] + + self.scoring_data = valid_dto['scoring_data'] + + self.compute_reward = True + + self.fa_map = None + if valid_dto['fa_map'] is not None: + fa_image = nib.load( + valid_dto['fa_map']) + self.fa_map = MRIDataVolume( + data=fa_image.get_fdata(), + affine_vox2rasmm=fa_image.affine) + 
+ with open(valid_dto['hyperparameters'], 'r') as json_file: + hyperparams = json.load(json_file) + self.algorithm = hyperparams['algorithm'] + self.step_size = float(hyperparams['step_size']) + self.add_neighborhood = hyperparams['add_neighborhood'] + self.voxel_size = float(hyperparams['voxel_size']) + self.theta = hyperparams['max_angle'] + self.epsilon = hyperparams.get('max_angular_error', 90) + self.hidden_dims = hyperparams['hidden_dims'] + self.n_signal = hyperparams['n_signal'] + self.n_dirs = hyperparams['n_dirs'] + self.interface_seeding = hyperparams['interface_seeding'] + self.cmc = hyperparams.get('cmc', False) + self.binary_stopping_threshold = hyperparams.get( + 'binary_stopping_threshold', 0.5) + self.asymmetric = hyperparams.get('asymmetric', False) + self.no_retrack = hyperparams.get('no_retrack', False) + self.action_type = hyperparams.get("action_type", "cartesian") + self.action_size = hyperparams.get("action_size", 3) + + self.comet_experiment = None + + self.device = torch.device( + "cuda" if torch.cuda.is_available() else "cpu") + + self.random_seed = valid_dto['rng_seed'] + torch.manual_seed(self.random_seed) + np.random.seed(self.random_seed) + self.rng = np.random.RandomState(seed=self.random_seed) + random.seed(self.random_seed) + + def run(self): + """ + Main method where the magic happens + """ + # Instantiate environment. Actions will be fed to it and new + # states will be returned. The environment updates the streamline + # internally + + env = self.get_valid_env() + + # Get example state to define NN input size + example_state = env.reset(0, 1) + self.input_size = example_state.shape[1] + self.action_size = env.get_action_size() + + # Set the voxel size so the agent traverses the same "quantity" of + # voxels per step as during training. 
+ tracking_voxel_size = env.get_voxel_size() + step_size_mm = (tracking_voxel_size / self.voxel_size) * \ + self.step_size + + print("Agent was trained on a voxel size of {}mm and a " + "step size of {}mm.".format(self.voxel_size, self.step_size)) + + print("Subject has a voxel size of {}mm, setting step size to " + "{}mm.".format(tracking_voxel_size, step_size_mm)) + + env.set_step_size(step_size_mm) + + # Load agent + algs = {'SACAuto': SACAuto} + + rl_alg = algs[self.algorithm] + + # The RL training algorithm + alg = rl_alg( + self.input_size, + self.action_size, + self.hidden_dims, + n_actors=self.n_actor, + rng=self.rng, + device=self.device) + + # Load pretrained policies + alg.agent.load(self.agent, 'last_model_state') + + tracker = Tracker( + alg, self.n_actor, compress=0.0, + min_length=self.min_length, max_length=self.max_length, + save_seeds=False) + + out = join(self.experiment_path, "tractogram_{}_{}_{}.tck".format( + self.experiment, self.id, env.subject_id)) + + # Run tracking + filetype = detect_format(out) + env.load_subject() + tractogram = tracker.track(env, filetype) + + reference = get_reference_info(env.reference) + + header = create_tractogram_header(filetype, *reference) + + # Use generator to save the streamlines on-the-fly + nib.streamlines.save(tractogram, out, header=header) + # print('Saved {} streamlines'.format(len(tractogram))) + + +def add_valid_args(parser): + parser.add_argument('dataset_file', + help='Path to preprocessed dataset file (.hdf5)') + parser.add_argument('agent', + help='Path to the policy') + parser.add_argument('subject_id', type=str, default=None, + help='Subject in HDF5 to track on.') + parser.add_argument('hyperparameters', + help='File containing the hyperparameters for the ' + 'experiment') + parser.add_argument('--fa_map', type=str, default=None, + help='FA map to influence STD for probabilistic ' + + 'tracking') + + +def parse_args(): + """ Generate a tractogram from a trained recurrent model. 
""" + parser = argparse.ArgumentParser( + description=parse_args.__doc__, + formatter_class=RawTextHelpFormatter) + + add_experiment_args(parser) + add_model_args(parser) + add_reward_args(parser) + add_valid_args(parser) + add_tractometer_args(parser) + add_oracle_args(parser) + add_tracking_args(parser) + + arguments = parser.parse_args() + return arguments + + +def main(): + """ Main tracking script """ + args = parse_args() + print(args) + experiment = TrackToLearnValidation( + # Dataset params + vars(args), + ) + + experiment.run() + + +if __name__ == '__main__': + main() diff --git a/TrackToLearn/runners/ttl_validation.py b/TrackToLearn/runners/ttl_validation.py deleted file mode 100755 index 450f231..0000000 --- a/TrackToLearn/runners/ttl_validation.py +++ /dev/null @@ -1,303 +0,0 @@ -#!/usr/bin/env python -import argparse -import json -import nibabel as nib -import numpy as np -import random -import torch - -from argparse import RawTextHelpFormatter -from os.path import join as pjoin - -from dipy.tracking.metrics import length as slength -from dipy.io.stateful_tractogram import Space, StatefulTractogram -from dipy.io.utils import get_reference_info, create_tractogram_header - -from TrackToLearn.algorithms.a2c import A2C -from TrackToLearn.algorithms.acktr import ACKTR -from TrackToLearn.algorithms.ddpg import DDPG -from TrackToLearn.algorithms.ppo import PPO -from TrackToLearn.algorithms.trpo import TRPO -from TrackToLearn.algorithms.td3 import TD3 -from TrackToLearn.algorithms.sac import SAC -from TrackToLearn.algorithms.sac_auto import SACAuto -from TrackToLearn.algorithms.vpg import VPG -from TrackToLearn.datasets.utils import MRIDataVolume -from TrackToLearn.experiment.experiment import ( - add_environment_args, - add_experiment_args, - add_model_args, - add_tracking_args) -from TrackToLearn.experiment.tracker import Tracker -from TrackToLearn.experiment.ttl import TrackToLearnExperiment - - -class TrackToLearnValidation(TrackToLearnExperiment): - """ 
TrackToLearn validing script. Should work on any model trained with a - TrackToLearn experiment. This runs tracking on a dataset (hdf5). - - TODO: Make this script as robust as the tracking. - """ - - def __init__( - self, - # Dataset params - valid_dto, - ): - """ - """ - self.experiment_path = valid_dto['path'] - self.experiment = valid_dto['experiment'] - self.id = valid_dto['id'] - self.render = False - - self.valid_dataset_file = self.dataset_file = valid_dto['dataset_file'] - self.valid_subject_id = self.subject_id = valid_dto['subject_id'] - self.reference_file = valid_dto['reference_file'] - self.scoring_data = valid_dto['scoring_data'] - self.prob = valid_dto['prob'] - self.policy = valid_dto['policy'] - self.n_actor = valid_dto['n_actor'] - self.npv = valid_dto['npv'] - self.min_length = valid_dto['min_length'] - self.max_length = valid_dto['max_length'] - self.compute_reward = False - self.run_tractometer = self.scoring_data is not None - - self.fa_map = None - if valid_dto['fa_map'] is not None: - fa_image = nib.load( - valid_dto['fa_map']) - self.fa_map = MRIDataVolume( - data=fa_image.get_fdata(), - affine_vox2rasmm=fa_image.affine) - - with open(valid_dto['hyperparameters'], 'r') as json_file: - hyperparams = json.load(json_file) - self.algorithm = hyperparams['algorithm'] - self.step_size = float(hyperparams['step_size']) - self.add_neighborhood = hyperparams['add_neighborhood'] - self.voxel_size = float(hyperparams['voxel_size']) - self.theta = hyperparams['max_angle'] - self.alignment_weighting = hyperparams['alignment_weighting'] - self.straightness_weighting = hyperparams['straightness_weighting'] - self.length_weighting = hyperparams['length_weighting'] - self.target_bonus_factor = hyperparams['target_bonus_factor'] - self.exclude_penalty_factor = hyperparams['exclude_penalty_factor'] - self.angle_penalty_factor = hyperparams['angle_penalty_factor'] - self.hidden_dims = hyperparams['hidden_dims'] - self.n_signal = hyperparams['n_signal'] - 
self.n_dirs = hyperparams['n_dirs'] - self.interface_seeding = hyperparams['interface_seeding'] - self.cmc = hyperparams.get('cmc', False) - self.asymmetric = hyperparams.get('asymmetric', False) - self.no_retrack = hyperparams.get('no_retrack', False) - - self.comet_experiment = None - self.remove_invalid_streamlines = valid_dto[ - 'remove_invalid_streamlines'] - - self.device = torch.device( - "cuda" if torch.cuda.is_available() and not valid_dto['cpu'] - else "cpu") - - self.random_seed = valid_dto['rng_seed'] - torch.manual_seed(self.random_seed) - np.random.seed(self.random_seed) - self.rng = np.random.RandomState(seed=self.random_seed) - random.seed(self.random_seed) - - def clean_tractogram(self, tractogram, affine_vox2mask): - """ - Remove potential "non-connections" by filtering according to - curvature, length and mask - - Parameters: - ----------- - tractogram: Tractogram - Full tractogram - - Returns: - -------- - tractogram: Tractogram - Filtered tractogram - """ - print('Cleaning tractogram ... 
', end='', flush=True) - tractogram = tractogram.to_world() - - streamlines = tractogram.streamlines - lengths = [slength(s) for s in streamlines] - # # Filter by curvature - # dirty_mask = is_flag_set( - # stopping_flags, StoppingFlags.STOPPING_CURVATURE) - dirty_mask = np.zeros(len(streamlines)) - - # Filter by length unless the streamline ends in GM - # Example case: Bundle 3 of fibercup tends to be shorter than 35 - - short_lengths = np.asarray( - [lgt <= self.min_length for lgt in lengths]) - - dirty_mask = np.logical_or(short_lengths, dirty_mask) - - long_lengths = np.asarray( - [lgt > self.max_length for lgt in lengths]) - - dirty_mask = np.logical_or(long_lengths, dirty_mask) - - # Filter by loops - # For example: A streamline ending and starting in the same roi - # looping_mask = np.array([winding(s) for s in streamlines]) > 330 - # dirty_mask = np.logical_or( - # dirty_mask, - # looping_mask) - - # Only keep valid streamlines - valid_indices = np.nonzero(np.logical_not(dirty_mask)) - clean_streamlines = streamlines[valid_indices] - clean_dps = tractogram.data_per_streamline[valid_indices] - print('Done !') - - print('Kept {}/{} streamlines'.format(len(valid_indices[0]), - len(streamlines))) - sft = StatefulTractogram( - clean_streamlines, - self.reference_file, - space=Space.RASMM, - data_per_streamline=clean_dps) - # Rest of the code presumes vox space - sft.to_vox() - return sft - - def run(self): - """ - Main method where the magic happens - """ - # Instanciate environment. Actions will be fed to it and new - # states will be returned. The environment updates the streamline - # internally - - back_env, env = self.get_valid_envs() - - # Get example state to define NN input size - example_state = env.reset(0, 1) - self.input_size = example_state.shape[1] - self.action_size = env.get_action_size() - - # Set the voxel size so the agent traverses the same "quantity" of - # voxels per step as during training. 
- tracking_voxel_size = env.get_voxel_size() - step_size_mm = (tracking_voxel_size / self.voxel_size) * \ - self.step_size - - print("Agent was trained on a voxel size of {}mm and a " - "step size of {}mm.".format(self.voxel_size, self.step_size)) - - print("Subject has a voxel size of {}mm, setting step size to " - "{}mm.".format(tracking_voxel_size, step_size_mm)) - - if back_env: - back_env.set_step_size(step_size_mm) - env.set_step_size(step_size_mm) - - # Load agent - algs = {'VPG': VPG, - 'A2C': A2C, - 'ACKTR': ACKTR, - 'PPO': PPO, - 'TRPO': TRPO, - 'DDPG': DDPG, - 'TD3': TD3, - 'SAC': SAC, - 'SACAuto': SACAuto} - - rl_alg = algs[self.algorithm] - - # The RL training algorithm - alg = rl_alg( - self.input_size, - self.action_size, - self.hidden_dims, - n_actors=self.n_actor, - rng=self.rng, - device=self.device) - - # Load pretrained policies - alg.policy.load(self.policy, 'last_model_state') - - # Initialize Tracker, which will handle streamline generation - tracker = Tracker( - alg, env, back_env, self.n_actor, self.interface_seeding, - self.no_retrack, compress=0.0) - - # Run tracking - tractogram = tracker.track() - - tractogram.affine_to_rasmm = env.affine_vox2rasmm - - filename = pjoin( - self.experiment_path, "tractogram_{}_{}_{}.trk".format( - self.experiment, self.id, self.valid_subject_id)) - - filetype = nib.streamlines.detect_format(filename) - reference = get_reference_info(self.reference_file) - header = create_tractogram_header(filetype, *reference) - - # Use generator to save the streamlines on-the-fly - nib.streamlines.save(tractogram, filename, header=header) - - -def add_valid_args(parser): - parser.add_argument('dataset_file', - help='Path to preprocessed datset file (.hdf5)') - parser.add_argument('subject_id', - help='Subject id to fetch from the dataset file') - parser.add_argument('reference_file', - help='Path to binary seeding mask (.nii|.nii.gz)') - parser.add_argument('policy', - help='Path to the policy') - 
parser.add_argument('hyperparameters', - help='File containing the hyperparameters for the ' - 'experiment') - parser.add_argument('--scoring_data', default=None, - help='Path to tractometer files.') - parser.add_argument('--remove_invalid_streamlines', action='store_true') - parser.add_argument('--fa_map', type=str, default=None, - help='FA map to influence STD for probabilistic' + - 'tracking') - parser.add_argument('--valid_theta', type=float, default=None, - help='Max valid angle to override the model\'s own.') - parser.add_argument('--cpu', action='store_true', - help='Use CPU for tracking.') - - -def parse_args(): - """ Generate a tractogram from a trained recurrent model. """ - parser = argparse.ArgumentParser( - description=parse_args.__doc__, - formatter_class=RawTextHelpFormatter) - - add_experiment_args(parser) - add_model_args(parser) - add_valid_args(parser) - add_environment_args(parser) - add_tracking_args(parser) - - arguments = parser.parse_args() - return arguments - - -def main(): - """ Main tracking script """ - args = parse_args() - print(args) - experiment = TrackToLearnValidation( - # Dataset params - vars(args), - ) - - experiment.run() - - -if __name__ == '__main__': - main() diff --git a/TrackToLearn/searchers/a2c_searcher.py b/TrackToLearn/searchers/a2c_searcher.py deleted file mode 100644 index fb89859..0000000 --- a/TrackToLearn/searchers/a2c_searcher.py +++ /dev/null @@ -1,77 +0,0 @@ -#!/usr/bin/env python -import comet_ml # noqa: F401 ugh -import torch - -from TrackToLearn.trainers.a2c_train import ( - parse_args, - A2CTrackToLearnTraining) - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -assert torch.cuda.is_available() - - -def main(): - """ Main tracking script """ - args = parse_args() - print(args) - from comet_ml import Optimizer - - # We only need to specify the algorithm and hyperparameters to use: - config = { - # We pick the Bayes algorithm: - "algorithm": "grid", - - # Declare your hyperparameters 
in the Vizier-inspired format: - "parameters": { - "lr": { - "type": "discrete", - "values": [5e-5, 1e-5, 5e-4, 1e-4, 1e-3, 5e-3]}, - "gamma": { - "type": "discrete", - "values": [0.5, 0.75, 0.85, 0.90, 0.95, 0.99]}, - "entropy_loss_coeff": { - "type": "discrete", - "values": [0.001]}, - "lmbda": { - "type": "discrete", - "values": [0.95]}, - }, - - - # Declare what we will be optimizing, and how: - "spec": { - "metric": "VC", - "objective": "maximize", - }, - } - - # Next, create an optimizer, passing in the config: - opt = Optimizer(config) - - for experiment in opt.get_experiments(project_name=args.experiment): - experiment.auto_metric_logging = False - experiment.workspace = 'TrackToLearn' - experiment.parse_args = False - experiment.disabled = not args.use_comet - - lr = experiment.get_parameter("lr") - gamma = experiment.get_parameter("gamma") - lmbda = experiment.get_parameter("lmbda") - entropy_loss_coeff = experiment.get_parameter("entropy_loss_coeff") - - arguments = vars(args) - arguments.update({ - 'lr': lr, - 'gamma': gamma, - 'lmbda': lmbda, - 'entropy_loss_coeff': entropy_loss_coeff, - }) - a2c_experiment = A2CTrackToLearnTraining( - arguments, - experiment, - ) - a2c_experiment.run() - - -if __name__ == '__main__': - main() diff --git a/TrackToLearn/searchers/acktr_searcher.py b/TrackToLearn/searchers/acktr_searcher.py deleted file mode 100644 index c93acc3..0000000 --- a/TrackToLearn/searchers/acktr_searcher.py +++ /dev/null @@ -1,87 +0,0 @@ -#!/usr/bin/env python -import comet_ml # noqa: F401 ugh -import traceback -import torch - -from TrackToLearn.trainers.acktr_train import ( - parse_args, - ACKTRTrackToLearnTraining) - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -assert torch.cuda.is_available() - - -def main(): - """ Main tracking script """ - args = parse_args() - print(args) - from comet_ml import Optimizer - - # We only need to specify the algorithm and hyperparameters to use: - config = { - # We pick the Bayes 
algorithm: - "algorithm": "grid", - - # Declare your hyperparameters in the Vizier-inspired format: - "parameters": { - "lr": { - "type": "discrete", - "values": [0.01, 0.1, 0.15, 0.2, 0.25]}, - "gamma": { - "type": "discrete", - "values": [0.5, 0.75, 0.85, 0.90, 0.95, 0.99]}, - "delta": { - "type": "discrete", - "values": [0.0001, 0.0005, 0.001, 0.005, 0.01]}, - "entropy_loss_coeff": { - "type": "discrete", - "values": [0.001]}, - "lmbda": { - "type": "discrete", - "values": [0.95]}, - }, - # Declare what we will be optimizing, and how: - "spec": { - "metric": "VC", - "objective": "maximize", - }, - } - - # Next, create an optimizer, passing in the config: - opt = Optimizer(config, project_name=args.experiment) - - for experiment in opt.get_experiments(): - experiment.auto_metric_logging = False - experiment.workspace = 'TrackToLearn' - experiment.parse_args = False - experiment.disabled = not args.use_comet - - lr = experiment.get_parameter("lr") - gamma = experiment.get_parameter("gamma") - delta = experiment.get_parameter("delta") - lmbda = experiment.get_parameter("lmbda") - entropy_loss_coeff = experiment.get_parameter("entropy_loss_coeff") - - arguments = vars(args) - arguments.update({ - 'lr': lr, - 'gamma': gamma, - 'lmbda': lmbda, - 'entropy_loss_coeff': entropy_loss_coeff, - 'delta': delta - }) - # ACKTR is unstable learning - try: - acktr_experiment = ACKTRTrackToLearnTraining( - arguments, - experiment, - ) - acktr_experiment.run() - except RuntimeError as e: # noqa: F841 - traceback.print_exc() - except ValueError as v: # noqa: F841 - traceback.print_exc() - - -if __name__ == '__main__': - main() diff --git a/TrackToLearn/searchers/ddpg_searcher.py b/TrackToLearn/searchers/ddpg_searcher.py deleted file mode 100644 index 4eabe69..0000000 --- a/TrackToLearn/searchers/ddpg_searcher.py +++ /dev/null @@ -1,73 +0,0 @@ -#!/usr/bin/env python -import comet_ml # noqa: F401 ugh -import torch - -from TrackToLearn.trainers.ddpg_train import ( - parse_args, - 
DDPGTrackToLearnTraining) - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -assert torch.cuda.is_available() - - -def main(): - """ Main tracking script """ - args = parse_args() - print(args) - from comet_ml import Optimizer - - # We only need to specify the algorithm and hyperparameters to use: - config = { - # We pick the Bayes algorithm: - "algorithm": "grid", - - # Declare your hyperparameters in the Vizier-inspired format: - "parameters": { - "lr": { - "type": "discrete", - "values": [5e-5, 1e-5, 5e-4, 1e-4, 1e-3, 5e-3]}, - "gamma": { - "type": "discrete", - "values": [0.5, 0.75, 0.85, 0.90, 0.95, 0.99]}, - "action_std": { - "type": "discrete", - "values": [0.20, 0.25, 0.30, 0.35, 0.40]}, - }, - - # Declare what we will be optimizing, and how: - "spec": { - "metric": "Reward", - "objective": "maximize", - "seed": args.rng_seed, - }, - } - - # Next, create an optimizer, passing in the config: - opt = Optimizer(config, project_name=args.experiment) - - for experiment in opt.get_experiments(): - experiment.auto_metric_logging = False - experiment.workspace = 'TrackToLearn' - experiment.parse_args = False - experiment.disabled = not args.use_comet - - lr = experiment.get_parameter("lr") - gamma = experiment.get_parameter("gamma") - action_std = experiment.get_parameter("action_std") - - arguments = vars(args) - arguments.update({ - 'lr': lr, - 'gamma': gamma, - 'action_std': action_std - }) - - ddpg_experiment = DDPGTrackToLearnTraining( - arguments, - experiment - ) - ddpg_experiment.run() - - -if __name__ == '__main__': - main() diff --git a/TrackToLearn/searchers/ppo_searcher.py b/TrackToLearn/searchers/ppo_searcher.py deleted file mode 100644 index f439d87..0000000 --- a/TrackToLearn/searchers/ppo_searcher.py +++ /dev/null @@ -1,90 +0,0 @@ -#!/usr/bin/env python -import comet_ml # noqa: F401 ugh -import torch - -from TrackToLearn.trainers.ppo_train import ( - parse_args, - PPOTrackToLearnTraining) - -device = torch.device("cuda" if 
torch.cuda.is_available() else "cpu") -assert torch.cuda.is_available() - - -def main(): - """ Main tracking script """ - args = parse_args() - print(args) - from comet_ml import Optimizer - - # We only need to specify the algorithm and hyperparameters to use: - - # We only need to specify the algorithm and hyperparameters to use: - config = { - # We pick the Bayes algorithm: - "algorithm": "grid", - - # Declare your hyperparameters in the Vizier-inspired format: - "parameters": { - "lr": { - "type": "discrete", - "values": [5e-5, 1e-5, 5e-4, 1e-4, 1e-3, 5e-3]}, - "gamma": { - "type": "discrete", - "values": [0.5, 0.75, 0.85, 0.90, 0.95, 0.99]}, - "entropy_loss_coeff": { - "type": "discrete", - "values": [0.001]}, - "lmbda": { - "type": "discrete", - "values": [0.95]}, - "eps_clip": { - "type": "discrete", - "values": [0.05, 0.1, 0.2]}, - "K_epochs": { - "type": "discrete", - "values": [30]}, - }, - - # Declare what we will be optimizing, and how: - "spec": { - "metric": "Reward", - "objective": "maximize", - }, - } - - # Next, create an optimizer, passing in the config: - opt = Optimizer(config, project_name=args.experiment) - - for experiment in opt.get_experiments(): - experiment.auto_metric_logging = False - experiment.workspace = 'TrackToLearn' - experiment.parse_args = False - experiment.disabled = not args.use_comet - - lr = experiment.get_parameter("lr") - gamma = experiment.get_parameter("gamma") - lmbda = experiment.get_parameter("lmbda") - entropy_loss_coeff = experiment.get_parameter("entropy_loss_coeff") - - K_epochs = experiment.get_parameter("K_epochs") - eps_clip = experiment.get_parameter("eps_clip") - - arguments = vars(args) - arguments.update({ - 'lr': lr, - 'gamma': gamma, - 'lmbda': lmbda, - 'entropy_loss_coeff': entropy_loss_coeff, - 'K_epochs': K_epochs, - 'eps_clip': eps_clip - }) - - ppo_experiment = PPOTrackToLearnTraining( - arguments, - experiment, - ) - ppo_experiment.run() - - -if __name__ == '__main__': - main() diff --git 
a/TrackToLearn/searchers/sac_auto_searcher.py b/TrackToLearn/searchers/sac_auto_searcher.py index 94ab7c6..d3ae92d 100644 --- a/TrackToLearn/searchers/sac_auto_searcher.py +++ b/TrackToLearn/searchers/sac_auto_searcher.py @@ -48,7 +48,7 @@ def main(): for experiment in opt.get_experiments(): experiment.auto_metric_logging = False - experiment.workspace = 'TrackToLearn' + experiment.workspace = args.workspace experiment.parse_args = False experiment.disabled = not args.use_comet diff --git a/TrackToLearn/searchers/sac_auto_searcher_len.py b/TrackToLearn/searchers/sac_auto_searcher_len.py deleted file mode 100644 index d128518..0000000 --- a/TrackToLearn/searchers/sac_auto_searcher_len.py +++ /dev/null @@ -1,78 +0,0 @@ -#!/usr/bin/env python -import comet_ml # noqa: F401 ugh -import torch - -from TrackToLearn.trainers.sac_auto_train import ( - parse_args, - SACAutoTrackToLearnTraining) - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -assert torch.cuda.is_available() - - -def main(): - """ Main tracking script """ - args = parse_args() - print(args) - from comet_ml import Optimizer - - # We only need to specify the algorithm and hyperparameters to use: - config = { - # We pick the Bayes algorithm: - "algorithm": "grid", - - # Declare your hyperparameters in the Vizier-inspired format: - "parameters": { - "lr": { - "type": "discrete", - "values": [1e-5, 5e-5, 1e-4, 5e-4, 5e-3, 1e-3]}, - "gamma": { - "type": "discrete", - "values": [0.5, 0.75, 0.85, 0.90, 0.95, 0.99]}, - "alpha": { - "type": "discrete", - "values": [0.2]}, - "length_weighting": { - "type": "discrete", - "values": [0.01, 0.1, 0.5, 1., 5.]}, - }, - - # Declare what we will be optimizing, and how: - "spec": { - "metric": "Reward", - "objective": "maximize", - "seed": args.rng_seed, - }, - } - - # Next, create an optimizer, passing in the config: - opt = Optimizer(config, project_name=args.experiment) - - for experiment in opt.get_experiments(): - experiment.auto_metric_logging = 
False - experiment.workspace = 'TrackToLearn' - experiment.parse_args = False - experiment.disabled = not args.use_comet - - lr = experiment.get_parameter("lr") - gamma = experiment.get_parameter("gamma") - alpha = experiment.get_parameter("alpha") - length_weighting = experiment.get_parameter("length_weighting") - - arguments = vars(args) - arguments.update({ - 'lr': lr, - 'gamma': gamma, - 'alpha': alpha, - 'length_weighting': length_weighting, - }) - - sac_experiment = SACAutoTrackToLearnTraining( - arguments, - experiment - ) - sac_experiment.run() - - -if __name__ == '__main__': - main() diff --git a/TrackToLearn/searchers/vpg_searcher.py b/TrackToLearn/searchers/sac_auto_searcher_oracle.py similarity index 65% rename from TrackToLearn/searchers/vpg_searcher.py rename to TrackToLearn/searchers/sac_auto_searcher_oracle.py index 4df3351..5b7fd4f 100644 --- a/TrackToLearn/searchers/vpg_searcher.py +++ b/TrackToLearn/searchers/sac_auto_searcher_oracle.py @@ -2,9 +2,9 @@ import comet_ml # noqa: F401 ugh import torch -from TrackToLearn.trainers.vpg_train import ( +from TrackToLearn.trainers.sac_auto_train import ( parse_args, - VPGTrackToLearnTraining) + SACAutoTrackToLearnTraining) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") assert torch.cuda.is_available() @@ -23,22 +23,21 @@ def main(): # Declare your hyperparameters in the Vizier-inspired format: "parameters": { - "lr": { - "type": "discrete", - "values": [5e-5, 1e-5, 5e-4, 1e-4, 1e-3, 5e-3]}, "gamma": { "type": "discrete", - "values": [0.5, 0.75, 0.85, 0.90, 0.95, 0.99]}, - "entropy_loss_coeff": { + "values": [0.90, 0.95, 0.98, 0.99]}, + "oracle_bonus": { "type": "discrete", - "values": [0.001]}, - }, - + "values": [1.0, 5.0, 7.0, 10.0]} + }, # Declare what we will be optimizing, and how: "spec": { "metric": "VC", "objective": "maximize", + "seed": args.rng_seed, + "retryLimit": 3, + "retryAssignLimit": 3, }, } @@ -47,25 +46,24 @@ def main(): for experiment in 
opt.get_experiments(project_name=args.experiment): experiment.auto_metric_logging = False - experiment.workspace = 'TrackToLearn' + experiment.workspace = args.workspace experiment.parse_args = False experiment.disabled = not args.use_comet - lr = experiment.get_parameter("lr") + oracle_bonus = experiment.get_parameter("oracle_bonus") gamma = experiment.get_parameter("gamma") - entropy_loss_coeff = experiment.get_parameter("entropy_loss_coeff") arguments = vars(args) arguments.update({ - 'lr': lr, + 'oracle_bonus': oracle_bonus, 'gamma': gamma, - 'entropy_loss_coeff': entropy_loss_coeff, }) - vpg_experiment = VPGTrackToLearnTraining( + + sac_experiment = SACAutoTrackToLearnTraining( arguments, - experiment, + experiment ) - vpg_experiment.run() + sac_experiment.run() if __name__ == '__main__': diff --git a/TrackToLearn/searchers/sac_auto_searcher_target.py b/TrackToLearn/searchers/sac_auto_searcher_target.py deleted file mode 100644 index 01214f3..0000000 --- a/TrackToLearn/searchers/sac_auto_searcher_target.py +++ /dev/null @@ -1,78 +0,0 @@ -#!/usr/bin/env python -import comet_ml # noqa: F401 ugh -import torch - -from TrackToLearn.trainers.sac_auto_train import ( - parse_args, - SACAutoTrackToLearnTraining) - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -assert torch.cuda.is_available() - - -def main(): - """ Main tracking script """ - args = parse_args() - print(args) - from comet_ml import Optimizer - - # We only need to specify the algorithm and hyperparameters to use: - config = { - # We pick the Bayes algorithm: - "algorithm": "grid", - - # Declare your hyperparameters in the Vizier-inspired format: - "parameters": { - "lr": { - "type": "discrete", - "values": [1e-5, 5e-5, 1e-4, 5e-4, 5e-3, 1e-3]}, - "gamma": { - "type": "discrete", - "values": [0.5, 0.75, 0.85, 0.90, 0.95, 0.99]}, - "alpha": { - "type": "discrete", - "values": [0.2]}, - "target_weighting": { - "type": "discrete", - "values": [1., 10., 100.]}, - }, - - # Declare what 
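The searchers above all follow the same pattern: declare a discrete hyperparameter grid, let Comet's `Optimizer` enumerate it, and launch one training run per combination. Stripped of the Comet dependency, the grid expansion itself can be sketched with the standard library (the parameter values mirror the oracle searcher's config; the runner is omitted):

```python
from itertools import product

# Discrete grid mirroring sac_auto_searcher_oracle's "parameters" block.
parameters = {
    "gamma": [0.90, 0.95, 0.98, 0.99],
    "oracle_bonus": [1.0, 5.0, 7.0, 10.0],
}


def expand_grid(parameters):
    """Yield one dict per combination, like Optimizer.get_experiments()."""
    keys = list(parameters)
    for values in product(*(parameters[k] for k in keys)):
        yield dict(zip(keys, values))


combinations = list(expand_grid(parameters))
print(len(combinations))  # 4 gammas x 4 bonuses = 16 runs
```

Comet's Bayesian and random search strategies sample this space differently, but with `"algorithm": "grid"` the searcher visits exactly this Cartesian product.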
we will be optimizing, and how: - "spec": { - "metric": "Reward", - "objective": "maximize", - "seed": args.rng_seed, - }, - } - - # Next, create an optimizer, passing in the config: - opt = Optimizer(config, project_name=args.experiment) - - for experiment in opt.get_experiments(): - experiment.auto_metric_logging = False - experiment.workspace = 'TrackToLearn' - experiment.parse_args = False - experiment.disabled = not args.use_comet - - lr = experiment.get_parameter("lr") - gamma = experiment.get_parameter("gamma") - alpha = experiment.get_parameter("alpha") - target_weighting = experiment.get_parameter("target_weighting") - - arguments = vars(args) - arguments.update({ - 'lr': lr, - 'gamma': gamma, - 'alpha': alpha, - 'target_bonus_factor': target_weighting, - }) - - sac_experiment = SACAutoTrackToLearnTraining( - arguments, - experiment - ) - sac_experiment.run() - - -if __name__ == '__main__': - main() diff --git a/TrackToLearn/searchers/sac_searcher.py b/TrackToLearn/searchers/sac_searcher.py deleted file mode 100644 index 182d34d..0000000 --- a/TrackToLearn/searchers/sac_searcher.py +++ /dev/null @@ -1,73 +0,0 @@ -#!/usr/bin/env python -import comet_ml # noqa: F401 ugh -import torch - -from TrackToLearn.trainers.sac_train import ( - parse_args, - SACTrackToLearnTraining) - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -assert torch.cuda.is_available() - - -def main(): - """ Main tracking script """ - args = parse_args() - print(args) - from comet_ml import Optimizer - - # We only need to specify the algorithm and hyperparameters to use: - config = { - # We pick the Bayes algorithm: - "algorithm": "grid", - - # Declare your hyperparameters in the Vizier-inspired format: - "parameters": { - "lr": { - "type": "discrete", - "values": [5e-5, 1e-5, 5e-4, 1e-4, 1e-3, 5e-3]}, - "gamma": { - "type": "discrete", - "values": [0.5, 0.75, 0.85, 0.90, 0.95, 0.99]}, - "alpha": { - "type": "discrete", - "values": [0.075, 0.1, 0.15, 0.2, 0.3]}, - }, 
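Every searcher injects the sampled hyperparameters into the parsed CLI namespace through `arguments = vars(args)` followed by `arguments.update(...)`. This works because `vars()` returns the namespace's own `__dict__`, so updating the dict mutates `args` in place. A minimal demonstration (the argument names are illustrative):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--lr', type=float, default=1e-3)
parser.add_argument('--gamma', type=float, default=0.99)
args = parser.parse_args([])

# vars(args) aliases args.__dict__, so this update mutates args in place,
# exactly as the searchers do before building the training experiment.
arguments = vars(args)
arguments.update({'lr': 5e-4, 'gamma': 0.95})

print(args.lr, args.gamma)  # 0.0005 0.95
```

The training classes then receive `arguments` as a plain dict, with the sampled values overriding the CLI defaults.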
- - # Declare what we will be optimizing, and how: - "spec": { - "metric": "Reward", - "objective": "maximize", - "seed": args.rng_seed, - }, - } - - # Next, create an optimizer, passing in the config: - opt = Optimizer(config, project_name=args.experiment) - - for experiment in opt.get_experiments(): - experiment.auto_metric_logging = False - experiment.workspace = 'TrackToLearn' - experiment.parse_args = False - experiment.disabled = not args.use_comet - - lr = experiment.get_parameter("lr") - gamma = experiment.get_parameter("gamma") - alpha = experiment.get_parameter("alpha") - - arguments = vars(args) - arguments.update({ - 'lr': lr, - 'gamma': gamma, - 'alpha': alpha - }) - - sac_experiment = SACTrackToLearnTraining( - arguments, - experiment - ) - sac_experiment.run() - - -if __name__ == '__main__': - main() diff --git a/TrackToLearn/searchers/td3_searcher.py b/TrackToLearn/searchers/td3_searcher.py deleted file mode 100644 index 02d994c..0000000 --- a/TrackToLearn/searchers/td3_searcher.py +++ /dev/null @@ -1,73 +0,0 @@ -#!/usr/bin/env python -import comet_ml # noqa: F401 ugh -import torch - -from TrackToLearn.trainers.td3_train import ( - parse_args, - TD3TrackToLearnTraining) - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -assert torch.cuda.is_available() - - -def main(): - """ Main tracking script """ - args = parse_args() - print(args) - from comet_ml import Optimizer - - # We only need to specify the algorithm and hyperparameters to use: - config = { - # We pick the Bayes algorithm: - "algorithm": "grid", - - # Declare your hyperparameters in the Vizier-inspired format: - "parameters": { - "lr": { - "type": "discrete", - "values": [5e-5, 1e-5, 5e-4, 1e-4, 1e-3, 5e-3]}, - "gamma": { - "type": "discrete", - "values": [0.5, 0.75, 0.85, 0.90, 0.95, 0.99]}, - "action_std": { - "type": "discrete", - "values": [0.20, 0.25, 0.30, 0.35, 0.40]}, - }, - - # Declare what we will be optimizing, and how: - "spec": { - "metric": "Reward", - 
"objective": "maximize", - "seed": args.rng_seed, - }, - } - - # Next, create an optimizer, passing in the config: - opt = Optimizer(config, project_name=args.experiment) - - for experiment in opt.get_experiments(): - experiment.auto_metric_logging = False - experiment.workspace = 'TrackToLearn' - experiment.parse_args = False - experiment.disabled = not args.use_comet - - lr = experiment.get_parameter("lr") - gamma = experiment.get_parameter("gamma") - action_std = experiment.get_parameter("action_std") - - arguments = vars(args) - arguments.update({ - 'lr': lr, - 'gamma': gamma, - 'action_std': action_std - }) - - td3_experiment = TD3TrackToLearnTraining( - arguments, - experiment - ) - td3_experiment.run() - - -if __name__ == '__main__': - main() diff --git a/TrackToLearn/searchers/trpo_searcher.py b/TrackToLearn/searchers/trpo_searcher.py deleted file mode 100644 index 81f8510..0000000 --- a/TrackToLearn/searchers/trpo_searcher.py +++ /dev/null @@ -1,98 +0,0 @@ -#!/usr/bin/env python -import comet_ml # noqa: F401 ugh -import torch -import traceback - -from TrackToLearn.trainers.trpo_train import ( - parse_args, - TRPOTrackToLearnTraining) - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -assert torch.cuda.is_available() - - -def main(): - """ Main tracking script """ - args = parse_args() - print(args) - from comet_ml import Optimizer - - # We only need to specify the algorithm and hyperparameters to use: - - # We only need to specify the algorithm and hyperparameters to use: - config = { - # We pick the Bayes algorithm: - "algorithm": "grid", - - # Declare your hyperparameters in the Vizier-inspired format: - "parameters": { - "lr": { - "type": "discrete", - "values": [5e-5, 1e-5, 5e-4, 1e-4, 1e-3, 5e-3]}, - "gamma": { - "type": "discrete", - "values": [0.5, 0.75, 0.85, 0.90, 0.95, 0.99]}, - "entropy_loss_coeff": { - "type": "discrete", - "values": [0.001]}, - "lmbda": { - "type": "discrete", - "values": [0.95]}, - "delta": { - "type": 
"discrete", - "values": [0.001, 0.01, 0.1]}, - "K_epochs": { - "type": "discrete", - "values": [5]}, - }, - - # Declare what we will be optimizing, and how: - "spec": { - "metric": "VC", - "objective": "maximize", - }, - } - - # Next, create an optimizer, passing in the config: - opt = Optimizer(config) - - for experiment in opt.get_experiments(project_name=args.experiment): - experiment.auto_metric_logging = False - experiment.workspace = 'TrackToLearn' - experiment.parse_args = False - experiment.disabled = not args.use_comet - - lr = experiment.get_parameter("lr") - gamma = experiment.get_parameter("gamma") - lmbda = experiment.get_parameter("lmbda") - entropy_loss_coeff = experiment.get_parameter("entropy_loss_coeff") - - K_epochs = experiment.get_parameter("K_epochs") - delta = experiment.get_parameter("delta") - - arguments = vars(args) - arguments.update({ - 'lr': lr, - 'gamma': gamma, - 'lmbda': lmbda, - 'entropy_loss_coeff': entropy_loss_coeff, - 'K_epochs': K_epochs, - 'delta': delta - }) - - try: - trpo_experiment = TRPOTrackToLearnTraining( - arguments, - experiment, - ) - trpo_experiment.run() - except RuntimeError as e: # noqa: F841 - traceback.print_exc() - except ValueError as v: # noqa: F841 - traceback.print_exc() - except comet_ml.exceptions.InterruptedExperiment: - print('Experiment stopped by user') - - -if __name__ == '__main__': - main() diff --git a/TrackToLearn/tracking/tracker.py b/TrackToLearn/tracking/tracker.py new file mode 100644 index 0000000..14ec638 --- /dev/null +++ b/TrackToLearn/tracking/tracker.py @@ -0,0 +1,259 @@ +import numpy as np + +from collections import defaultdict +from nibabel.streamlines import TrkFile +from tqdm import tqdm +from typing import Tuple + +from dipy.tracking.streamlinespeed import compress_streamlines, length +from nibabel.streamlines import Tractogram +from nibabel.streamlines.tractogram import LazyTractogram +from nibabel.streamlines.tractogram import TractogramItem + +from TrackToLearn.algorithms.rl 
import RLAlgorithm +from TrackToLearn.algorithms.shared.utils import add_to_means +from TrackToLearn.environments.env import BaseEnv + + +class Tracker(object): + """ Tracking class similar to scilpy's or dwi_ml's. This class is + responsible for generating streamlines, as well as giving back training + or RL-associated metrics if applicable. + """ + + def __init__( + self, + alg: RLAlgorithm, + n_actor: int, + prob: float = 0., + compress: float = 0.0, + min_length: float = 20, + max_length: float = 200, + save_seeds: bool = False + ): + """ + + Parameters + ---------- + alg: RLAlgorithm + Tracking agent. + n_actor: int + Number of actors to track at once. + prob: float + Factor to influence the output of the agent. + compress: float + Compression factor when saving streamlines. + min_length: float + Minimum length of a streamline. + max_length: float + Maximum length of a streamline. + save_seeds: bool + Save seeds in the tractogram. + """ + + self.alg = alg + self.n_actor = n_actor + self.prob = prob + self.compress = compress + self.min_length = min_length + self.max_length = max_length + self.save_seeds = save_seeds + + def track( + self, + env: BaseEnv, + tracts_format + ): + """ Actual tracking function. Use this if you just want streamlines. + + Track with a generator to save streamlines to file + as they are tracked. Used at tracking (test) time. No + reward should be computed. + + Arguments + --------- + env : BaseEnv + Environment to track in. + tracts_format : TrkFile or TckFile + Tractogram format. + + Returns: + -------- + tractogram: Tractogram + Tractogram in a generator format. 
+ + """ + + batch_size = self.n_actor + + self.alg.agent.eval() + affine = env.affine_vox2rasmm + + # Shuffle seeds so that massive tractograms wont load "sequentially" + # when partially displayed + np.random.shuffle(env.seeds) + + def tracking_generator(): + # Presume iso vox + vox_size = np.mean( + np.abs(affine)[np.diag_indices(4)][:3]) + scaled_min_length = self.min_length / vox_size + scaled_max_length = self.max_length / vox_size + + compress_th_vox = self.compress / vox_size + + # Track for every seed in the environment + for start in tqdm(range(0, len(env.seeds), batch_size)): + # Last batch might not be "full" + end = min(start + batch_size, len(env.seeds)) + + state = env.reset(start, end) + + # Track forward + self.alg.validation_episode( + state, env, self.prob) + + batch_tractogram = env.get_streamlines() + + for item in batch_tractogram: + streamline = item.streamline + if scaled_min_length <= length(streamline) \ + <= scaled_max_length: + + if self.compress: + streamline = compress_streamlines( + streamline, compress_th_vox) + + if tracts_format is TrkFile: + streamline += 0.5 + streamline *= vox_size + else: + # Streamlines are dumped in true world space with + # origin center as expected by .tck files. + streamline = np.dot( + streamline, + affine[:3, :3]) + \ + affine[:3, 3] + + # flag = item.data_for_streamline['flags'] + seed_dict = {} + if self.save_seeds: + seed = item.data_for_streamline['seeds'] + seed_dict = {'seeds': seed-0.5} + + yield TractogramItem( + streamline, seed_dict, {}) + + tractogram = LazyTractogram.from_data_func(tracking_generator) + tractogram.affine_to_rasmm = affine + + return tractogram + + def track_and_train( + self, + env: BaseEnv, + ) -> Tuple[Tractogram, float, float, float]: + """ + Call the main training loop forward then backward. + This can be considered an "epoch". Note that N=self.n_actor + streamlines will be tracked instead of one streamline per seed. 
+ + Parameters + ---------- + env: BaseEnv + Environment to track in. + + Returns + ------- + train_tractogram: Tractogram + Tractogram generated during training. + mean_losses: dict + Mean losses during training. + reward: float + Total reward obtained during training. + mean_reward_factors: dict + Reward separated into its components. + """ + + self.alg.agent.train() + + mean_losses = defaultdict(list) + mean_reward_factors = defaultdict(list) + + # Fetch n=n_actor seeds + state = env.nreset(self.n_actor) + + # Track and train forward + reward, losses, length, reward_factors = \ + self.alg._episode(state, env) + # Get the streamlines generated from forward training + train_tractogram = env.get_streamlines() + + if len(losses.keys()) > 0: + mean_losses = add_to_means(mean_losses, losses) + if len(reward_factors.keys()) > 0: + mean_reward_factors = add_to_means( + mean_reward_factors, reward_factors) + + return ( + train_tractogram, + mean_losses, + reward, + mean_reward_factors) + + def track_and_validate( + self, + env: BaseEnv + ) -> Tuple[Tractogram, float]: + """ + Run the tracking algorithm without training to see how it performs, but + still compute the reward. + + Parameters + ---------- + env: BaseEnv + Environment to track in. + + Returns: + -------- + tractogram: Tractogram + Validation tractogram. + cumulative_reward: float + Total reward obtained during validation. 
+ """ + # Switch policy to eval mode so no gradients are computed + self.alg.agent.eval() + + # Initialize tractogram + tractogram = None + + # Reward accumulated during validation + cumulative_reward = 0 + + def _generate_streamlines_and_rewards(): + + # Track for every seed in the environment + for start in tqdm(range(0, len(env.seeds), self.n_actor)): + + # Last batch might not be "full" + end = min(start + self.n_actor, len(env.seeds)) + + state = env.reset(start, end) + + # Track forward + reward = self.alg.validation_episode( + state, env, self.prob) + + batch_tractogram = env.get_streamlines() + + yield batch_tractogram, reward + + for t, r in _generate_streamlines_and_rewards(): + if tractogram is None and len(t) > 0: + tractogram = t + elif len(t) > 0: + tractogram += t + cumulative_reward += r + + return tractogram, cumulative_reward diff --git a/TrackToLearn/trainers/a2c_train.py b/TrackToLearn/trainers/a2c_train.py deleted file mode 100644 index 9d5f49c..0000000 --- a/TrackToLearn/trainers/a2c_train.py +++ /dev/null @@ -1,131 +0,0 @@ -#!/usr/bin/env python -import argparse -import comet_ml # noqa: F401 ugh -import torch - -from argparse import RawTextHelpFormatter -from comet_ml import Experiment as CometExperiment - -from TrackToLearn.algorithms.a2c import A2C -from TrackToLearn.experiment.experiment import ( - add_data_args, - add_environment_args, - add_experiment_args, - add_model_args, - add_tracking_args) -from TrackToLearn.experiment.train import ( - add_rl_args, - TrackToLearnTraining) -from TrackToLearn.trainers.vpg_train import add_vpg_args - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -assert torch.cuda.is_available() - - -class A2CTrackToLearnTraining(TrackToLearnTraining): - """ - Advantage Actor-Critic experiment. 
- """ - - def __init__( - self, - a2c_train_dto: dict, - comet_experiment: CometExperiment, - ): - """ - Parameters - ---------- - a2c_train_dto: dict - A2C training parameters - comet_experiment: CometExperiment - Allows for logging and experiment management. - """ - - super().__init__( - a2c_train_dto, - comet_experiment, - ) - - # A2C-specific parameters - self.action_std = a2c_train_dto['action_std'] - self.lmbda = a2c_train_dto['lmbda'] - self.entropy_loss_coeff = a2c_train_dto['entropy_loss_coeff'] - - def save_hyperparameters(self): - """ Add A2C-specific hyperparameters to self.hyperparameters - then save to file. - """ - - self.hyperparameters.update( - {'algorithm': 'A2C', - 'action_std': self.action_std, - 'lmbda': self.lmbda, - 'entropy_loss_coeff': self.entropy_loss_coeff}) - - super().save_hyperparameters() - - def get_alg(self, max_nb_steps: int): - # The RL training algorithm - alg = A2C( - self.input_size, - self.action_size, - self.hidden_dims, - self.action_std, - self.lr, - self.gamma, - self.lmbda, - self.entropy_loss_coeff, - max_nb_steps, - self.n_actor, - self.rng, - device) - return alg - - -def add_a2c_args(parser): - add_vpg_args(parser) - parser.add_argument('--lmbda', default=0.95, type=float, - help='Lambda param for advantage discounting') - - -def parse_args(): - """ Generate a tractogram from a trained recurrent model. 
""" - parser = argparse.ArgumentParser( - description=parse_args.__doc__, - formatter_class=RawTextHelpFormatter) - - add_experiment_args(parser) - add_data_args(parser) - - add_environment_args(parser) - add_model_args(parser) - add_rl_args(parser) - add_tracking_args(parser) - - add_a2c_args(parser) - - arguments = parser.parse_args() - return arguments - - -def main(): - """ Main tracking script """ - args = parse_args() - print(args) - - experiment = CometExperiment(project_name=args.experiment, - workspace='TrackToLearn', parse_args=False, - auto_metric_logging=False, - disabled=not args.use_comet) - - # Finally, get experiments, and train your models: - a2c_experiment = A2CTrackToLearnTraining( - vars(args), - experiment, - ) - a2c_experiment.run() - - -if __name__ == '__main__': - main() diff --git a/TrackToLearn/trainers/acktr_train.py b/TrackToLearn/trainers/acktr_train.py deleted file mode 100644 index 7eff2dd..0000000 --- a/TrackToLearn/trainers/acktr_train.py +++ /dev/null @@ -1,134 +0,0 @@ -#!/usr/bin/env python -import argparse -import comet_ml # noqa: F401 ugh -import torch - -from argparse import RawTextHelpFormatter -from comet_ml import Experiment as CometExperiment - -from TrackToLearn.trainers.a2c_train import add_a2c_args -from TrackToLearn.algorithms.acktr import ACKTR -from TrackToLearn.experiment.experiment import ( - add_data_args, - add_environment_args, - add_experiment_args, - add_model_args, - add_tracking_args) -from TrackToLearn.experiment.train import ( - add_rl_args, - TrackToLearnTraining) - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -assert torch.cuda.is_available() - - -class ACKTRTrackToLearnTraining(TrackToLearnTraining): - """ - Main RL tracking experiment - """ - - def __init__( - self, - acktr_train_dto: dict, - comet_experiment: CometExperiment, - ): - """ - Parameters - ---------- - acktr_train_dto: dict - ACKTR training parameters - comet_experiment: CometExperiment - Allows for logging and 
experiment management. - """ - - super().__init__( - acktr_train_dto, - comet_experiment, - ) - - # ACKTR-specific parameters - self.action_std = acktr_train_dto['action_std'] - self.lmbda = acktr_train_dto['lmbda'] - self.delta = acktr_train_dto['delta'] - self.entropy_loss_coeff = acktr_train_dto['entropy_loss_coeff'] - - def save_hyperparameters(self): - """ Add ACKTR-specific hyperparameters to self.hyperparameters - then save to file. - """ - - self.hyperparameters.update( - {'algorithm': 'ACKTR', - 'action_std': self.action_std, - 'lmbda': self.lmbda, - 'delta': self.delta, - 'entropy_loss_coeff': self.entropy_loss_coeff}) - - super().save_hyperparameters() - - def get_alg(self, max_nb_steps: int): - # The RL training algorithm - alg = ACKTR( - self.input_size, - self.action_size, - self.hidden_dims, - self.action_std, - self.lr, - self.gamma, - self.lmbda, - self.entropy_loss_coeff, - self.delta, - max_nb_steps, - self.n_actor, - self.rng, - device) - return alg - - -def add_actkr_args(parser): - parser.add_argument('--delta', default=0.001, type=float, - help='KL clip parameter') - - -def parse_args(): - """ Generate a tractogram from a trained recurrent model. 
""" - parser = argparse.ArgumentParser( - description=parse_args.__doc__, - formatter_class=RawTextHelpFormatter) - - add_experiment_args(parser) - add_data_args(parser) - - add_environment_args(parser) - add_model_args(parser) - add_rl_args(parser) - add_tracking_args(parser) - - add_a2c_args(parser) - add_actkr_args(parser) - - arguments = parser.parse_args() - return arguments - - -def main(): - """ Main tracking script """ - args = parse_args() - print(args) - - experiment = CometExperiment(project_name=args.experiment, - workspace='TrackToLearn', parse_args=False, - auto_metric_logging=False, - disabled=not args.use_comet) - - # Finally, get experiments, and train your models: - actkr_experiment = ACKTRTrackToLearnTraining( - # Dataset params - vars(args), - experiment, - ) - actkr_experiment.run() - - -if __name__ == '__main__': - main() diff --git a/TrackToLearn/trainers/ddpg_train.py b/TrackToLearn/trainers/ddpg_train.py index 5d42b16..14996e9 100644 --- a/TrackToLearn/trainers/ddpg_train.py +++ b/TrackToLearn/trainers/ddpg_train.py @@ -1,29 +1,24 @@ #!/usr/bin/env python import argparse +from argparse import RawTextHelpFormatter + import comet_ml # noqa: F401 ugh import torch - -from argparse import RawTextHelpFormatter from comet_ml import Experiment as CometExperiment from TrackToLearn.algorithms.ddpg import DDPG -from TrackToLearn.experiment.experiment import ( - add_data_args, - add_environment_args, - add_experiment_args, - add_model_args, - add_tracking_args) from TrackToLearn.experiment.train import ( - add_rl_args, - TrackToLearnTraining) + add_training_args, TrackToLearnTraining) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") assert torch.cuda.is_available() class DDPGTrackToLearnTraining(TrackToLearnTraining): - """ - Main RL tracking experiment + """ WARNING: DDPG is no longer supported. No support will be provied. + The code is left as example and for legacy purposes. + + Train a RL tracking agent using DDPG. 
""" def __init__( @@ -47,6 +42,8 @@ def __init__( # DDPG-specific parameters self.action_std = ddpg_train_dto['action_std'] + self.batch_size = ddpg_train_dto['batch_size'] + self.replay_size = ddpg_train_dto['replay_size'] def save_hyperparameters(self): """ Add DDPG-specific hyperparameters to self.hyperparameters @@ -55,7 +52,9 @@ def save_hyperparameters(self): self.hyperparameters.update( {'algorithm': 'DDPG', - 'action_std': self.action_std}) + 'action_std': self.action_std, + 'batch_size': self.batch_size, + 'replay_size': self.replay_size}) super().save_hyperparameters() @@ -68,14 +67,21 @@ def get_alg(self, max_nb_steps: int): self.lr, self.gamma, self.n_actor, + self.batch_size, + self.replay_size, self.rng, device) return alg def add_ddpg_args(parser): - parser.add_argument('--action_std', default=0.3, type=float, + parser.add_argument('--action_std', default=0.35, type=float, help='Action STD') + parser.add_argument('--batch_size', default=2**12, type=int, + help='How many tuples to sample from the replay ' + 'buffer.') + parser.add_argument('--replay_size', default=1e6, type=int, + help='How many tuples to store in the replay buffer.') def parse_args(): @@ -84,14 +90,7 @@ def parse_args(): description=parse_args.__doc__, formatter_class=RawTextHelpFormatter) - add_experiment_args(parser) - add_data_args(parser) - - add_environment_args(parser) - add_model_args(parser) - add_rl_args(parser) - add_tracking_args(parser) - + add_training_args(parser) add_ddpg_args(parser) arguments = parser.parse_args() @@ -100,11 +99,14 @@ def parse_args(): def main(): """ Main tracking script """ + + raise DeprecationWarning('Training with DDPG is deprecated. 
Please train ' + 'using SAC Auto instead.') args = parse_args() print(args) experiment = CometExperiment(project_name=args.experiment, - workspace='TrackToLearn', parse_args=False, + workspace=args.workspace, parse_args=False, auto_metric_logging=False, disabled=not args.use_comet) diff --git a/TrackToLearn/trainers/gym/a2c_gym.py b/TrackToLearn/trainers/gym/a2c_gym.py deleted file mode 100644 index 1afce34..0000000 --- a/TrackToLearn/trainers/gym/a2c_gym.py +++ /dev/null @@ -1,111 +0,0 @@ -#!/usr/bin/env python -import argparse -import torch - -from argparse import RawTextHelpFormatter - -from TrackToLearn.trainers.a2c_train import add_a2c_args -from TrackToLearn.algorithms.a2c import A2C -from TrackToLearn.experiment.experiment import ( - add_experiment_args, - add_model_args) -from TrackToLearn.experiment.train import ( - add_rl_args) -from TrackToLearn.trainers.gym.gym_train import ( - GymTraining, - add_environment_args) - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -assert torch.cuda.is_available() - - -class A2CGymTraining(GymTraining): - """ - Main RL tracking experiment - """ - - def __init__( - self, - a2c_train_dto: dict, - ): - """ - Parameters - ---------- - a2c_train_dto: dict - A2C training parameters - """ - - super().__init__( - a2c_train_dto, - ) - - # A2C-specific parameters - self.action_std = a2c_train_dto['action_std'] - self.n_update = a2c_train_dto['n_update'] - self.lmbda = a2c_train_dto['lmbda'] - self.entropy_loss_coeff = a2c_train_dto['entropy_loss_coeff'] - - def save_hyperparameters(self): - """ Add A2C-specific hyperparameters to self.hyperparameters - then save to file. 
- """ - - self.hyperparameters.update( - {'algorithm': 'A2C', - 'action_std': self.action_std, - 'n_update': self.n_update, - 'lmbda': self.lmbda, - 'entropy_loss_coeff': self.entropy_loss_coeff}) - - super().save_hyperparameters() - - def get_alg(self): - # The RL training algorithm - alg = A2C( - self.input_size, - self.action_size, - self.hidden_dims, - self.action_std, - self.lr, - self.gamma, - self.lmbda, - self.entropy_loss_coeff, - self.n_update, - self.n_actor, - self.rng, - device) - return alg - - -def parse_args(): - """ Generate a tractogram from a trained recurrent model. """ - parser = argparse.ArgumentParser( - description=parse_args.__doc__, - formatter_class=RawTextHelpFormatter) - - add_experiment_args(parser) - - add_environment_args(parser) - add_model_args(parser) - add_rl_args(parser) - - add_a2c_args(parser) - - arguments = parser.parse_args() - return arguments - - -def main(): - """ Main tracking script """ - args = parse_args() - print(args) - - # Finally, get experiments, and train your models: - a2c_experiment = A2CGymTraining( - vars(args) - ) - a2c_experiment.run() - - -if __name__ == '__main__': - main() diff --git a/TrackToLearn/trainers/gym/acktr_gym.py b/TrackToLearn/trainers/gym/acktr_gym.py deleted file mode 100644 index 4cdcfc1..0000000 --- a/TrackToLearn/trainers/gym/acktr_gym.py +++ /dev/null @@ -1,122 +0,0 @@ -#!/usr/bin/env python -import argparse -import comet_ml # noqa: F401 ugh -import torch - -from argparse import RawTextHelpFormatter - -from TrackToLearn.trainers.a2c_train import add_a2c_args -from TrackToLearn.algorithms.acktr import ACKTR -from TrackToLearn.experiment.experiment import ( - add_experiment_args, - add_model_args) -from TrackToLearn.experiment.train import ( - add_rl_args) -from TrackToLearn.trainers.gym.gym_train import ( - GymTraining, - add_environment_args) - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -assert torch.cuda.is_available() - - -class 
ACKTRGymTraining(GymTraining): - """ - Main RL tracking experiment - """ - - def __init__( - self, - acktr_train_dto: dict, - ): - """ - Parameters - ---------- - acktr_train_dto: dict - ACKTR training parameters - """ - - super().__init__( - acktr_train_dto, - ) - - # ACKTR-specific parameters - self.action_std = acktr_train_dto['action_std'] - self.n_update = acktr_train_dto['n_update'] - self.lmbda = acktr_train_dto['lmbda'] - self.delta = acktr_train_dto['delta'] - self.entropy_loss_coeff = acktr_train_dto['entropy_loss_coeff'] - - def save_hyperparameters(self): - """ Add ACKTR-specific hyperparameters to self.hyperparameters - then save to file. - """ - - self.hyperparameters.update( - {'algorithm': 'ACKTR', - 'n_update': self.n_update, - 'action_std': self.action_std, - 'lmbda': self.lmbda, - 'delta': self.delta, - 'entropy_loss_coeff': self.entropy_loss_coeff}) - - super().save_hyperparameters() - - def get_alg(self): - # The RL training algorithm - alg = ACKTR( - self.input_size, - self.action_size, - self.hidden_dims, - self.action_std, - self.lr, - self.gamma, - self.lmbda, - self.entropy_loss_coeff, - self.delta, - self.n_update, - self.n_actor, - self.rng, - device) - return alg - - -def add_actkr_args(parser): - parser.add_argument('--delta', default=0.001, type=float, - help='KL clip parameter') - - -def parse_args(): - """ Generate a tractogram from a trained recurrent model. 
""" - parser = argparse.ArgumentParser( - description=parse_args.__doc__, - formatter_class=RawTextHelpFormatter) - - add_experiment_args(parser) - - add_environment_args(parser) - add_model_args(parser) - add_rl_args(parser) - - add_a2c_args(parser) - add_actkr_args(parser) - - arguments = parser.parse_args() - return arguments - - -def main(): - """ Main tracking script """ - args = parse_args() - print(args) - - # Finally, get experiments, and train your models: - actkr_experiment = ACKTRGymTraining( - # Dataset params - vars(args), - ) - actkr_experiment.run() - - -if __name__ == '__main__': - main() diff --git a/TrackToLearn/trainers/gym/gym_exp.py b/TrackToLearn/trainers/gym/gym_exp.py deleted file mode 100644 index 03e99c0..0000000 --- a/TrackToLearn/trainers/gym/gym_exp.py +++ /dev/null @@ -1,127 +0,0 @@ -import os -import torch - -from os.path import join as pjoin - -from TrackToLearn.algorithms.rl import RLAlgorithm -from TrackToLearn.environments.env import BaseEnv -from TrackToLearn.environments.gym.gym_env import GymWrapper -from TrackToLearn.experiment.experiment import Experiment - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -assert torch.cuda.is_available() - - -class GymExperiment(Experiment): - """ - """ - - def run(self): - """ Main method where data is loaded, classes are instanciated, - everything is set up. 
- """ - pass - - def setup_monitors(self): - # RL monitors - pass - - def setup_comet(self, prefix=''): - """ Setup comet environment - """ - pass - - def get_envs(self) -> BaseEnv: - """ Build environment - - Returns: - -------- - env: BaseEnv - - """ - kwargs = {} - if self.render: - kwargs.update({'render_mode': 'human'}) - env = GymWrapper(self.env_name, self.n_actor, **kwargs) - return env - - def get_valid_envs(self) -> BaseEnv: - """ Build environment - - Returns: - -------- - env: BaseEnv - """ - - env = GymWrapper(self.env_name, 10) - return env - - def valid( - self, - alg: RLAlgorithm, - env: BaseEnv, - save_model: bool = True, - ) -> float: - """ - Run the tracking algorithm without noise to see how it performs - - Parameters - ---------- - alg: RLAlgorithm - Tracking algorithm that contains the being-trained policy - env: BaseEnv - Forward environment - save_model: bool - Save the model or not - - Returns: - -------- - reward: float - Reward obtained during validation - """ - - # Save the model so it can be loaded by the tracking - if save_model: - - directory = pjoin(self.experiment_path, "model") - if not os.path.exists(directory): - os.makedirs(directory) - alg.policy.save(directory, "last_model_state") - - # Launch the tracking - reward = alg.gym_validation( - env, self.render) - - return reward - - def display( - self, - env: BaseEnv, - valid_reward: float = 0, - i_episode: int = 0, - ): - """ - Stats stuff - - There's so much going on in this function, it should be split or - something - - Parameters - ---------- - valid_tractogram: Tractogram - Tractogram containing all the streamlines tracked during the last - validation run - env: BaseEnv - Environment used to render streamlines - valid_reward: np.ndarray of float of size - Reward of the last validation run - i_episode: int - Current episode - """ - - print('---------------------------------------------------') - print(self.experiment_path) - print('Episode {} \t total reward: {}'.format( - 
i_episode, - valid_reward)) - print('---------------------------------------------------') diff --git a/TrackToLearn/trainers/gym/gym_train.py b/TrackToLearn/trainers/gym/gym_train.py deleted file mode 100644 index d604e3c..0000000 --- a/TrackToLearn/trainers/gym/gym_train.py +++ /dev/null @@ -1,206 +0,0 @@ -import json -import numpy as np -import random -import os -import torch - -from os.path import join as pjoin - -from TrackToLearn.algorithms.rl import RLAlgorithm -from TrackToLearn.environments.env import BaseEnv -from TrackToLearn.trainers.gym.gym_exp import GymExperiment - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -assert torch.cuda.is_available() - - -class GymTraining(GymExperiment): - """ - Main RL tracking experiment - """ - - def __init__( - self, - # Dataset params - train_dto: dict, - ): - """ - Parameters - ---------- - train_dto: dict - Dictionnary containing the training parameters. - Put into a dictionnary to prevent parameter errors if modified. 
- """ - self.experiment_path = train_dto['path'] - self.experiment = train_dto['experiment'] - self.id = train_dto['id'] - self.env_name = train_dto['env_name'] - - # RL parameters - self.max_ep = train_dto['max_ep'] - self.log_interval = train_dto['log_interval'] - self.lr = train_dto['lr'] - self.gamma = train_dto['gamma'] - - # Tracking parameters - self.rng_seed = train_dto['rng_seed'] - self.n_actor = train_dto['n_actor'] - - # Model parameters - self.use_gpu = train_dto['use_gpu'] - self.hidden_dims = train_dto['hidden_dims'] - self.render = train_dto['render'] - self.last_episode = 0 - - # RNG - torch.manual_seed(self.rng_seed) - np.random.seed(self.rng_seed) - self.rng = np.random.RandomState(seed=self.rng_seed) - random.seed(self.rng_seed) - - directory = pjoin(self.experiment_path, 'model') - if not os.path.exists(directory): - os.makedirs(directory) - - self.hyperparameters = { - # RL parameters - 'id': self.id, - 'experiment': self.experiment, - 'max_ep': self.max_ep, - 'log_interval': self.log_interval, - 'lr': self.lr, - 'gamma': self.gamma, - # Data parameters - # Model parameters - 'experiment_path': self.experiment_path, - 'use_gpu': self.use_gpu, - 'hidden_dims': self.hidden_dims, - 'last_episode': self.last_episode, - } - - def save_hyperparameters(self): - - self.hyperparameters.update({'input_size': self.input_size, - 'action_size': self.action_size}) - directory = pjoin(self.experiment_path, "model") - with open( - pjoin(directory, "hyperparameters.json"), - 'w' - ) as json_file: - json_file.write( - json.dumps( - self.hyperparameters, - indent=4, - separators=(',', ': '))) - - def rl_train( - self, - alg: RLAlgorithm, - env: BaseEnv, - ): - """ Train the RL algorithm for N epochs. An epoch here corresponds to - running tracking on the training set until all streamlines are done. - This loop should be algorithm-agnostic. 
Between epochs, report stats - so they can be monitored during training - - Parameters: - ----------- - alg: RLAlgorithm - The RL algorithm, either TD3, PPO or any others - env: BaseEnv - The tracking environment - back_env: BaseEnv - The backward tracking environment. Should be more or less - the same as the "forward" tracking environment but initalized - with half-streamlines - """ - # Tractogram containing all the episodes. Might be useful I guess - # Run the valid before training to see what an untrained network does - valid_reward = self.valid( - alg, env) - - # Display the results of the untrained network - self.display(env, valid_reward/self.n_actor, 0) - - # Current epoch - i_episode = 0 - # Transition counter - t = 0 - - # Main training loop - while i_episode < self.max_ep: - - # Last episode/epoch. Was initially for resuming experiments but - # since they take so little time I just restart them from scratch - # Not sure what to do with this - self.last_episode = i_episode - - # Run the episode - losses, reward, episode_length = \ - alg.gym_train(env) - - reward /= self.n_actor - - # Keep track of how many transitions were gathered - t += episode_length - - print( - f"Total T: {t+1} Episode Num: {i_episode+1} " - f"Episode T: {episode_length} Reward: {reward:.3f}") - print(losses) - - i_episode += 1 - - # Time to do a valid run and display stats - if i_episode % self.log_interval == 0: - - # Validation run - valid_reward = self.valid( - alg, env) - - # Display what the network is capable-of "now" - self.display( - env, - valid_reward / self.n_actor, - i_episode) - - # Validation run - valid_reward = self.valid( - alg, env) - - # Display what the network is capable-of "now" - self.display( - env, - valid_reward, - i_episode) - - def run(self): - """ - Main method where the magic happens - """ - - # Instanciate environment. Actions will be fed to it and new - # states will be returned. 
The environment updates the streamline - # internally - env = self.get_envs() - # Get example state to define NN input size - example_state = env.reset() - self.input_size = example_state.shape[1] - self.n_trajectories = example_state.shape[0] - self.action_size = env._inner_envs[0].action_space.shape[0] - - # The RL training algorithm - alg = self.get_alg() - - # Save hyperparameters to differentiate experiments later - self.save_hyperparameters() - - # Start training ! - self.rl_train(alg, env) - - torch.cuda.empty_cache() - - -def add_environment_args(parser): - parser.add_argument('env_name', type=str, - help='Gym env name') diff --git a/TrackToLearn/trainers/gym/ppo_gym.py b/TrackToLearn/trainers/gym/ppo_gym.py deleted file mode 100644 index 0a81245..0000000 --- a/TrackToLearn/trainers/gym/ppo_gym.py +++ /dev/null @@ -1,126 +0,0 @@ -#!/usr/bin/env python -import argparse -import torch - -from argparse import RawTextHelpFormatter - -from TrackToLearn.trainers.a2c_train import add_a2c_args -from TrackToLearn.algorithms.ppo import PPO -from TrackToLearn.experiment.experiment import ( - add_experiment_args, - add_model_args) -from TrackToLearn.experiment.train import ( - add_rl_args) -from TrackToLearn.trainers.gym.gym_train import ( - GymTraining, - add_environment_args) - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -assert (torch.cuda.is_available()) - - -class PPOGymTraining(GymTraining): - """ - Main RL tracking experiment - """ - - def __init__( - self, - ppo_train_dto: dict, - ): - """ - Parameters - ---------- - ppo_train_dto: dict - PPO training parameters - """ - - super().__init__( - ppo_train_dto, - ) - - # PPO-specific parameters - self.action_std = ppo_train_dto['action_std'] - self.n_update = ppo_train_dto['n_update'] - self.lmbda = ppo_train_dto['lmbda'] - self.eps_clip = ppo_train_dto['eps_clip'] - self.K_epochs = ppo_train_dto['K_epochs'] - self.entropy_loss_coeff = ppo_train_dto['entropy_loss_coeff'] - - def 
save_hyperparameters(self): - """ Add PPO-specific hyperparameters to self.hyperparameters - then save to file. - """ - - self.hyperparameters.update( - {'algorithm': 'PPO', - 'n_update': self.n_update, - 'action_std': self.action_std, - 'lmbda': self.lmbda, - 'eps_clip': self.eps_clip, - 'K_epochs': self.K_epochs, - 'entropy_loss_coeff': self.entropy_loss_coeff}) - - super().save_hyperparameters() - - def get_alg(self): - # The RL training algorithm - alg = PPO( - self.input_size, - self.action_size, - self.hidden_dims, - self.action_std, - self.lr, - self.gamma, - self.lmbda, - self.K_epochs, - self.n_update, - self.eps_clip, - self.entropy_loss_coeff, - self.n_actor, - self.rng, - device) - return alg - - -def add_ppo_args(parser): - parser.add_argument('--eps_clip', default=0.001, type=float, - help='Clipping parameter for PPO') - parser.add_argument('--K_epochs', default=1, type=int, - help='Train the model for K epochs') - - -def parse_args(): - """ Generate a tractogram from a trained recurrent model. 
""" - parser = argparse.ArgumentParser( - description=parse_args.__doc__, - formatter_class=RawTextHelpFormatter) - - add_experiment_args(parser) - - add_environment_args(parser) - add_model_args(parser) - add_rl_args(parser) - - add_a2c_args(parser) - add_ppo_args(parser) - - arguments = parser.parse_args() - return arguments - - -def main(): - """ Main tracking script """ - args = parse_args() - print(args) - - # Finally, get experiments, and train your models: - ppo_experiment = PPOGymTraining( - # Dataset params - vars(args) - ) - ppo_experiment.run() - - -if __name__ == '__main__': - main() diff --git a/TrackToLearn/trainers/gym/sac_auto_gym.py b/TrackToLearn/trainers/gym/sac_auto_gym.py deleted file mode 100644 index b21fd38..0000000 --- a/TrackToLearn/trainers/gym/sac_auto_gym.py +++ /dev/null @@ -1,264 +0,0 @@ -#!/usr/bin/env python -import argparse -import json -import torch - -from argparse import RawTextHelpFormatter -from os.path import join as pjoin - -from TrackToLearn.algorithms.sac_auto import SACAuto -from TrackToLearn.experiment.experiment import ( - add_experiment_args, - add_model_args) -from TrackToLearn.experiment.train import ( - add_rl_args) -from TrackToLearn.trainers.gym.gym_train import ( - GymTraining, - add_environment_args) - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -assert(torch.cuda.is_available()) - - -class SAC_AutoGymTraining(GymTraining): - """ - Main RL tracking experiment - """ - - def __init__( - self, - # Dataset params - path: str, - experiment: str, - name: str, - env_name: str, - # RL params - max_ep: int, - log_interval: int, - action_std: float, - lr: float, - gamma: float, - alpha: float, - # Model params - n_latent_var: int, - hidden_layers: int, - # Experiment params - use_gpu: bool, - rng_seed: int, - render: bool, - ): - """ - Parameters - ---------- - dataset_file: str - Path to the file containing the signal data - subject_id: str - Subject being trained on (in the signal data) - 
in_seed: str - Path to the mask where seeds can be generated - in_mask: str - Path to the mask where tracking can happen - scoring_data: str - Path to reference streamlines that can be used for - jumpstarting seeds - max_ep: int - How many episodes to run the training. - An episode corresponds to tracking two-ways on one seed and - training along the way - log_interval: int - Interval at which a valid run is done - action_std: float - Starting standard deviation on actions for exploration - lr: float - Learning rate for optimizer - gamma: float - Gamma parameter future reward discounting - lmbda: float - Lambda parameter for Generalized Advantage Estimation (GAE): - John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan: - “High-Dimensional Continuous Control Using Generalized - Advantage Estimation”, 2015; - http://arxiv.org/abs/1506.02438 arXiv:1506.02438 - K_epochs: int - How many epochs to run the optimizer using the current samples - SAC_Auto allows for many training runs on the same samples - eps_clip: float - Clipping parameter for SAC_Auto - rng_seed: int - Seed for general randomness - entropy_loss_coeff: float, - Loss coefficient on policy entropy - Should sum to 1 with other loss coefficients - npv: int - How many seeds to generate per voxel - theta: float - Maximum angle for tracking - min_length: int - Minimum length for streamlines - max_length: int - Maximum length for streamlines - step_size: float - Step size for tracking - alignment_weighting: float - Reward coefficient for alignment with local odfs peaks - straightness_weighting: float - Reward coefficient for streamline straightness - length_weighting: float - Reward coefficient for streamline length - target_bonus_factor: `float` - Bonus for streamlines reaching the target mask - exclude_penalty_factor: `float` - Penalty for streamlines reaching the exclusion mask - angle_penalty_factor: `float` - Penalty for looping or too-curvy streamlines - n_actor: int - Batch size for tracking during 
valid - n_latent_var: int - Width of the NN layers - add_neighborhood: float - Use signal in neighboring voxels for model input - # Experiment params - use_comet: bool - Use comet for displaying stats during training - render: bool - Render tracking - run_tractometer: bool - Run tractometer during validation to see how it's - doing w.r.t. ground truth data - use_gpu: bool, - Use GPU for processing - rng_seed: int - Seed for general randomness - load_teacher: str - Path to pretrained model for imitation learning - load_policy: str - Path to pretrained policy - """ - - super().__init__( - # Dataset params - path, - experiment, - name, - env_name, - # SAC_Auto params - max_ep, - log_interval, - action_std, - lr, - gamma, - # Model params - n_latent_var, - hidden_layers, - # Experiment params - use_gpu, - rng_seed, - render, - ) - - self.alpha = alpha - - def save_hyperparameters(self): - self.hyperparameters = { - # RL parameters - 'id': self.name, - 'experiment': self.experiment, - 'algorithm': 'SAC_Auto', - 'max_ep': self.max_ep, - 'log_interval': self.log_interval, - 'action_std': self.action_std, - 'lr': self.lr, - 'gamma': self.gamma, - # Data parameters - 'input_size': self.input_size, - # Model parameters - 'experiment_path': self.experiment_path, - 'use_gpu': self.use_gpu, - 'hidden_size': self.n_latent_var, - 'hidden_layers': self.hidden_layers, - 'last_episode': self.last_episode, - } - - directory = pjoin(self.experiment_path, "model") - with open( - pjoin(directory, "hyperparameters.json"), - 'w' - ) as json_file: - json_file.write( - json.dumps( - self.hyperparameters, - indent=4, - separators=(',', ': '))) - - def get_alg(self): - # The RL training algorithm - alg = SACAuto( - self.input_size, - self.action_size, - self.n_latent_var, - self.hidden_layers, - self.lr, - self.gamma, - self.alpha, - 1, - False, - self.rng, - device) - return alg - - -def add_sac_args(parser): - parser.add_argument('--alpha', default=0.2, type=float, - help='Temperature 
parameter') - - -def parse_args(): - """ Generate a tractogram from a trained recurrent model. """ - parser = argparse.ArgumentParser( - description=parse_args.__doc__, - formatter_class=RawTextHelpFormatter) - - add_experiment_args(parser) - - add_environment_args(parser) - add_model_args(parser) - add_rl_args(parser) - add_sac_args(parser) - - arguments = parser.parse_args() - return arguments - - -def main(): - """ Main tracking script """ - args = parse_args() - print(args) - - # Finally, get experiments, and train your models: - trpo_experiment = SAC_AutoGymTraining( - # Dataset params - args.path, - args.experiment, - args.name, - args.env_name, - # RL params - args.max_ep, - args.log_interval, - args.alpha, - # RL Params - args.lr, - args.gamma, - args.alpha, - # Model params - args.n_latent_var, - args.hidden_layers, - # Experiment params - args.use_gpu, - args.rng_seed, - args.render, - ) - trpo_experiment.run() - - -if __name__ == '__main__': - main() diff --git a/TrackToLearn/trainers/gym/trpo_gym.py b/TrackToLearn/trainers/gym/trpo_gym.py deleted file mode 100644 index 50da768..0000000 --- a/TrackToLearn/trainers/gym/trpo_gym.py +++ /dev/null @@ -1,127 +0,0 @@ -#!/usr/bin/env python -import argparse -import torch - -from argparse import RawTextHelpFormatter - -from TrackToLearn.algorithms.trpo import TRPO -from TrackToLearn.trainers.a2c_train import add_a2c_args -from TrackToLearn.experiment.experiment import ( - add_experiment_args, - add_model_args) -from TrackToLearn.experiment.train import ( - add_rl_args) -from TrackToLearn.trainers.gym.gym_train import ( - GymTraining, - add_environment_args) -from TrackToLearn.trainers.trpo_train import ( - add_trpo_args) - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -assert torch.cuda.is_available() - - -class TRPOGymTraining(GymTraining): - """ - Main RL tracking experiment - """ - - def __init__( - self, - trpo_train_dto: dict, - ): - """ - Parameters - ---------- - trpo_train_dto: 
dict - TRPO training parameters - """ - - super().__init__( - trpo_train_dto, - ) - - # TRPO-specific parameters - self.action_std = trpo_train_dto['action_std'] - self.n_update = trpo_train_dto['n_update'] - self.lmbda = trpo_train_dto['lmbda'] - self.delta = trpo_train_dto['delta'] - self.max_backtracks = trpo_train_dto['max_backtracks'] - self.backtrack_coeff = trpo_train_dto['backtrack_coeff'] - self.K_epochs = trpo_train_dto['K_epochs'] - self.entropy_loss_coeff = trpo_train_dto['entropy_loss_coeff'] - - def save_hyperparameters(self): - """ Add TRPO-specific hyperparameters to self.hyperparameters - then save to file. - """ - - self.hyperparameters.update( - {'algorithm': 'TRPO', - 'n_update': self.n_update, - 'action_std': self.action_std, - 'lmbda': self.lmbda, - 'delta': self.delta, - 'max_backtracks': self.max_backtracks, - 'backtrack_coeff': self.backtrack_coeff, - 'K_epochs': self.K_epochs, - 'entropy_loss_coeff': self.entropy_loss_coeff}) - - super().save_hyperparameters() - - def get_alg(self): - # The RL training algorithm - alg = TRPO( - self.input_size, - self.action_size, - self.hidden_dims, - self.action_std, - self.lr, - self.gamma, - self.lmbda, - self.entropy_loss_coeff, - self.delta, - self.max_backtracks, - self.backtrack_coeff, - self.n_update, - self.K_epochs, - self.n_actor, - self.rng, - device) - return alg - - -def parse_args(): - """ Generate a tractogram from a trained recurrent model. 
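The `delta`, `max_backtracks` and `backtrack_coeff` parameters collected above drive TRPO's backtracking line search. A hedged, self-contained sketch of that search follows; the `surrogate` and `kl` callables are toy stand-ins, not the project's estimators:

```python
# TRPO-style backtracking line search: shrink the step until the
# surrogate objective improves AND the KL constraint `delta` holds.

def line_search(theta, full_step, surrogate, kl,
                delta=0.01, max_backtracks=10, backtrack_coeff=0.5):
    base = surrogate(theta)
    step_frac = 1.0
    for _ in range(max_backtracks):
        candidate = theta + step_frac * full_step
        if surrogate(candidate) > base and kl(theta, candidate) <= delta:
            return candidate, step_frac
        step_frac *= backtrack_coeff   # shrink the step and retry
    return theta, 0.0                  # no acceptable step found

# Toy 1-D problem: maximize -(x - 1)^2, with squared distance as a KL proxy.
new_theta, frac = line_search(
    theta=0.0, full_step=2.0,
    surrogate=lambda x: -(x - 1.0) ** 2,
    kl=lambda a, b: (a - b) ** 2,
    delta=0.3)
```

The full step (fraction 1.0) violates the constraint here, so the search backtracks twice before accepting a quarter step.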
""" - parser = argparse.ArgumentParser( - description=parse_args.__doc__, - formatter_class=RawTextHelpFormatter) - - add_experiment_args(parser) - - add_environment_args(parser) - add_model_args(parser) - add_rl_args(parser) - - add_a2c_args(parser) - add_trpo_args(parser) - - arguments = parser.parse_args() - return arguments - - -def main(): - """ Main tracking script """ - args = parse_args() - print(args) - - # Finally, get experiments, and train your models: - trpo_experiment = TRPOGymTraining( - # Dataset params - vars(args), - ) - trpo_experiment.run() - - -if __name__ == '__main__': - main() diff --git a/TrackToLearn/trainers/gym/vpg_gym.py b/TrackToLearn/trainers/gym/vpg_gym.py deleted file mode 100644 index 3dfb15c..0000000 --- a/TrackToLearn/trainers/gym/vpg_gym.py +++ /dev/null @@ -1,107 +0,0 @@ -#!/usr/bin/env python -import argparse -import torch - -from argparse import RawTextHelpFormatter - -from TrackToLearn.trainers.vpg_train import add_vpg_args -from TrackToLearn.algorithms.vpg import VPG -from TrackToLearn.experiment.experiment import ( - add_experiment_args, - add_model_args) -from TrackToLearn.experiment.train import ( - add_rl_args) -from TrackToLearn.trainers.gym.gym_train import ( - GymTraining, - add_environment_args) - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -assert torch.cuda.is_available() - - -class VPGGymTraining(GymTraining): - """ - Main RL tracking experiment - """ - - def __init__( - self, - vpg_train_dto: dict, - ): - """ - Parameters - ---------- - vpg_train_dto: dict - VPG training parameters - """ - - super().__init__( - vpg_train_dto - ) - - # VPG-specific parameters - self.action_std = vpg_train_dto['action_std'] - self.n_update = vpg_train_dto['n_update'] - self.entropy_loss_coeff = vpg_train_dto['entropy_loss_coeff'] - - def save_hyperparameters(self): - """ Add VPG-specific hyperparameters to self.hyperparameters - then save to file. 
- """ - - self.hyperparameters.update( - {'algorithm': 'VPG', - 'n_update': self.n_update, - 'entropy_loss_coeff': self.entropy_loss_coeff}) - - super().save_hyperparameters() - - def get_alg(self): - # The RL training algorithm - alg = VPG( - self.input_size, - self.action_size, - self.hidden_dims, - self.action_std, - self.lr, - self.gamma, - self.entropy_loss_coeff, - self.n_update, - self.n_actor, - self.rng, - device) - return alg - - -def parse_args(): - """ Generate a tractogram from a trained recurrent model. """ - parser = argparse.ArgumentParser( - description=parse_args.__doc__, - formatter_class=RawTextHelpFormatter) - - add_experiment_args(parser) - - add_environment_args(parser) - add_model_args(parser) - add_rl_args(parser) - - add_vpg_args(parser) - - arguments = parser.parse_args() - return arguments - - -def main(): - """ Main tracking script """ - args = parse_args() - print(args) - - # Finally, get experiments, and train your models: - vpg_experiment = VPGGymTraining( - vars(args), - ) - vpg_experiment.run() - - -if __name__ == '__main__': - main() diff --git a/TrackToLearn/trainers/ppo_train.py b/TrackToLearn/trainers/ppo_train.py deleted file mode 100644 index f7fed66..0000000 --- a/TrackToLearn/trainers/ppo_train.py +++ /dev/null @@ -1,139 +0,0 @@ -#!/usr/bin/env python -import argparse -import comet_ml # noqa: F401 ugh -import torch - -from argparse import RawTextHelpFormatter -from comet_ml import Experiment as CometExperiment - -from TrackToLearn.trainers.a2c_train import add_a2c_args -from TrackToLearn.algorithms.ppo import PPO -from TrackToLearn.experiment.experiment import ( - add_data_args, - add_environment_args, - add_experiment_args, - add_model_args, - add_tracking_args) -from TrackToLearn.experiment.train import ( - add_rl_args, - TrackToLearnTraining) - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -assert torch.cuda.is_available() - - -class PPOTrackToLearnTraining(TrackToLearnTraining): - """ - Main RL 
tracking experiment - """ - - def __init__( - self, - ppo_train_dto: dict, - comet_experiment: CometExperiment, - ): - """ - Parameters - ---------- - ppo_train_dto: dict - PPO training parameters - comet_experiment: CometExperiment - Allows for logging and experiment management. - """ - - super().__init__( - ppo_train_dto, - comet_experiment, - ) - - # PPO-specific parameters - self.action_std = ppo_train_dto['action_std'] - self.lmbda = ppo_train_dto['lmbda'] - self.eps_clip = ppo_train_dto['eps_clip'] - self.K_epochs = ppo_train_dto['K_epochs'] - self.entropy_loss_coeff = ppo_train_dto['entropy_loss_coeff'] - - def save_hyperparameters(self): - """ Add PPO-specific hyperparameters to self.hyperparameters - then save to file. - """ - - self.hyperparameters.update( - {'algorithm': 'PPO', - 'action_std': self.action_std, - 'lmbda': self.lmbda, - 'eps_clip': self.eps_clip, - 'K_epochs': self.K_epochs, - 'entropy_loss_coeff': self.entropy_loss_coeff}) - - super().save_hyperparameters() - - def get_alg(self, max_nb_steps: int): - # The RL training algorithm - alg = PPO( - self.input_size, - self.action_size, - self.hidden_dims, - self.action_std, - self.lr, - self.gamma, - self.lmbda, - self.K_epochs, - self.eps_clip, - self.entropy_loss_coeff, - max_nb_steps, - self.n_actor, - self.rng, - device) - return alg - - -def add_ppo_args(parser): - parser.add_argument('--K_epochs', default=50, type=int, - help='Train the model for K epochs') - parser.add_argument('--eps_clip', default=0.2, type=float, - help='Clipping parameter for PPO') - - -def parse_args(): - """ Generate a tractogram from a trained recurrent model. 
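The `--eps_clip` argument above parameterizes PPO's clipped surrogate objective. A minimal, dependency-free sketch of that objective for a single transition (illustrative names, not the project's `PPO` class):

```python
# PPO clipped surrogate for one transition:
# L = min(r * A, clip(r, 1 - eps, 1 + eps) * A), where r is the
# probability ratio pi_new(a|s) / pi_old(a|s) and A the advantage.

def clipped_surrogate(ratio, advantage, eps_clip=0.2):
    clipped = max(1.0 - eps_clip, min(ratio, 1.0 + eps_clip))
    return min(ratio * advantage, clipped * advantage)

# A moderate ratio passes through; a large ratio is capped at 1 + eps_clip,
# so the objective cannot reward arbitrarily large policy updates.
in_range = clipped_surrogate(1.1, advantage=1.0)
clipped = clipped_surrogate(2.0, advantage=1.0)
```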
""" - parser = argparse.ArgumentParser( - description=parse_args.__doc__, - formatter_class=RawTextHelpFormatter) - - add_experiment_args(parser) - add_data_args(parser) - - add_environment_args(parser) - add_model_args(parser) - add_rl_args(parser) - add_tracking_args(parser) - - add_a2c_args(parser) - add_ppo_args(parser) - - arguments = parser.parse_args() - return arguments - - -def main(): - """ Main tracking script """ - args = parse_args() - print(args) - - experiment = CometExperiment(project_name=args.experiment, - workspace='TrackToLearn', parse_args=False, - auto_metric_logging=False, - disabled=not args.use_comet) - - # Finally, get experiments, and train your models: - ppo_experiment = PPOTrackToLearnTraining( - # Dataset params - vars(args), - experiment - ) - ppo_experiment.run() - - -if __name__ == '__main__': - main() diff --git a/TrackToLearn/trainers/sac_auto_train.py b/TrackToLearn/trainers/sac_auto_train.py old mode 100644 new mode 100755 index f87e8a3..7693704 --- a/TrackToLearn/trainers/sac_auto_train.py +++ b/TrackToLearn/trainers/sac_auto_train.py @@ -1,30 +1,23 @@ -#!/usr/bin/env python +#! 
/usr/bin/env python3 +# -*- coding: utf-8 -*- import argparse +from argparse import RawTextHelpFormatter + import comet_ml # noqa: F401 ugh import torch - -from argparse import RawTextHelpFormatter from comet_ml import Experiment as CometExperiment from TrackToLearn.algorithms.sac_auto import SACAuto -from TrackToLearn.experiment.experiment import ( - add_data_args, - add_environment_args, - add_experiment_args, - add_model_args, - add_tracking_args) -from TrackToLearn.experiment.train import ( - add_rl_args, - TrackToLearnTraining) +from TrackToLearn.trainers.train import (TrackToLearnTraining, + add_training_args) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -assert torch.cuda.is_available() class SACAutoTrackToLearnTraining(TrackToLearnTraining): """ - Main RL tracking experiment + Train a RL tracking agent using SAC with automatic entropy adjustment. """ def __init__( @@ -36,9 +29,9 @@ def __init__( Parameters ---------- sac_auto_train_dto: dict - SACAuto training parameters + SACAuto training parameters comet_experiment: CometExperiment - Allows for logging and experiment management. + Allows for logging and experiment management. 
""" super().__init__( @@ -48,6 +41,8 @@ def __init__( # SACAuto-specific parameters self.alpha = sac_auto_train_dto['alpha'] + self.batch_size = sac_auto_train_dto['batch_size'] + self.replay_size = sac_auto_train_dto['replay_size'] def save_hyperparameters(self): """ Add SACAuto-specific hyperparameters to self.hyperparameters @@ -56,7 +51,9 @@ def save_hyperparameters(self): self.hyperparameters.update( {'algorithm': 'SACAuto', - 'alpha': self.alpha}) + 'alpha': self.alpha, + 'batch_size': self.batch_size, + 'replay_size': self.replay_size}) super().save_hyperparameters() @@ -69,6 +66,8 @@ def get_alg(self, max_nb_steps: int): self.gamma, self.alpha, self.n_actor, + self.batch_size, + self.replay_size, self.rng, device) return alg @@ -76,23 +75,20 @@ def get_alg(self, max_nb_steps: int): def add_sac_auto_args(parser): parser.add_argument('--alpha', default=0.2, type=float, - help='Temperature parameter') + help='Initial temperature parameter') + parser.add_argument('--batch_size', default=2**12, type=int, + help='How many tuples to sample from the replay ' + 'buffer.') + parser.add_argument('--replay_size', default=1e6, type=int, + help='How many tuples to store in the replay buffer.') def parse_args(): - """ Generate a tractogram from a trained recurrent model. """ + """ Generate a tractogram from a trained model. 
""" parser = argparse.ArgumentParser( description=parse_args.__doc__, formatter_class=RawTextHelpFormatter) - - add_experiment_args(parser) - add_data_args(parser) - - add_environment_args(parser) - add_model_args(parser) - add_rl_args(parser) - add_tracking_args(parser) - + add_training_args(parser) add_sac_auto_args(parser) arguments = parser.parse_args() @@ -104,11 +100,13 @@ def main(): args = parse_args() print(args) + # Create comet-ml experiment experiment = CometExperiment(project_name=args.experiment, - workspace='TrackToLearn', parse_args=False, + workspace=args.workspace, parse_args=False, auto_metric_logging=False, disabled=not args.use_comet) + # Create and run experiment sac_auto_experiment = SACAutoTrackToLearnTraining( # Dataset params vars(args), diff --git a/TrackToLearn/trainers/sac_train.py b/TrackToLearn/trainers/sac_train.py index 974d8d0..dedf04f 100644 --- a/TrackToLearn/trainers/sac_train.py +++ b/TrackToLearn/trainers/sac_train.py @@ -22,8 +22,11 @@ class SACTrackToLearnTraining(TrackToLearnTraining): - """ - Main RL tracking experiment + """ WARNING: `SAC Auto` is still supported but SAC is not. + No support will be provided. The code is left as example and + for legacy purposes. + + Train a RL tracking agent using SAC. 
""" def __init__( @@ -47,6 +50,8 @@ def __init__( # SAC-specific parameters self.alpha = sac_train_dto['alpha'] + self.batch_size = sac_train_dto['batch_size'] + self.replay_size = sac_train_dto['replay_size'] def save_hyperparameters(self): """ Add SAC-specific hyperparameters to self.hyperparameters @@ -55,7 +60,9 @@ def save_hyperparameters(self): self.hyperparameters.update( {'algorithm': 'SAC', - 'alpha': self.alpha}) + 'alpha': self.alpha, + 'batch_size': self.batch_size, + 'replay_size': self.replay_size}) super().save_hyperparameters() @@ -68,6 +75,8 @@ def get_alg(self, max_nb_steps: int): self.gamma, self.alpha, self.n_actor, + self.batch_size, + self.replay_size, self.rng, device) return alg @@ -76,6 +85,11 @@ def get_alg(self, max_nb_steps: int): def add_sac_args(parser): parser.add_argument('--alpha', default=0.2, type=float, help='Temperature parameter') + parser.add_argument('--batch_size', default=2**12, type=int, + help='How many tuples to sample from the replay ' + 'buffer.') + parser.add_argument('--replay_size', default=1e6, type=int, + help='How many tuples to store in the replay buffer.') def parse_args(): @@ -100,11 +114,14 @@ def parse_args(): def main(): """ Main tracking script """ + raise DeprecationWarning('Training with SAC is deprecated. Please train ' + 'using SAC Auto instead.') + args = parse_args() print(args) experiment = CometExperiment(project_name=args.experiment, - workspace='TrackToLearn', parse_args=False, + workspace=args.workspace, parse_args=False, auto_metric_logging=False, disabled=not args.use_comet) diff --git a/TrackToLearn/trainers/td3_train.py b/TrackToLearn/trainers/td3_train.py index a99d19e..690fbe2 100644 --- a/TrackToLearn/trainers/td3_train.py +++ b/TrackToLearn/trainers/td3_train.py @@ -22,8 +22,10 @@ class TD3TrackToLearnTraining(TrackToLearnTraining): - """ - Main RL tracking experiment + """ WARNING: TD3 is no longer supported. No support will be provied. 
+ The code is left as example and for legacy purposes. + + Train a RL tracking agent using TD3. """ def __init__( @@ -47,6 +49,8 @@ def __init__( # TD3-specific parameters self.action_std = td3_train_dto['action_std'] + self.batch_size = td3_train_dto['batch_size'] + self.replay_size = td3_train_dto['replay_size'] def save_hyperparameters(self): """ Add TD3-specific hyperparameters to self.hyperparameters @@ -55,7 +59,9 @@ def save_hyperparameters(self): self.hyperparameters.update( {'algorithm': 'TD3', - 'action_std': self.action_std}) + 'action_std': self.action_std, + 'batch_size': self.batch_size, + 'replay_size': self.replay_size}) super().save_hyperparameters() @@ -68,6 +74,8 @@ def get_alg(self, max_nb_steps: int): self.lr, self.gamma, self.n_actor, + self.batch_size, + self.replay_size, self.rng, device) return alg @@ -76,6 +84,11 @@ def get_alg(self, max_nb_steps: int): def add_td3_args(parser): parser.add_argument('--action_std', default=0.3, type=float, help='Action STD') + parser.add_argument('--batch_size', default=2**12, type=int, + help='How many tuples to sample from the replay ' + 'buffer.') + parser.add_argument('--replay_size', default=1e6, type=int, + help='How many tuples to store in the replay buffer.') def parse_args(): @@ -102,9 +115,11 @@ def main(): """ Main tracking script """ args = parse_args() print(args) + raise DeprecationWarning('Training with TD3 is deprecated. 
Please train '
+                             'using SAC Auto instead.')

     experiment = CometExperiment(project_name=args.experiment,
-                                 workspace='TrackToLearn', parse_args=False,
+                                 workspace=args.workspace, parse_args=False,
                                  auto_metric_logging=False,
                                  disabled=not args.use_comet)

diff --git a/TrackToLearn/trainers/train.py b/TrackToLearn/trainers/train.py
new file mode 100644
index 0000000..3baf997
--- /dev/null
+++ b/TrackToLearn/trainers/train.py
@@ -0,0 +1,417 @@
+import json
+import os
+import random
+from os.path import join as pjoin
+
+import numpy as np
+import torch
+
+from TrackToLearn.algorithms.rl import RLAlgorithm
+from TrackToLearn.algorithms.shared.utils import mean_losses, mean_rewards
+from TrackToLearn.environments.env import BaseEnv
+from TrackToLearn.experiment.experiment import (add_data_args,
+                                                add_environment_args,
+                                                add_experiment_args,
+                                                add_model_args,
+                                                add_oracle_args,
+                                                add_reward_args,
+                                                add_tracking_args,
+                                                add_tractometer_args)
+from TrackToLearn.experiment.oracle_validator import OracleValidator
+from TrackToLearn.experiment.tractometer_validator import TractometerValidator
+from TrackToLearn.experiment.experiment import Experiment
+from TrackToLearn.tracking.tracker import Tracker
+
+
+class TrackToLearnTraining(Experiment):
+    """
+    Main RL tracking experiment
+    """
+
+    def __init__(
+        self,
+        train_dto: dict,
+        comet_experiment,
+    ):
+        """
+        Parameters
+        ----------
+        train_dto: dict
+            Dictionary containing the training parameters.
+            Put into a dictionary to prevent parameter errors if modified.
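The `train_dto` docstring above explains why parameters travel in a single dict rather than a long positional list. A hedged sketch of that pattern with an explicit required-key check (the key names mirror a few from this diff; the validation helper itself is illustrative, not part of the project):

```python
# Passing training parameters as one dict ("DTO") instead of a long
# positional argument list: adding or reordering parameters cannot
# silently shift values into the wrong argument.

REQUIRED_KEYS = {'path', 'experiment', 'id', 'max_ep', 'lr', 'gamma'}

def validate_train_dto(train_dto):
    missing = REQUIRED_KEYS - train_dto.keys()
    if missing:
        raise KeyError(f'train_dto is missing keys: {sorted(missing)}')
    return train_dto

dto = validate_train_dto({
    'path': 'experiments/', 'experiment': 'demo', 'id': 'run0',
    'max_ep': 10, 'lr': 1e-3, 'gamma': 0.99,
})
```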
+ """ + # TODO: Find a better way to pass parameters around + + # Experiment parameters + self.experiment_path = train_dto['path'] + self.experiment = train_dto['experiment'] + self.name = train_dto['id'] + + # RL parameters + self.max_ep = train_dto['max_ep'] + self.log_interval = train_dto['log_interval'] + self.noise = train_dto['noise'] + + # Training parameters + self.lr = train_dto['lr'] + self.gamma = train_dto['gamma'] + + # Tracking parameters + self.step_size = train_dto['step_size'] + self.dataset_file = train_dto['dataset_file'] + self.rng_seed = train_dto['rng_seed'] + self.npv = train_dto['npv'] + + # Angular thresholds + self.theta = train_dto['theta'] + + # More tracking parameters + self.min_length = train_dto['min_length'] + self.max_length = train_dto['max_length'] + self.binary_stopping_threshold = train_dto['binary_stopping_threshold'] + + # Reward parameters + self.alignment_weighting = train_dto['alignment_weighting'] + + # Model parameters + self.hidden_dims = train_dto['hidden_dims'] + + # Environment parameters + self.n_actor = train_dto['n_actor'] + self.n_dirs = train_dto['n_dirs'] + + # Oracle parameters + self.oracle_checkpoint = train_dto['oracle_checkpoint'] + self.oracle_bonus = train_dto['oracle_bonus'] + self.oracle_validator = train_dto['oracle_validator'] + self.oracle_stopping_criterion = train_dto['oracle_stopping_criterion'] + + # Tractometer parameters + self.tractometer_validator = train_dto['tractometer_validator'] + self.tractometer_dilate = train_dto['tractometer_dilate'] + self.tractometer_reference = train_dto['tractometer_reference'] + self.scoring_data = train_dto['scoring_data'] + + self.compute_reward = True # Always compute reward during training + self.fa_map = None + + # Various parameters + self.comet_experiment = comet_experiment + self.last_episode = 0 + + self.device = torch.device( + "cuda" if torch.cuda.is_available() else "cpu") + + self.use_comet = train_dto['use_comet'] + + # RNG + 
torch.manual_seed(self.rng_seed) + np.random.seed(self.rng_seed) + self.rng = np.random.RandomState(seed=self.rng_seed) + random.seed(self.rng_seed) + + directory = pjoin(self.experiment_path, 'model') + if not os.path.exists(directory): + os.makedirs(directory) + + self.hyperparameters = { + # RL parameters + # TODO: Make sure all parameters are logged + 'name': self.name, + 'experiment': self.experiment, + 'max_ep': self.max_ep, + 'log_interval': self.log_interval, + 'lr': self.lr, + 'gamma': self.gamma, + # Data parameters + 'step_size': self.step_size, + 'random_seed': self.rng_seed, + 'dataset_file': self.dataset_file, + 'n_seeds_per_voxel': self.npv, + 'max_angle': self.theta, + 'min_length': self.min_length, + 'max_length': self.max_length, + 'binary_stopping_threshold': self.binary_stopping_threshold, + # Model parameters + 'experiment_path': self.experiment_path, + 'hidden_dims': self.hidden_dims, + 'last_episode': self.last_episode, + 'n_actor': self.n_actor, + 'n_dirs': self.n_dirs, + 'noise': self.noise, + # Reward parameters + 'alignment_weighting': self.alignment_weighting, + # Oracle parameters + 'oracle_bonus': self.oracle_bonus, + 'oracle_checkpoint': self.oracle_checkpoint, + 'oracle_stopping_criterion': self.oracle_stopping_criterion, + } + + def save_hyperparameters(self): + """ Save hyperparameters to json file + """ + # Add input and action size to hyperparameters + # These are added here because they are not known before + self.hyperparameters.update({'input_size': self.input_size, + 'action_size': self.action_size, + 'voxel_size': str(self.voxel_size)}) + + directory = pjoin(self.experiment_path, "model") + with open( + pjoin(directory, "hyperparameters.json"), + 'w' + ) as json_file: + json_file.write( + json.dumps( + self.hyperparameters, + indent=4, + separators=(',', ': '))) + + def save_model(self, alg): + """ Save the model state to disk + """ + + directory = pjoin(self.experiment_path, "model") + if not os.path.exists(directory): + 
os.makedirs(directory) + alg.agent.save(directory, "last_model_state") + + def rl_train( + self, + alg: RLAlgorithm, + env: BaseEnv, + valid_env: BaseEnv, + ): + """ Train the RL algorithm for N epochs. An epoch here corresponds to + running tracking on the training set until all streamlines are done. + This loop should be algorithm-agnostic. Between epochs, report stats + so they can be monitored during training. + + Parameters: + ----------- + alg: RLAlgorithm + The RL algorithm, either TD3, PPO or any others + env: BaseEnv + The tracking environment + valid_env: BaseEnv + The validation tracking environment (forward). + """ + + # Current epoch + i_episode = 0 + # Transition counter + t = 0 + + # Initialize Trackers, which will handle streamline generation and + # training + train_tracker = Tracker( + alg, self.n_actor, prob=0.0, compress=0.0) + + valid_tracker = Tracker( + alg, self.n_actor, + prob=1.0, compress=0.0) + + # Setup validators, which will handle validation and scoring + # of the generated streamlines + self.validators = [] + if self.tractometer_validator: + self.validators.append(TractometerValidator( + self.scoring_data, self.tractometer_reference, + dilate_endpoints=self.tractometer_dilate)) + if self.oracle_validator: + self.validators.append(OracleValidator( + self.oracle_checkpoint, self.device)) + + # Run tracking before training to see what an untrained network does + valid_env.load_subject() + valid_tractogram, valid_reward = valid_tracker.track_and_validate( + valid_env) + stopping_stats = self.stopping_stats(valid_tractogram) + print(stopping_stats) + if valid_tractogram: + if self.use_comet: + self.comet_monitor.log_losses(stopping_stats, i_episode) + + filename = self.save_rasmm_tractogram(valid_tractogram, + valid_env.subject_id, + valid_env.affine_vox2rasmm, + valid_env.reference) + scores = self.score_tractogram(filename, valid_env) + print(scores) + + if self.use_comet: + self.comet_monitor.log_losses(scores, i_episode) +
self.save_model(alg) + + # Display the results of the untrained network + self.log( + valid_tractogram, valid_reward, i_episode) + + # Main training loop + while i_episode < self.max_ep: + + # Last episode/epoch. Was initially for resuming experiments, but + # since they take so little time they are simply restarted from + # scratch. + self.last_episode = i_episode + + # Train for an episode + env.load_subject() + tractogram, losses, reward, reward_factors = \ + train_tracker.track_and_train(env) + + # Compute average streamline length + lengths = [len(s) for s in tractogram] + avg_length = np.mean(lengths) # Nb. of steps + + # Keep track of how many transitions were gathered + t += sum(lengths) + + # Compute average reward per streamline + avg_reward = reward / self.n_actor + + print( + f"Episode Num: {i_episode+1} " + f"Avg len: {avg_length:.3f} Avg. reward: " + f"{avg_reward:.3f} sub: {env.subject_id}") + + # Update monitors + self.train_reward_monitor.update(avg_reward) + self.train_reward_monitor.end_epoch(i_episode) + self.train_length_monitor.update(avg_length) + self.train_length_monitor.end_epoch(i_episode) + + i_episode += 1 + # Update comet logs + if self.use_comet and self.comet_experiment is not None: + mean_ep_reward_factors = mean_rewards(reward_factors) + self.comet_monitor.log_losses( + mean_ep_reward_factors, i_episode) + + self.comet_monitor.update_train( + self.train_reward_monitor, i_episode) + self.comet_monitor.update_train( + self.train_length_monitor, i_episode) + mean_ep_losses = mean_losses(losses) + self.comet_monitor.log_losses(mean_ep_losses, i_episode) + + # Time to do a valid run and display stats + if i_episode % self.log_interval == 0: + # Validation run + valid_env.load_subject() + valid_tractogram, valid_reward = \ + valid_tracker.track_and_validate(valid_env) + stopping_stats = self.stopping_stats(valid_tractogram) + print(stopping_stats) + + if self.use_comet: +
self.comet_monitor.log_losses(stopping_stats, i_episode) + filename = self.save_rasmm_tractogram( + valid_tractogram, valid_env.subject_id, + valid_env.affine_vox2rasmm, valid_env.reference) + scores = self.score_tractogram( + filename, valid_env) + print(scores) + + # Display what the network is capable of "now" + self.log( + valid_tractogram, valid_reward, i_episode) + if self.use_comet: + self.comet_monitor.log_losses(scores, i_episode) + self.save_model(alg) + + # End of training, save the model and hyperparameters and track + valid_env.load_subject() + valid_tractogram, valid_reward = valid_tracker.track_and_validate( + valid_env) + stopping_stats = self.stopping_stats(valid_tractogram) + print(stopping_stats) + + if self.use_comet: + self.comet_monitor.log_losses(stopping_stats, i_episode) + + filename = self.save_rasmm_tractogram(valid_tractogram, + valid_env.subject_id, + valid_env.affine_vox2rasmm, + valid_env.reference) + scores = self.score_tractogram(filename, valid_env) + print(scores) + + # Display what the network is capable of "now" + self.log( + valid_tractogram, valid_reward, i_episode) + + if self.use_comet: + self.comet_monitor.log_losses(scores, i_episode) + + self.save_model(alg) + + def run(self): + """ Prepare the environment, algorithm and trackers and run the + training loop + """ + + assert torch.cuda.is_available(), \ + "Training is only supported on CUDA devices." + + # Instantiate environment. Actions will be fed to it and new + # states will be returned.
The environment updates the streamline + # internally + env = self.get_env() + valid_env = self.get_valid_env() + + # Get example state to define NN input size + self.input_size = env.get_state_size() + self.action_size = env.get_action_size() + + # Voxel size + self.voxel_size = env.get_voxel_size() + + max_traj_length = env.max_nb_steps + + # The RL training algorithm + alg = self.get_alg(max_traj_length) + + # Save hyperparameters + self.save_hyperparameters() + + # Setup monitors to monitor training as it goes along + self.setup_monitors() + + # Setup comet monitors to monitor experiment as it goes along + if self.use_comet: + self.setup_comet() + + # Start training! + self.rl_train(alg, env, valid_env) + + +def add_rl_args(parser): + # Add RL training arguments. + parser.add_argument('--max_ep', default=1000, type=int, + help='Number of episodes to run the training ' + 'algorithm') + parser.add_argument('--log_interval', default=50, type=int, + help='Log statistics, update comet, save the model ' + 'and hyperparameters at n steps') + parser.add_argument('--lr', default=0.0005, type=float, + help='Learning rate') + parser.add_argument('--gamma', default=0.95, type=float, + help='Gamma param for reward discounting') + + add_reward_args(parser) + + +def add_training_args(parser): + # Add all training arguments here. Less prone to error than + # in every training script.
+ + add_experiment_args(parser) + add_data_args(parser) + add_environment_args(parser) + add_model_args(parser) + add_rl_args(parser) + add_tracking_args(parser) + add_oracle_args(parser) + add_tractometer_args(parser) diff --git a/TrackToLearn/trainers/trpo_train.py b/TrackToLearn/trainers/trpo_train.py deleted file mode 100644 index bfc48a8..0000000 --- a/TrackToLearn/trainers/trpo_train.py +++ /dev/null @@ -1,149 +0,0 @@ -#!/usr/bin/env python -import argparse -import comet_ml # noqa: F401 ugh -import torch - -from argparse import RawTextHelpFormatter -from comet_ml import Experiment as CometExperiment - -from TrackToLearn.trainers.a2c_train import add_a2c_args -from TrackToLearn.algorithms.trpo import TRPO -from TrackToLearn.experiment.experiment import ( - add_data_args, - add_environment_args, - add_experiment_args, - add_model_args, - add_tracking_args) -from TrackToLearn.experiment.train import ( - add_rl_args, - TrackToLearnTraining) - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -assert torch.cuda.is_available() - - -class TRPOTrackToLearnTraining(TrackToLearnTraining): - """ - Main RL tracking experiment - """ - - def __init__( - self, - trpo_train_dto: dict, - comet_experiment: CometExperiment, - ): - """ - Parameters - ---------- - trpo_train_dto: dict - TRPO training parameters - comet_experiment: CometExperiment - Allows for logging and experiment management. - """ - - super().__init__( - trpo_train_dto, - comet_experiment, - ) - - # TRPO-specific parameters - self.action_std = trpo_train_dto['action_std'] - self.lmbda = trpo_train_dto['lmbda'] - self.delta = trpo_train_dto['delta'] - self.max_backtracks = trpo_train_dto['max_backtracks'] - self.backtrack_coeff = trpo_train_dto['backtrack_coeff'] - self.K_epochs = trpo_train_dto['K_epochs'] - self.entropy_loss_coeff = trpo_train_dto['entropy_loss_coeff'] - - def save_hyperparameters(self): - """ Add TRPO-specific hyperparameters to self.hyperparameters - then save to file. 
- """ - - self.hyperparameters.update( - {'algorithm': 'TRPO', - 'action_std': self.action_std, - 'lmbda': self.lmbda, - 'delta': self.delta, - 'max_backtracks': self.max_backtracks, - 'backtrack_coeff': self.backtrack_coeff, - 'K_epochs': self.K_epochs, - 'entropy_loss_coeff': self.entropy_loss_coeff}) - - super().save_hyperparameters() - - def get_alg(self, max_nb_steps: int): - # The RL training algorithm - alg = TRPO( - self.input_size, - self.action_size, - self.hidden_dims, - self.action_std, - self.lr, - self.gamma, - self.lmbda, - self.entropy_loss_coeff, - self.delta, - self.max_backtracks, - self.backtrack_coeff, - self.K_epochs, - max_nb_steps, - self.n_actor, - self.rng, - device) - return alg - - -def add_trpo_args(parser): - parser.add_argument('--max_backtracks', default=10, type=int, - help='Backtracks for conjugate gradient') - parser.add_argument('--delta', default=0.001, type=float, - help='Clipping parameter for TRPO') - parser.add_argument('--backtrack_coeff', default=0.5, type=float, - help='Backtracking coefficient') - parser.add_argument('--K_epochs', default=5, type=int, - help='Train the model for K epochs') - - -def parse_args(): - """ Generate a tractogram from a trained recurrent model. 
""" - parser = argparse.ArgumentParser( - description=parse_args.__doc__, - formatter_class=RawTextHelpFormatter) - - add_experiment_args(parser) - add_data_args(parser) - - add_environment_args(parser) - add_model_args(parser) - add_rl_args(parser) - add_tracking_args(parser) - - add_a2c_args(parser) - add_trpo_args(parser) - - arguments = parser.parse_args() - return arguments - - -def main(): - """ Main tracking script """ - args = parse_args() - print(args) - - experiment = CometExperiment(project_name=args.experiment, - workspace='TrackToLearn', parse_args=False, - auto_metric_logging=False, - disabled=not args.use_comet) - - # Finally, get experiments, and train your models: - trpo_experiment = TRPOTrackToLearnTraining( - # Dataset params - vars(args), - experiment, - ) - trpo_experiment.run() - - -if __name__ == '__main__': - main() diff --git a/TrackToLearn/trainers/vpg_train.py b/TrackToLearn/trainers/vpg_train.py deleted file mode 100644 index 1b4c6b1..0000000 --- a/TrackToLearn/trainers/vpg_train.py +++ /dev/null @@ -1,128 +0,0 @@ -#!/usr/bin/env python -import argparse -import comet_ml # noqa: F401 ugh -import torch - -from argparse import RawTextHelpFormatter -from comet_ml import Experiment as CometExperiment - -from TrackToLearn.algorithms.vpg import VPG -from TrackToLearn.experiment.experiment import ( - add_data_args, - add_environment_args, - add_experiment_args, - add_model_args, - add_tracking_args) -from TrackToLearn.experiment.train import ( - add_rl_args, - TrackToLearnTraining) - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - -assert torch.cuda.is_available() - - -class VPGTrackToLearnTraining(TrackToLearnTraining): - """ - Vanilla Policy Gradient experiment. - """ - - def __init__( - self, - vpg_train_dto: dict, - comet_experiment: CometExperiment, - ): - """ - Parameters - ---------- - vpg_train_dto: dict - VPG training parameters - comet_experiment: CometExperiment - Allows for logging and experiment management. 
- """ - - super().__init__( - vpg_train_dto, - comet_experiment, - ) - - # VPG-specific parameters - self.action_std = vpg_train_dto['action_std'] - self.entropy_loss_coeff = vpg_train_dto['entropy_loss_coeff'] - - def save_hyperparameters(self): - """ Add VPG-specific hyperparameters to self.hyperparameters - then save to file. - """ - - self.hyperparameters.update( - {'algorithm': 'VPG', - 'action_std': self.action_std, - 'entropy_loss_coeff': self.entropy_loss_coeff}) - - super().save_hyperparameters() - - def get_alg(self, max_nb_steps: int): - # The RL training algorithm - alg = VPG( - self.input_size, - self.action_size, - self.hidden_dims, - self.action_std, - self.lr, - self.gamma, - self.entropy_loss_coeff, - max_nb_steps, - self.n_actor, - self.rng, - device) - return alg - - -def add_vpg_args(parser): - parser.add_argument('--entropy_loss_coeff', default=0.0001, type=float, - help='Entropy bonus coefficient') - parser.add_argument('--action_std', default=0.0, type=float, - help='Standard deviation used of the action') - - -def parse_args(): - """ Generate a tractogram from a trained recurrent model. 
""" - parser = argparse.ArgumentParser( - description=parse_args.__doc__, - formatter_class=RawTextHelpFormatter) - - add_experiment_args(parser) - add_data_args(parser) - - add_environment_args(parser) - add_model_args(parser) - add_rl_args(parser) - add_vpg_args(parser) - add_tracking_args(parser) - - arguments = parser.parse_args() - return arguments - - -def main(): - """ Main tracking script """ - args = parse_args() - print(args) - - experiment = CometExperiment(project_name=args.experiment, - workspace='TrackToLearn', parse_args=False, - auto_metric_logging=False, - disabled=not args.use_comet) - - # Finally, get experiments, and train your models: - vpg_experiment = VPGTrackToLearnTraining( - # Dataset params - vars(args), - experiment, - ) - vpg_experiment.run() - - -if __name__ == '__main__': - main() diff --git a/TrackToLearn/utils/comet_monitor.py b/TrackToLearn/utils/comet_monitor.py index 47db938..5a14fde 100644 --- a/TrackToLearn/utils/comet_monitor.py +++ b/TrackToLearn/utils/comet_monitor.py @@ -1,3 +1,5 @@ +import numpy as np + from os.path import join as pjoin from comet_ml import Experiment @@ -105,18 +107,22 @@ def update( step=i_episode) def log_losses(self, loss_dict, i): - self.e.log_metrics(loss_dict, step=i) + for k, v in loss_dict.items(): + if type(v) is np.ndarray: + self.e.log_histogram_3d(v, name=k, step=i) + else: + self.e.log_metric(k, v, step=i) def update_train( self, - reward_monitor, + monitor, i_episode, ): - reward_x, reward_y = zip(*reward_monitor.epochs) + x, y = zip(*monitor.epochs) self.e.log_metrics( { - self.prefix + "Train Reward": reward_y[-1], + self.prefix + monitor.name: y[-1], }, step=i_episode diff --git a/TrackToLearn/utils/utils.py b/TrackToLearn/utils/utils.py index 05a74fa..271cd83 100644 --- a/TrackToLearn/utils/utils.py +++ b/TrackToLearn/utils/utils.py @@ -1,5 +1,8 @@ +import math import os import sys + +from dipy.core.geometry import sphere2cart from os.path import join as pjoin from time import time @@ 
-33,8 +36,8 @@ class LossHistory(object): monitor.epochs # returns the loss curve as a list """ - def __init__(self, experiment_id, filename, path): - self.experiment_id = experiment_id + def __init__(self, name, filename, path): + self.name = name self.history = [] self.epochs = [] self.sum = 0.0 @@ -106,5 +109,24 @@ def __exit__(self, type, value, tb): print("{:.2f} sec.".format(time() - self.start)) -def normalize_vectors(v): - return v / np.sqrt(np.sum(v ** 2, axis=-1, keepdims=True)) +def from_sphere(actions, sphere, norm=1.): + vertices = sphere.vertices[actions] + return vertices * norm + + +def normalize_vectors(v, norm=1.): + # v = (v / np.sqrt(np.sum(v ** 2, axis=-1, keepdims=True))) * norm + v = (v / np.sqrt(np.einsum('...i,...i', v, v))[..., None]) * norm + # assert np.all(np.isnan(v) == False), (v, np.argwhere(np.isnan(v))) + return v + + +def from_polar(actions, radius=1.): + + radii = np.ones((actions.shape[0])) * radius + theta = ((actions[..., 0] + 1) / 2.) * (math.pi) + phi = ((actions[..., 1] + 1) / 2.) 
* (2 * math.pi) + + X, Y, Z = sphere2cart(radii, theta, phi) + cart_directions = np.stack((X, Y, Z), axis=-1) + return cart_directions diff --git a/cc_scripts/a2c_search_exp1_fibercup.sh b/cc_scripts/a2c_search_exp1_fibercup.sh deleted file mode 100755 index a8a8256..0000000 --- a/cc_scripts/a2c_search_exp1_fibercup.sh +++ /dev/null @@ -1,96 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." 
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n steps - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=A2C_FiberCupSearchExp1 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/a2c_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/a2c_search_exp1_ismrm2015.sh b/cc_scripts/a2c_search_exp1_ismrm2015.sh deleted file mode 100755 index 955fd20..0000000 --- a/cc_scripts/a2c_search_exp1_ismrm2015.sh +++ /dev/null @@ -1,96 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs 
(per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n steps - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=2 # Seed per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=A2C_ISMRM2015SearchExp1 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/a2c_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/a2c_search_exp2_fibercup.sh b/cc_scripts/a2c_search_exp2_fibercup.sh deleted file mode 100755 index f199bc5..0000000 --- a/cc_scripts/a2c_search_exp2_fibercup.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn 
-WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n steps - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=100 # Seed per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=A2C_FiberCupSearchExp2 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/a2c_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --interface_seeding \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/a2c_search_exp2_ismrm2015.sh b/cc_scripts/a2c_search_exp2_ismrm2015.sh deleted file mode 100755 index 88635ef..0000000 --- a/cc_scripts/a2c_search_exp2_ismrm2015.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR 
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n steps - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=20 # Seed per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=A2C_ISMRM2015SearchExp2 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/a2c_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --interface_seeding \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/a2c_search_exp3_fibercup.sh b/cc_scripts/a2c_search_exp3_fibercup.sh deleted file mode 100755 index 674da86..0000000 --- a/cc_scripts/a2c_search_exp3_fibercup.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR 
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=A2C_FiberCupSearchExp3
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/a2c_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --no_retrack \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/a2c_search_exp3_ismrm2015.sh b/cc_scripts/a2c_search_exp3_ismrm2015.sh
deleted file mode 100755
index 7696208..0000000
--- a/cc_scripts/a2c_search_exp3_ismrm2015.sh
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=2 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=A2C_ISMRM2015SearchExp3
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/a2c_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --no_retrack \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/acktr_search_exp1_fibercup.sh b/cc_scripts/acktr_search_exp1_fibercup.sh
deleted file mode 100755
index ae55567..0000000
--- a/cc_scripts/acktr_search_exp1_fibercup.sh
+++ /dev/null
@@ -1,96 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=ACKTR_FiberCupSearchExp1
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/acktr_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/acktr_search_exp1_ismrm2015.sh b/cc_scripts/acktr_search_exp1_ismrm2015.sh
deleted file mode 100755
index d33d63c..0000000
--- a/cc_scripts/acktr_search_exp1_ismrm2015.sh
+++ /dev/null
@@ -1,96 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=2 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=ACKTR_ISMRM2015SearchExp1
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/acktr_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/acktr_search_exp2_fibercup.sh b/cc_scripts/acktr_search_exp2_fibercup.sh
deleted file mode 100755
index 9e306f0..0000000
--- a/cc_scripts/acktr_search_exp2_fibercup.sh
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=100 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=ACKTR_FiberCupSearchExp2
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/acktr_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --interface_seeding \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/acktr_search_exp2_ismrm2015.sh b/cc_scripts/acktr_search_exp2_ismrm2015.sh
deleted file mode 100755
index 2ca5f82..0000000
--- a/cc_scripts/acktr_search_exp2_ismrm2015.sh
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=20 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=ACKTR_ISMRM2015SearchExp2
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/acktr_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --interface_seeding \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/acktr_search_exp3_fibercup.sh b/cc_scripts/acktr_search_exp3_fibercup.sh
deleted file mode 100755
index db9b20f..0000000
--- a/cc_scripts/acktr_search_exp3_fibercup.sh
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=ACKTR_FiberCupSearchExp3
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/acktr_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --no_retrack \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/acktr_search_exp3_ismrm2015.sh b/cc_scripts/acktr_search_exp3_ismrm2015.sh
deleted file mode 100755
index 4e22cb2..0000000
--- a/cc_scripts/acktr_search_exp3_ismrm2015.sh
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=2 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=ACKTR_ISMRM2015SearchExp3
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/acktr_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --no_retrack \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/ddpg_search_exp1_fibercup.sh b/cc_scripts/ddpg_search_exp1_fibercup.sh
deleted file mode 100755
index ce69bbe..0000000
--- a/cc_scripts/ddpg_search_exp1_fibercup.sh
+++ /dev/null
@@ -1,96 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=60 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=DDPG_FiberCupSearchExp1
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/ddpg_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/ddpg_search_exp1_ismrm2015.sh b/cc_scripts/ddpg_search_exp1_ismrm2015.sh
deleted file mode 100755
index 5da6a6a..0000000
--- a/cc_scripts/ddpg_search_exp1_ismrm2015.sh
+++ /dev/null
@@ -1,96 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=2 # Seed per voxel
-theta=60 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=DDPG_ISMRM2015SearchExp1
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/ddpg_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/ddpg_search_exp2_fibercup.sh b/cc_scripts/ddpg_search_exp2_fibercup.sh
deleted file mode 100755
index 29691ab..0000000
--- a/cc_scripts/ddpg_search_exp2_fibercup.sh
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=100 # Seed per voxel
-theta=60 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=DDPG_FiberCupSearchExp2
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/ddpg_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --interface_seeding \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/ddpg_search_exp2_ismrm2015.sh b/cc_scripts/ddpg_search_exp2_ismrm2015.sh
deleted file mode 100755
index e32c2e6..0000000
--- a/cc_scripts/ddpg_search_exp2_ismrm2015.sh
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n steps - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=20 # Seed per voxel -theta=60 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=DDPG_ISMRM2015SearchExp2 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/ddpg_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --interface_seeding \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/ddpg_search_exp3_fibercup.sh b/cc_scripts/ddpg_search_exp3_fibercup.sh deleted file mode 100755 index d3d3a2a..0000000 --- a/cc_scripts/ddpg_search_exp3_fibercup.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR 
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n steps - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=60 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=DDPG_FiberCupSearchExp3 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/ddpg_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --no_retrack \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/ddpg_search_exp3_ismrm2015.sh b/cc_scripts/ddpg_search_exp3_ismrm2015.sh deleted file mode 100755 index 2359560..0000000 --- a/cc_scripts/ddpg_search_exp3_ismrm2015.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR 
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n steps - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=2 # Seed per voxel -theta=60 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=DDPG_ISMRM2015SearchExp3 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/ddpg_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --no_retrack \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/ppo_search_exp1_fibercup.sh b/cc_scripts/ppo_search_exp1_fibercup.sh deleted file mode 100755 index be8b95c..0000000 --- a/cc_scripts/ppo_search_exp1_fibercup.sh +++ /dev/null @@ -1,96 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn 
-WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n steps - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=PPO_FiberCupSearchExp1 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/ppo_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/ppo_search_exp1_ismrm2015.sh b/cc_scripts/ppo_search_exp1_ismrm2015.sh deleted file mode 100755 index 645f11a..0000000 --- a/cc_scripts/ppo_search_exp1_ismrm2015.sh +++ /dev/null @@ -1,96 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn 
-WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n steps - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=2 # Seed per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=PPO_ISMRM2015SearchExp1 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/ppo_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/ppo_search_exp2_fibercup.sh b/cc_scripts/ppo_search_exp2_fibercup.sh deleted file mode 100755 index ee9cecd..0000000 --- a/cc_scripts/ppo_search_exp2_fibercup.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn 
-WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n steps - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=100 # Seed per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=PPO_FiberCupSearchExp2 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/ppo_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --interface_seeding \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/ppo_search_exp2_ismrm2015.sh b/cc_scripts/ppo_search_exp2_ismrm2015.sh deleted file mode 100755 index 2986df5..0000000 --- a/cc_scripts/ppo_search_exp2_ismrm2015.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR 
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n steps - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=20 # Seed per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=PPO_ISMRM2015SearchExp2 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/ppo_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --interface_seeding \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/ppo_search_exp3_fibercup.sh b/cc_scripts/ppo_search_exp3_fibercup.sh deleted file mode 100755 index e03b5d1..0000000 --- a/cc_scripts/ppo_search_exp3_fibercup.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR 
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n steps - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=PPO_FiberCupSearchExp3 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/ppo_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --no_retrack \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/ppo_search_exp3_ismrm2015.sh b/cc_scripts/ppo_search_exp3_ismrm2015.sh deleted file mode 100755 index 3cd2404..0000000 --- a/cc_scripts/ppo_search_exp3_ismrm2015.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn 
-WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n steps - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=2 # Seed per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=PPO_ISMRM2015SearchExp3 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/ppo_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --no_retrack \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/sac_auto_search_cmc_asym_fibercup.sh b/cc_scripts/sac_auto_search_cmc_asym_fibercup.sh deleted file mode 100755 index f268e14..0000000 --- a/cc_scripts/sac_auto_search_cmc_asym_fibercup.sh +++ /dev/null @@ -1,71 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=~/${LOCAL_TRACK_TO_LEARN_DATA}/ -mkdir -p $WORK_DATASET_FOLDER - -VALIDATION_SUBJECT_ID=fibercup_asym -SUBJECT_ID=fibercup_asym -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -echo "Transfering data to working folder..." 
-mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} -cp -rn $DATASET_FOLDER/datasets/${SUBJECT_ID} $WORK_DATASET_FOLDER/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -max_ep=500 # Chosen empirically -log_interval=50 # Log at n steps - -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -npv=2 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SACAutoFiberCupSearch_CmcAsym - -ID=$(date +"%F-%H_%M_%S")_cmc_asym - -seeds=(1111) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/searchers/sac_auto_searcher.py \ - "$DEST_FOLDER" \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - --scoring_data="${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --use_gpu \ - --use_comet \ - --run_tractometer \ - --cmc \ - --asymmetric \ - --interface_seeding - # --render - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/cc_scripts/sac_auto_search_cmc_asym_ismrm2015.sh b/cc_scripts/sac_auto_search_cmc_asym_ismrm2015.sh deleted file mode 100755 index d9f772c..0000000 --- a/cc_scripts/sac_auto_search_cmc_asym_ismrm2015.sh +++ /dev/null @@ -1,71 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ 
-WORK_DATASET_FOLDER=~/${LOCAL_TRACK_TO_LEARN_DATA}/ -mkdir -p $WORK_DATASET_FOLDER - -VALIDATION_SUBJECT_ID=ismrm2015_asym -SUBJECT_ID=ismrm2015_asym -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -echo "Transfering data to working folder..." -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} -cp -rn $DATASET_FOLDER/datasets/${SUBJECT_ID} $WORK_DATASET_FOLDER/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -max_ep=500 # Chosen empirically -log_interval=50 # Log at n steps - -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -npv=2 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SACAutoISMRM2015Search_CmcAsym - -ID=$(date +"%F-%H_%M_%S")_cmc_asym - -seeds=(1111) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/searchers/sac_auto_searcher.py \ - "$DEST_FOLDER" \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - --scoring_data="${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --use_gpu \ - --use_comet \ - --run_tractometer \ - --cmc \ - --asymmetric \ - --interface_seeding - # --render - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done 
diff --git a/cc_scripts/sac_auto_search_exp1_fibercup.sh b/cc_scripts/sac_auto_search_exp1_fibercup.sh deleted file mode 100755 index 32f8e79..0000000 --- a/cc_scripts/sac_auto_search_exp1_fibercup.sh +++ /dev/null @@ -1,96 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transferring data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log every n steps - -# Model params -prob=0.0 # Noise to add to make the output probabilistic. 0 for deterministic - -# Env parameters -npv=10 # Seeds per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=SAC_Auto_FiberCupSearchExp1 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/sac_auto_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/sac_auto_search_exp1_ismrm2015.sh b/cc_scripts/sac_auto_search_exp1_ismrm2015.sh deleted file mode 100755 index a2f3492..0000000 --- a/cc_scripts/sac_auto_search_exp1_ismrm2015.sh +++ /dev/null @@ -1,96 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH 
--gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transferring data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log every n steps - -# Model params -prob=0.0 # Noise to add to make the output probabilistic. 0 for deterministic - -# Env parameters -npv=2 # Seeds per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=SAC_Auto_ISMRM2015SearchExp1 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/sac_auto_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/sac_auto_search_exp2_fibercup.sh b/cc_scripts/sac_auto_search_exp2_fibercup.sh deleted file mode 100755 index ae16adc..0000000 --- a/cc_scripts/sac_auto_search_exp2_fibercup.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transferring data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log every n steps - -# Model params -prob=0.0 # Noise to add to make the output probabilistic. 0 for deterministic - -# Env parameters -npv=100 # Seeds per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=SAC_Auto_FiberCupSearchExp2 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/sac_auto_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --interface_seeding \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/sac_auto_search_exp2_ismrm2015.sh b/cc_scripts/sac_auto_search_exp2_ismrm2015.sh deleted file mode 100755 index d9e5f87..0000000 --- a/cc_scripts/sac_auto_search_exp2_ismrm2015.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transferring data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log every n steps - -# Model params -prob=0.0 # Noise to add to make the output probabilistic. 0 for deterministic - -# Env parameters -npv=20 # Seeds per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=SAC_Auto_ISMRM2015SearchExp2 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/sac_auto_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --interface_seeding \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/sac_auto_search_exp3_fibercup.sh b/cc_scripts/sac_auto_search_exp3_fibercup.sh deleted file mode 100755 index 38f1f4c..0000000 --- a/cc_scripts/sac_auto_search_exp3_fibercup.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transferring data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log every n steps - -# Model params -prob=0.0 # Noise to add to make the output probabilistic. 0 for deterministic - -# Env parameters -npv=10 # Seeds per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=SAC_Auto_FiberCupSearchExp3 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/sac_auto_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --no_retrack \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/sac_auto_search_exp3_ismrm2015.sh b/cc_scripts/sac_auto_search_exp3_ismrm2015.sh deleted file mode 100755 index 5d58dca..0000000 --- a/cc_scripts/sac_auto_search_exp3_ismrm2015.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transferring data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log every n steps - -# Model params -prob=0.0 # Noise to add to make the output probabilistic. 0 for deterministic - -# Env parameters -npv=2 # Seeds per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=SAC_Auto_ISMRM2015SearchExp3 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/sac_auto_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --no_retrack \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/sac_auto_search_exp4_0dirs_fibercup.sh b/cc_scripts/sac_auto_search_exp4_0dirs_fibercup.sh deleted file mode 100755 index ad61038..0000000 --- a/cc_scripts/sac_auto_search_exp4_0dirs_fibercup.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transferring data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log every n steps - -# Model params -prob=0.0 # Noise to add to make the output probabilistic. 0 for deterministic - -# Env parameters -npv=10 # Seeds per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=SAC_Auto_FiberCup0dirsSearchExp4 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/sac_auto_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --n_dirs=0 \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/sac_auto_search_exp4_0dirs_ismrm2015.sh b/cc_scripts/sac_auto_search_exp4_0dirs_ismrm2015.sh deleted file mode 100755 index 2d815e5..0000000 --- a/cc_scripts/sac_auto_search_exp4_0dirs_ismrm2015.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transferring data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log every n steps - -# Model params -prob=0.0 # Noise to add to make the output probabilistic. 0 for deterministic - -# Env parameters -npv=2 # Seeds per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=SAC_Auto_ISMRM20150DirsSearchExp4 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/sac_auto_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --n_dirs=0 \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/sac_auto_search_exp4_2dirs_fibercup.sh b/cc_scripts/sac_auto_search_exp4_2dirs_fibercup.sh deleted file mode 100755 index aabdc35..0000000 --- a/cc_scripts/sac_auto_search_exp4_2dirs_fibercup.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transferring data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log every n steps - -# Model params -prob=0.0 # Noise to add to make the output probabilistic. 0 for deterministic - -# Env parameters -npv=10 # Seeds per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=SAC_Auto_FiberCup2dirsSearchExp4 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/sac_auto_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --n_dirs=2 \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/sac_auto_search_exp4_2dirs_ismrm2015.sh b/cc_scripts/sac_auto_search_exp4_2dirs_ismrm2015.sh deleted file mode 100755 index fdb92ec..0000000 --- a/cc_scripts/sac_auto_search_exp4_2dirs_ismrm2015.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transferring data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log every n steps - -# Model params -prob=0.0 # Noise to add to make the output probabilistic. 0 for deterministic - -# Env parameters -npv=2 # Seeds per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=SAC_Auto_ISMRM20152dirsSearchExp4 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/sac_auto_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --n_dirs=2 \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/sac_auto_search_exp4_nowm_fibercup.sh b/cc_scripts/sac_auto_search_exp4_nowm_fibercup.sh deleted file mode 100755 index 47d52c8..0000000 --- a/cc_scripts/sac_auto_search_exp4_nowm_fibercup.sh +++ /dev/null @@ -1,96 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=fibercup_nowm
-SUBJECT_ID=fibercup_nowm
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=SAC_Auto_FiberCupNoWMSearchExp4
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/sac_auto_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/sac_auto_search_exp4_nowm_ismrm2015.sh b/cc_scripts/sac_auto_search_exp4_nowm_ismrm2015.sh
deleted file mode 100755
index b253163..0000000
--- a/cc_scripts/sac_auto_search_exp4_nowm_ismrm2015.sh
+++ /dev/null
@@ -1,96 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=ismrm2015_nowm
-SUBJECT_ID=ismrm2015_nowm
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=2 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=SAC_Auto_ISMRM2015NoWMSearchExp4
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/sac_auto_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/sac_auto_search_exp4_raw_fibercup.sh b/cc_scripts/sac_auto_search_exp4_raw_fibercup.sh
deleted file mode 100755
index 142eeec..0000000
--- a/cc_scripts/sac_auto_search_exp4_raw_fibercup.sh
+++ /dev/null
@@ -1,96 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=fibercup_raw
-SUBJECT_ID=fibercup_raw
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=SAC_Auto_FiberCupRawSearchExp4
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/sac_auto_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/sac_auto_search_exp4_raw_ismrm2015.sh b/cc_scripts/sac_auto_search_exp4_raw_ismrm2015.sh
deleted file mode 100755
index bb0f798..0000000
--- a/cc_scripts/sac_auto_search_exp4_raw_ismrm2015.sh
+++ /dev/null
@@ -1,96 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=16000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=ismrm2015_raw
-SUBJECT_ID=ismrm2015_raw
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=2 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=SAC_Auto_ISMRM2015RawSearchExp4
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/sac_auto_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/sac_auto_search_exp5_len_fibercup.sh b/cc_scripts/sac_auto_search_exp5_len_fibercup.sh
deleted file mode 100755
index 7a1120d..0000000
--- a/cc_scripts/sac_auto_search_exp5_len_fibercup.sh
+++ /dev/null
@@ -1,96 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=SAC_Auto_FiberCupLengthSearchExp5
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/sac_auto_searcher_len.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/sac_auto_search_exp5_len_ismrm2015.sh b/cc_scripts/sac_auto_search_exp5_len_ismrm2015.sh
deleted file mode 100755
index 75fb024..0000000
--- a/cc_scripts/sac_auto_search_exp5_len_ismrm2015.sh
+++ /dev/null
@@ -1,93 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=2 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=SAC_Auto_ISMRM2015LengthSearchExp5
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/sac_auto_searcher_len.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/sac_auto_search_exp5_target_fibercup.sh b/cc_scripts/sac_auto_search_exp5_target_fibercup.sh
deleted file mode 100755
index dab8efc..0000000
--- a/cc_scripts/sac_auto_search_exp5_target_fibercup.sh
+++ /dev/null
@@ -1,96 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=SAC_Auto_FiberCupTargetSearchExp5
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/sac_auto_searcher_target.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/sac_auto_search_exp5_target_ismrm2015.sh b/cc_scripts/sac_auto_search_exp5_target_ismrm2015.sh
deleted file mode 100755
index 73763e0..0000000
--- a/cc_scripts/sac_auto_search_exp5_target_ismrm2015.sh
+++ /dev/null
@@ -1,96 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=2 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=SAC_Auto_ISMRM2015TargetSearchExp5
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/sac_auto_searcher_target.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/sac_auto_train_exp5_length_0.01_fibercup.sh b/cc_scripts/sac_auto_train_exp5_length_0.01_fibercup.sh
deleted file mode 100755
index 4ecab04..0000000
--- a/cc_scripts/sac_auto_train_exp5_length_0.01_fibercup.sh
+++ /dev/null
@@ -1,100 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gpus-per-node=v100l:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=12000M # memory (per node)
-#SBATCH --time=01-00:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-#SBATCH --array=0-4 # IMPORTANT
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n episodes
-lr=0.00001 # Learning rate
-gamma=0.75 # Gamma for reward discounting
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-
-EXPERIMENT=SAC_Auto_FiberCupTrainLength0.01Exp5
-
-ID=${SLURM_JOB_ID}_${SLURM_ARRAY_TASK_ID}
-
-seeds=(1111 2222 3333 4444 5555)
-
-rng_seed=${seeds[${SLURM_ARRAY_TASK_ID}]}
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/runners/sac_auto_train.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --lr=${lr} \
- --gamma=${gamma} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --length_weighting=0.01 \
- --use_gpu \
- --use_comet \
- --run_tractometer
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/
-
-# done
diff --git a/cc_scripts/sac_auto_train_exp5_length_0.01_ismrm2015.sh b/cc_scripts/sac_auto_train_exp5_length_0.01_ismrm2015.sh
deleted file mode 100755
index b3d876c..0000000
--- a/cc_scripts/sac_auto_train_exp5_length_0.01_ismrm2015.sh
+++ /dev/null
@@ -1,98 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gpus-per-node=v100l:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=12000M # memory (per node)
-#SBATCH --time=01-00:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-#SBATCH --array=0-4 # IMPORTANT
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n episodes
-lr=0.001 # Learning rate
-gamma=0.5 # Gamma for reward discounting
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-
-EXPERIMENT=SAC_Auto_ISMRM2015TrainLength0.01Exp5
-
-ID=${SLURM_JOB_ID}_${SLURM_ARRAY_TASK_ID}
-
-seeds=(1111 2222 3333 4444 5555)
-
-rng_seed=${seeds[${SLURM_ARRAY_TASK_ID}]}
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/runners/sac_auto_train.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --lr=${lr} \
- --gamma=${gamma} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --length_weighting=0.01 \
- --use_gpu \
- --use_comet \
- --run_tractometer
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/
diff --git a/cc_scripts/sac_auto_train_exp5_length_0.1_fibercup.sh b/cc_scripts/sac_auto_train_exp5_length_0.1_fibercup.sh
deleted file mode 100755
index b1537c0..0000000
--- a/cc_scripts/sac_auto_train_exp5_length_0.1_fibercup.sh
+++ /dev/null
@@ -1,100 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gpus-per-node=v100l:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=12000M # memory (per node)
-#SBATCH --time=01-00:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-#SBATCH --array=0-4 # IMPORTANT
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n episodes
-lr=0.0005 # Learning rate
-gamma=0.5 # Gamma for reward discounting
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-
-EXPERIMENT=SAC_Auto_FiberCupTrainLength0.1Exp5
-
-ID=${SLURM_JOB_ID}_${SLURM_ARRAY_TASK_ID}
-
-seeds=(1111 2222 3333 4444 5555)
-
-rng_seed=${seeds[${SLURM_ARRAY_TASK_ID}]}
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/runners/sac_auto_train.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --lr=${lr} \
- --gamma=${gamma} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --length_weighting=0.1 \
- --use_gpu \
- --use_comet \
- --run_tractometer
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/
-
-# done
diff --git a/cc_scripts/sac_auto_train_exp5_length_0.1_ismrm2015.sh b/cc_scripts/sac_auto_train_exp5_length_0.1_ismrm2015.sh
deleted file mode 100755
index 08e911d..0000000
--- a/cc_scripts/sac_auto_train_exp5_length_0.1_ismrm2015.sh
+++ /dev/null
@@ -1,98 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gpus-per-node=v100l:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=12000M # memory (per node)
-#SBATCH --time=01-00:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-#SBATCH --array=0-4 # IMPORTANT
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00005 # Learning rate -gamma=0.5 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_ISMRM2015TrainLength0.1Exp5 - -ID=${SLURM_JOB_ID}_${SLURM_ARRAY_TASK_ID} - -seeds=(1111 2222 3333 4444 5555) - -rng_seed=${seeds[${SLURM_ARRAY_TASK_ID}]} - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/runners/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --length_weighting=0.1 \ - --use_gpu \ - --use_comet \ - --run_tractometer - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ diff --git a/cc_scripts/sac_auto_train_exp5_length_0.5_fibercup.sh b/cc_scripts/sac_auto_train_exp5_length_0.5_fibercup.sh deleted file mode 100755 index 1ae9ae1..0000000 --- a/cc_scripts/sac_auto_train_exp5_length_0.5_fibercup.sh +++ /dev/null @@ -1,100 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gpus-per-node=v100l:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=12000M # memory (per node) -#SBATCH --time=01-00:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL -#SBATCH --array=0-4 # IMPORTANT - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit 
if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00005 # Learning rate -gamma=0.5 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_FiberCupTrainLength0.5Exp5 - -ID=${SLURM_JOB_ID}_${SLURM_ARRAY_TASK_ID} - -seeds=(1111 2222 3333 4444 5555) - -rng_seed=${seeds[${SLURM_ARRAY_TASK_ID}]} - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/runners/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --length_weighting=0.5 \ - --use_gpu \ - --use_comet \ - --run_tractometer - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -# done diff --git a/cc_scripts/sac_auto_train_exp5_length_0.5_ismrm2015.sh b/cc_scripts/sac_auto_train_exp5_length_0.5_ismrm2015.sh deleted file mode 100755 index 3eabfe0..0000000 --- a/cc_scripts/sac_auto_train_exp5_length_0.5_ismrm2015.sh +++ /dev/null @@ -1,98 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gpus-per-node=v100l:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=12000M # memory (per node) -#SBATCH --time=01-00:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL -#SBATCH --array=0-4 # IMPORTANT - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - 
-set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00005 # Learning rate -gamma=0.5 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_ISMRM2015TrainLength0.5Exp5 - -ID=${SLURM_JOB_ID}_${SLURM_ARRAY_TASK_ID} - -seeds=(1111 2222 3333 4444 5555) - -rng_seed=${seeds[${SLURM_ARRAY_TASK_ID}]} - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/runners/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --length_weighting=0.5 \ - --use_gpu \ - --use_comet \ - --run_tractometer - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ diff --git a/cc_scripts/sac_auto_train_exp5_length_1_fibercup.sh b/cc_scripts/sac_auto_train_exp5_length_1_fibercup.sh deleted file mode 100755 index 92ac008..0000000 --- a/cc_scripts/sac_auto_train_exp5_length_1_fibercup.sh +++ /dev/null @@ -1,100 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gpus-per-node=v100l:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=12000M # memory (per node) -#SBATCH --time=01-00:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL -#SBATCH --array=0-4 # IMPORTANT - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if 
any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00001 # Learning rate -gamma=0.5 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_FiberCupTrainLength1Exp5 - -ID=${SLURM_JOB_ID}_${SLURM_ARRAY_TASK_ID} - -seeds=(1111 2222 3333 4444 5555) - -rng_seed=${seeds[${SLURM_ARRAY_TASK_ID}]} - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/runners/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --length_weighting=1 \ - --use_gpu \ - --use_comet \ - --run_tractometer - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -# done diff --git a/cc_scripts/sac_auto_train_exp5_length_1_ismrm2015.sh b/cc_scripts/sac_auto_train_exp5_length_1_ismrm2015.sh deleted file mode 100755 index e20aec6..0000000 --- a/cc_scripts/sac_auto_train_exp5_length_1_ismrm2015.sh +++ /dev/null @@ -1,98 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gpus-per-node=v100l:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=12000M # memory (per node) -#SBATCH --time=01-00:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL -#SBATCH --array=0-4 # IMPORTANT - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # 
exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0001 # Learning rate -gamma=0.5 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_ISMRM2015TrainLength1Exp5 - -ID=${SLURM_JOB_ID}_${SLURM_ARRAY_TASK_ID} - -seeds=(1111 2222 3333 4444 5555) - -rng_seed=${seeds[${SLURM_ARRAY_TASK_ID}]} - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/runners/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --length_weighting=1 \ - --use_gpu \ - --use_comet \ - --run_tractometer - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ diff --git a/cc_scripts/sac_auto_train_exp5_length_5_fibercup.sh b/cc_scripts/sac_auto_train_exp5_length_5_fibercup.sh deleted file mode 100755 index 29d2e70..0000000 --- a/cc_scripts/sac_auto_train_exp5_length_5_fibercup.sh +++ /dev/null @@ -1,100 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gpus-per-node=v100l:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=12000M # memory (per node) -#SBATCH --time=01-00:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL -#SBATCH --array=0-4 # IMPORTANT - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any 
command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00001 # Learning rate -gamma=0.75 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_FiberCupTrainLength5Exp5 - -ID=${SLURM_JOB_ID}_${SLURM_ARRAY_TASK_ID} - -seeds=(1111 2222 3333 4444 5555) - -rng_seed=${seeds[${SLURM_ARRAY_TASK_ID}]} - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/runners/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --length_weighting=5 \ - --use_gpu \ - --use_comet \ - --run_tractometer - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -# done diff --git a/cc_scripts/sac_auto_train_exp5_length_5_ismrm2015.sh b/cc_scripts/sac_auto_train_exp5_length_5_ismrm2015.sh deleted file mode 100755 index f99562f..0000000 --- a/cc_scripts/sac_auto_train_exp5_length_5_ismrm2015.sh +++ /dev/null @@ -1,98 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gpus-per-node=v100l:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=12000M # memory (per node) -#SBATCH --time=01-00:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL -#SBATCH --array=0-4 # IMPORTANT - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # 
exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0001 # Learning rate -gamma=0.5 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_ISMRM2015TrainLength5Exp5 - -ID=${SLURM_JOB_ID}_${SLURM_ARRAY_TASK_ID} - -seeds=(1111 2222 3333 4444 5555) - -rng_seed=${seeds[${SLURM_ARRAY_TASK_ID}]} - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/runners/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --length_weighting=5 \ - --use_gpu \ - --use_comet \ - --run_tractometer - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ diff --git a/cc_scripts/sac_auto_train_exp5_length_ismrm2015.sh b/cc_scripts/sac_auto_train_exp5_length_ismrm2015.sh deleted file mode 100755 index bc643c4..0000000 --- a/cc_scripts/sac_auto_train_exp5_length_ismrm2015.sh +++ /dev/null @@ -1,76 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER= -WORK_DATASET_FOLDER= - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -r ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -r ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n steps -lr=0.00005 # Learning rate -gamma=0.85 # Gamma for reward discounting - -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -npv=2 # Seed per voxel -theta=30 # Maximum angle for streamline curvature -n_actor=50000 -length_weighting=0.1 - -EXPERIMENT=SAC_Auto_ISMRM2015TrainLength075 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/runners/sac_auto_train.py \ - "$DEST_FOLDER" \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --length_weighting=${length_weighting} \ - --use_gpu \ - --use_comet \ - --run_tractometer \ - --n_actor=${n_actor} - # --render - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/cc_scripts/sac_auto_train_exp5_target_100_fibercup.sh b/cc_scripts/sac_auto_train_exp5_target_100_fibercup.sh deleted file mode 100755 index 30948ce..0000000 --- a/cc_scripts/sac_auto_train_exp5_target_100_fibercup.sh +++ /dev/null 
@@ -1,100 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gpus-per-node=v100l:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=12000M # memory (per node) -#SBATCH --time=01-00:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL -#SBATCH --array=0-4 # IMPORTANT - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00005 # Learning rate -gamma=0.95 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_FiberCupTrainTarget100Exp5 - -ID=${SLURM_JOB_ID}_${SLURM_ARRAY_TASK_ID} - -seeds=(1111 2222 3333 4444 5555) - -rng_seed=${seeds[${SLURM_ARRAY_TASK_ID}]} - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/runners/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --target_bonus_factor=100 \ - --use_gpu \ - --use_comet \ - --run_tractometer - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -# done diff --git a/cc_scripts/sac_auto_train_exp5_target_100_ismrm2015.sh b/cc_scripts/sac_auto_train_exp5_target_100_ismrm2015.sh deleted file mode 100755 index adbf2d5..0000000 --- a/cc_scripts/sac_auto_train_exp5_target_100_ismrm2015.sh +++ /dev/null @@ -1,98 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gpus-per-node=v100l:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=12000M # memory (per node) -#SBATCH --time=01-00:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL -#SBATCH --array=0-4 # IMPORTANT - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - 
-set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0001 # Learning rate -gamma=0.95 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_ISMRM2015TrainTarget100Exp5 - -ID=${SLURM_JOB_ID}_${SLURM_ARRAY_TASK_ID} - -seeds=(1111 2222 3333 4444 5555) - -rng_seed=${seeds[${SLURM_ARRAY_TASK_ID}]} - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/runners/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --target_bonus_factor=100 \ - --use_gpu \ - --use_comet \ - --run_tractometer - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ diff --git a/cc_scripts/sac_auto_train_exp5_target_10_fibercup.sh b/cc_scripts/sac_auto_train_exp5_target_10_fibercup.sh deleted file mode 100755 index cf35c62..0000000 --- a/cc_scripts/sac_auto_train_exp5_target_10_fibercup.sh +++ /dev/null @@ -1,100 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gpus-per-node=v100l:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=12000M # memory (per node) -#SBATCH --time=01-00:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL -#SBATCH --array=0-4 # IMPORTANT - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit 
if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00001 # Learning rate -gamma=0.9 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_FiberCupTrainTarget10Exp5 - -ID=${SLURM_JOB_ID}_${SLURM_ARRAY_TASK_ID} - -seeds=(1111 2222 3333 4444 5555) - -rng_seed=${seeds[${SLURM_ARRAY_TASK_ID}]} - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/runners/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --target_bonus_factor=10 \ - --use_gpu \ - --use_comet \ - --run_tractometer - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -# done diff --git a/cc_scripts/sac_auto_train_exp5_target_10_ismrm2015.sh b/cc_scripts/sac_auto_train_exp5_target_10_ismrm2015.sh deleted file mode 100755 index 31e1a1e..0000000 --- a/cc_scripts/sac_auto_train_exp5_target_10_ismrm2015.sh +++ /dev/null @@ -1,98 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gpus-per-node=v100l:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=12000M # memory (per node) -#SBATCH --time=01-00:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL -#SBATCH --array=0-4 # IMPORTANT - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set 
-e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0001 # Learning rate -gamma=0.75 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_ISMRM2015TrainTarget10Exp5 - -ID=${SLURM_JOB_ID}_${SLURM_ARRAY_TASK_ID} - -seeds=(1111 2222 3333 4444 5555) - -rng_seed=${seeds[${SLURM_ARRAY_TASK_ID}]} - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/runners/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --target_bonus_factor=10 \ - --use_gpu \ - --use_comet \ - --run_tractometer - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ diff --git a/cc_scripts/sac_auto_train_exp5_target_1_fibercup.sh b/cc_scripts/sac_auto_train_exp5_target_1_fibercup.sh deleted file mode 100755 index ca60c1e..0000000 --- a/cc_scripts/sac_auto_train_exp5_target_1_fibercup.sh +++ /dev/null @@ -1,100 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gpus-per-node=v100l:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=12000M # memory (per node) -#SBATCH --time=01-00:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL -#SBATCH --array=0-4 # IMPORTANT - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if 
any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.001 # Learning rate -gamma=0.5 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_FiberCupTrainTarget1Exp5 - -ID=${SLURM_JOB_ID}_${SLURM_ARRAY_TASK_ID} - -seeds=(1111 2222 3333 4444 5555) - -rng_seed=${seeds[${SLURM_ARRAY_TASK_ID}]} - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/runners/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --target_bonus_factor=1 \ - --use_gpu \ - --use_comet \ - --run_tractometer - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -# done diff --git a/cc_scripts/sac_auto_train_exp5_target_1_ismrm2015.sh b/cc_scripts/sac_auto_train_exp5_target_1_ismrm2015.sh deleted file mode 100755 index 58a3bc2..0000000 --- a/cc_scripts/sac_auto_train_exp5_target_1_ismrm2015.sh +++ /dev/null @@ -1,98 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gpus-per-node=v100l:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=12000M # memory (per node) -#SBATCH --time=01-00:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL -#SBATCH --array=0-4 # IMPORTANT - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # 
exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0001 # Learning rate -gamma=0.5 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_ISMRM2015TrainTarget1Exp5 - -ID=${SLURM_JOB_ID}_${SLURM_ARRAY_TASK_ID} - -seeds=(1111 2222 3333 4444 5555) - -rng_seed=${seeds[${SLURM_ARRAY_TASK_ID}]} - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/runners/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --target_bonus_factor=1 \ - --use_gpu \ - --use_comet \ - --run_tractometer - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ diff --git a/cc_scripts/sac_search_exp1_fibercup.sh b/cc_scripts/sac_search_exp1_fibercup.sh deleted file mode 100755 index c3b0070..0000000 --- a/cc_scripts/sac_search_exp1_fibercup.sh +++ /dev/null @@ -1,96 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ 
-WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n steps - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=SAC_FiberCupSearchExp1 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/sac_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/sac_search_exp1_ismrm2015.sh b/cc_scripts/sac_search_exp1_ismrm2015.sh deleted file mode 100755 index 7aa683c..0000000 --- a/cc_scripts/sac_search_exp1_ismrm2015.sh +++ /dev/null @@ -1,96 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn 
-WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n steps - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=2 # Seed per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=SAC_ISMRM2015SearchExp1 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/sac_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/sac_search_exp2_fibercup.sh b/cc_scripts/sac_search_exp2_fibercup.sh deleted file mode 100755 index ac8e1d4..0000000 --- a/cc_scripts/sac_search_exp2_fibercup.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn 
-WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n steps - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=100 # Seed per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=SAC_FiberCupSearchExp2 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/sac_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --interface_seeding \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/sac_search_exp2_ismrm2015.sh b/cc_scripts/sac_search_exp2_ismrm2015.sh deleted file mode 100755 index c688598..0000000 --- a/cc_scripts/sac_search_exp2_ismrm2015.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR 
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n steps - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=20 # Seed per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=SAC_ISMRM2015SearchExp2 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/sac_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --interface_seeding \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/sac_search_exp3_fibercup.sh b/cc_scripts/sac_search_exp3_fibercup.sh deleted file mode 100755 index c4b3aa6..0000000 --- a/cc_scripts/sac_search_exp3_fibercup.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR 
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn -WORK_DATASET_FOLDER=${WORK}/tracktolearn -mkdir -p $WORK_DATASET_FOLDER - -set -e # exit if any command fails - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -# Move stuff from data folder to working folder -mkdir -p $WORK_DATASET_FOLDER/datasets - -echo "Transfering data to working folder..." -cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ -cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/ - -# Data params -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n steps - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature -# n_dirs=0 - -EXPERIMENT=SAC_FiberCupSearchExp3 - -ID=$(date +"%F-%H_%M_%S") - -rng_seed=1111 - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/searchers/sac_searcher.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --prob=$prob \ - --no_retrack \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed" -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" diff --git a/cc_scripts/sac_search_exp3_ismrm2015.sh b/cc_scripts/sac_search_exp3_ismrm2015.sh deleted file mode 100755 index 87bbeb1..0000000 --- a/cc_scripts/sac_search_exp3_ismrm2015.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/bin/bash - -# Request resources -------------- -# Graham GPU node: 12 cores, 10G ram, 1 GPU -#SBATCH --account $SALLOC_ACCOUNT -#SBATCH --gres=gpu:1 # Number of GPUs (per node) -#SBATCH --cpus-per-task=12 # Number of cores (not cpus) -#SBATCH --mem=10000M # memory (per node) -#SBATCH --time=06-23:00 # time (DD-HH:MM) -#SBATCH --mail-type=BEGIN -#SBATCH --mail-type=END -#SBATCH --mail-type=FAIL -#SBATCH --mail-type=REQUEUE -#SBATCH --mail-type=ALL - -cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL - -module load python/3.8 -pwd -source .env/bin/activate -module load httpproxy -export DISPLAY=:0 - -set -e # exit if any command fails - -# This should point to your dataset folder -HOME=~ -WORK=$SLURM_TMPDIR -DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn 
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=2 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=SAC_ISMRM2015SearchExp3
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/sac_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --no_retrack \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/td3_search_exp1_fibercup.sh b/cc_scripts/td3_search_exp1_fibercup.sh
deleted file mode 100755
index bbef990..0000000
--- a/cc_scripts/td3_search_exp1_fibercup.sh
+++ /dev/null
@@ -1,96 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=60 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=TD3_FiberCupSearchExp1
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/td3_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/td3_search_exp1_ismrm2015.sh b/cc_scripts/td3_search_exp1_ismrm2015.sh
deleted file mode 100755
index 962c88f..0000000
--- a/cc_scripts/td3_search_exp1_ismrm2015.sh
+++ /dev/null
@@ -1,96 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=2 # Seed per voxel
-theta=60 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=TD3_ISMRM2015SearchExp1
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/td3_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/td3_search_exp2_fibercup.sh b/cc_scripts/td3_search_exp2_fibercup.sh
deleted file mode 100755
index dbf2608..0000000
--- a/cc_scripts/td3_search_exp2_fibercup.sh
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=100 # Seed per voxel
-theta=60 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=TD3_FiberCupSearchExp2
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/td3_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --interface_seeding \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/td3_search_exp2_ismrm2015.sh b/cc_scripts/td3_search_exp2_ismrm2015.sh
deleted file mode 100755
index c5ab44f..0000000
--- a/cc_scripts/td3_search_exp2_ismrm2015.sh
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=20 # Seed per voxel
-theta=60 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=TD3_ISMRM2015SearchExp2
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/td3_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --interface_seeding \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/td3_search_exp3_fibercup.sh b/cc_scripts/td3_search_exp3_fibercup.sh
deleted file mode 100755
index 8fb12d5..0000000
--- a/cc_scripts/td3_search_exp3_fibercup.sh
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=60 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=TD3_FiberCupSearchExp3
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/td3_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --no_retrack \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/td3_search_exp3_ismrm2015.sh b/cc_scripts/td3_search_exp3_ismrm2015.sh
deleted file mode 100755
index 5469c05..0000000
--- a/cc_scripts/td3_search_exp3_ismrm2015.sh
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=2 # Seed per voxel
-theta=60 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=TD3_ISMRM2015SearchExp3
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/td3_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --no_retrack \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/trpo_search_exp1_fibercup.sh b/cc_scripts/trpo_search_exp1_fibercup.sh
deleted file mode 100755
index 003a658..0000000
--- a/cc_scripts/trpo_search_exp1_fibercup.sh
+++ /dev/null
@@ -1,96 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=TRPO_FiberCupSearchExp1
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/trpo_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/trpo_search_exp1_ismrm2015.sh b/cc_scripts/trpo_search_exp1_ismrm2015.sh
deleted file mode 100755
index 69cfc5e..0000000
--- a/cc_scripts/trpo_search_exp1_ismrm2015.sh
+++ /dev/null
@@ -1,96 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=2 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=TRPO_ISMRM2015SearchExp1
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/trpo_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/trpo_search_exp2_fibercup.sh b/cc_scripts/trpo_search_exp2_fibercup.sh
deleted file mode 100755
index d4d754a..0000000
--- a/cc_scripts/trpo_search_exp2_fibercup.sh
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=100 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=TRPO_FiberCupSearchExp2
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/trpo_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --interface_seeding \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/trpo_search_exp2_ismrm2015.sh b/cc_scripts/trpo_search_exp2_ismrm2015.sh
deleted file mode 100755
index 462d98b..0000000
--- a/cc_scripts/trpo_search_exp2_ismrm2015.sh
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=20 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=TRPO_ISMRM2015SearchExp2
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/trpo_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --interface_seeding \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/trpo_search_exp3_fibercup.sh b/cc_scripts/trpo_search_exp3_fibercup.sh
deleted file mode 100755
index 87b57a1..0000000
--- a/cc_scripts/trpo_search_exp3_fibercup.sh
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=TRPO_FiberCupSearchExp3
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/trpo_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --no_retrack \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/trpo_search_exp3_ismrm2015.sh b/cc_scripts/trpo_search_exp3_ismrm2015.sh
deleted file mode 100755
index e088197..0000000
--- a/cc_scripts/trpo_search_exp3_ismrm2015.sh
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=2 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=TRPO_ISMRM2015SearchExp3
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/trpo_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --no_retrack \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/vpg_search_exp1_fibercup.sh b/cc_scripts/vpg_search_exp1_fibercup.sh
deleted file mode 100755
index c3b8c9c..0000000
--- a/cc_scripts/vpg_search_exp1_fibercup.sh
+++ /dev/null
@@ -1,96 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=VPG_FiberCupSearchExp1
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/vpg_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/vpg_search_exp1_ismrm2015.sh b/cc_scripts/vpg_search_exp1_ismrm2015.sh
deleted file mode 100755
index 6770d4a..0000000
--- a/cc_scripts/vpg_search_exp1_ismrm2015.sh
+++ /dev/null
@@ -1,96 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=2 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=VPG_ISMRM2015SearchExp1
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/vpg_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/vpg_search_exp2_fibercup.sh b/cc_scripts/vpg_search_exp2_fibercup.sh
deleted file mode 100755
index 80df528..0000000
--- a/cc_scripts/vpg_search_exp2_fibercup.sh
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=100 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=VPG_FiberCupSearchExp2
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/vpg_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --interface_seeding \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/vpg_search_exp2_ismrm2015.sh b/cc_scripts/vpg_search_exp2_ismrm2015.sh
deleted file mode 100755
index d2e78fa..0000000
--- a/cc_scripts/vpg_search_exp2_ismrm2015.sh
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=20 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=VPG_ISMRM2015SearchExp2
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/vpg_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --interface_seeding \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/vpg_search_exp3_fibercup.sh b/cc_scripts/vpg_search_exp3_fibercup.sh
deleted file mode 100755
index 3a5fbb0..0000000
--- a/cc_scripts/vpg_search_exp3_fibercup.sh
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=VPG_FiberCupSearchExp3
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/vpg_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --no_retrack \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/cc_scripts/vpg_search_exp3_ismrm2015.sh b/cc_scripts/vpg_search_exp3_ismrm2015.sh
deleted file mode 100755
index 86552ea..0000000
--- a/cc_scripts/vpg_search_exp3_ismrm2015.sh
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/bin/bash
-
-# Request resources --------------
-# Graham GPU node: 12 cores, 10G ram, 1 GPU
-#SBATCH --account $SALLOC_ACCOUNT
-#SBATCH --gres=gpu:1 # Number of GPUs (per node)
-#SBATCH --cpus-per-task=12 # Number of cores (not cpus)
-#SBATCH --mem=10000M # memory (per node)
-#SBATCH --time=06-23:00 # time (DD-HH:MM)
-#SBATCH --mail-type=BEGIN
-#SBATCH --mail-type=END
-#SBATCH --mail-type=FAIL
-#SBATCH --mail-type=REQUEUE
-#SBATCH --mail-type=ALL
-
-cd /home/$USER/projects/$SALLOC_ACCOUNT/$USER/TractoRL
-
-module load python/3.8
-pwd
-source .env/bin/activate
-module load httpproxy
-export DISPLAY=:0
-
-set -e # exit if any command fails
-
-# This should point to your dataset folder
-HOME=~
-WORK=$SLURM_TMPDIR
-DATASET_FOLDER=${HOME}/projects/$SALLOC_ACCOUNT/$USER/tracktolearn
-WORK_DATASET_FOLDER=${WORK}/tracktolearn
-mkdir -p $WORK_DATASET_FOLDER
-
-set -e # exit if any command fails
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${WORK_DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-# Move stuff from data folder to working folder
-mkdir -p $WORK_DATASET_FOLDER/datasets
-
-echo "Transfering data to working folder..."
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-# Data params
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n steps
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=2 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-# n_dirs=0
-
-EXPERIMENT=VPG_ISMRM2015SearchExp3
-
-ID=$(date +"%F-%H_%M_%S")
-
-rng_seed=1111
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/searchers/vpg_searcher.py \
- $DEST_FOLDER \
- "$EXPERIMENT" \
- "$ID" \
- "${dataset_file}" \
- "${SUBJECT_ID}" \
- "${validation_dataset_file}" \
- "${VALIDATION_SUBJECT_ID}" \
- "${reference_file}" \
- "${SCORING_DATA}" \
- --max_ep=${max_ep} \
- --log_interval=${log_interval} \
- --rng_seed=${rng_seed} \
- --npv=${npv} \
- --theta=${theta} \
- --prob=$prob \
- --no_retrack \
- --use_gpu \
- --use_comet \
- --run_tractometer
- # --render
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/"$rng_seed"
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
diff --git a/example_models/SAC_Auto_ISMRM2015_WM/hyperparameters.json b/example_models/SAC_Auto_ISMRM2015_WM/hyperparameters.json
deleted file mode 100644
index 46f841c..0000000
--- a/example_models/SAC_Auto_ISMRM2015_WM/hyperparameters.json
+++ /dev/null
@@ -1,39 +0,0 @@
-{
- "id": "2023-02-07-11_16_14",
- "experiment": "SAC_Auto_ISMRM2015TrainExp1",
- "max_ep": 1000,
- "log_interval": 50,
- "lr": 0.0001,
- "gamma": 0.5,
- "add_neighborhood": 0.75,
- "step_size": 0.75,
- "random_seed": 1111,
- "dataset_file": "/home/local/USHERBROOKE/thea1603/local_braindata/processedData/atheb/tracktolearn//raw/ismrm2015/ismrm2015.hdf5",
- "subject_id": "ismrm2015",
- "n_seeds_per_voxel": 2,
- "max_angle": 30,
- "min_length": 20,
- "max_length": 200,
- "cmc": false,
- "asymmetric": false,
- "experiment_path": "/home/local/USHERBROOKE/thea1603/local_braindata/processedData/atheb/tracktolearn//experiments/SAC_Auto_ISMRM2015TrainExp1/2023-02-07-11_16_14/1111",
- "use_gpu": true,
- "hidden_dims": "1024-1024",
- "last_episode": 0,
- "n_actor": 4096,
- "n_signal": 1,
- "n_dirs": 4,
- "interface_seeding": false,
- "no_retrack": false,
- "alignment_weighting": 1,
- "straightness_weighting": 0,
- "length_weighting": 0,
- "target_bonus_factor": 0,
- "exclude_penalty_factor": 0,
- "angle_penalty_factor": 0,
- "algorithm": "SACAuto",
- "alpha": 0.2,
- "input_size": 215,
- "action_size": 3,
- "voxel_size": "2.0"
-}
diff --git a/example_models/SAC_Auto_ISMRM2015_WM/last_model_state_actor.pth b/example_models/SAC_Auto_ISMRM2015_WM/last_model_state_actor.pth
deleted file mode 100644
index 7cfb06c..0000000
Binary files a/example_models/SAC_Auto_ISMRM2015_WM/last_model_state_actor.pth and /dev/null differ
diff --git a/example_models/SAC_Auto_ISMRM2015_WM/last_model_state_critic.pth b/example_models/SAC_Auto_ISMRM2015_WM/last_model_state_critic.pth
deleted file mode 100644
index d9dcd88..0000000
Binary files a/example_models/SAC_Auto_ISMRM2015_WM/last_model_state_critic.pth and /dev/null differ
diff --git a/example_models/SAC_Auto_ISMRM2015_interface/hyperparameters.json b/example_models/SAC_Auto_ISMRM2015_interface/hyperparameters.json
deleted file mode 100644
index 82cc99b..0000000
--- a/example_models/SAC_Auto_ISMRM2015_interface/hyperparameters.json
+++ /dev/null
@@ -1,39 +0,0 @@
-{
- "id": "2023-02-21-17_27_47",
- "experiment": "SAC_Auto_ISMRM2015TrainExp2",
- "max_ep": 1000,
- "log_interval": 50,
- "lr": 5e-05,
- "gamma": 0.75,
- "add_neighborhood": 0.75,
- "step_size": 0.75,
- "random_seed": 1111,
- "dataset_file": "/home/local/USHERBROOKE/thea1603/local_braindata/processedData/atheb/tracktolearn//raw/ismrm2015/ismrm2015.hdf5",
- "subject_id": "ismrm2015",
- "n_seeds_per_voxel": 20,
- "max_angle": 30,
- "min_length": 20,
- "max_length": 200,
- "cmc": false,
- "asymmetric": false,
- "experiment_path": "/home/local/USHERBROOKE/thea1603/local_braindata/processedData/atheb/tracktolearn//experiments/SAC_Auto_ISMRM2015TrainExp2/2023-02-21-17_27_47/1111",
- "use_gpu": true,
- "hidden_dims": "1024-1024",
- "last_episode": 0,
- "n_actor": 4096,
- "n_signal": 1,
- "n_dirs": 4,
- "interface_seeding": true,
- "no_retrack": false,
- "alignment_weighting": 1,
- "straightness_weighting": 0,
- "length_weighting": 0,
- "target_bonus_factor": 0,
- "exclude_penalty_factor": 0,
- "angle_penalty_factor": 0,
- "algorithm": "SACAuto",
- "alpha": 0.2,
- "input_size": 215,
- "action_size": 3,
- "voxel_size": "2.0"
-}
diff --git a/example_models/SAC_Auto_ISMRM2015_interface/last_model_state_actor.pth b/example_models/SAC_Auto_ISMRM2015_interface/last_model_state_actor.pth
deleted file mode 100644
index ddb0f9b..0000000
Binary files a/example_models/SAC_Auto_ISMRM2015_interface/last_model_state_actor.pth and /dev/null differ
diff --git a/extra-requirements.txt b/extra-requirements.txt
deleted file mode 100644
index de44f25..0000000
--- a/extra-requirements.txt
+++ /dev/null
@@ -1 +0,0 @@
-git+https://github.com/scilus/ismrm_2015_tractography_challenge_scoring.git
diff --git a/install.sh b/install.sh
new file mode 100755
index 0000000..944e9b0
--- /dev/null
+++ b/install.sh
@@ -0,0 +1,44 @@
+# Install required packages
+# Print OS information compatible with Linux, macOS and Windows
+echo "Platform:" $(uname )
+# Print python version
+echo "Python version: $(python --version)"
+# If platform has CUDA installed
+if [ -x "$(command -v nvidia-smi)" ]; then
+ # Print GPU name
+ echo "Found GPU: $(nvidia-smi --query-gpu=name --format=csv,noheader)"
+ # Print CUDA version from grepping nvidia-smi
+ echo "Found CUDA version: $(nvidia-smi | grep "CUDA Version" | awk '{print $9}')"
+ # Check CUDA version and format as cuXXX
+ FOUND_CUDA=$(nvidia-smi | grep "CUDA Version" | awk '{print $9}' | sed 's/\.//g')
+ if (( $FOUND_CUDA == 116 )); then
+ CUDA_VERSION="cu116"
+ elif (( $FOUND_CUDA >= 117 )); then
+ CUDA_VERSION="cu117"
+ else
+ CUDA_VERSION="cpu"
+ echo "CUDA version ${FOUND_CUDA} is not compatible. Installing PyTorch without CUDA support."
+ fi
+else
+ echo "No GPU or CUDA installation found. Installing PyTorch without CUDA support."
+ CUDA_VERSION="cpu"
+fi
+
+echo "Installing required packages."
+
+pip install Cython==0.29.* numpy==1.23.* packaging --quiet
+
+if [[ "$OSTYPE" == "darwin"* ]]; then
+ echo "Installing PyTorch 1.13.1"
+ pip install torch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 --quiet
+else
+ # Install pytorch
+ echo "Installing PyTorch 1.13.1+${CUDA_VERSION}"
+ # Install PyTorch with CUDA support
+ pip install torch==1.13.1+${CUDA_VERSION} torchvision==0.14.1+${CUDA_VERSION} torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/${CUDA_VERSION} --quiet
+fi
+
+# Install other required packages and modules
+echo "Finalizing installation ..."
+pip install -e . --quiet
+echo "Done !"
diff --git a/models/hyperparameters.json b/models/hyperparameters.json
new file mode 100644
index 0000000..e978900
--- /dev/null
+++ b/models/hyperparameters.json
@@ -0,0 +1,50 @@
+{
+ "name": "_2024-02-14-11_43_59",
+ "experiment": "SAC_Auto_InfernoTrainOracle",
+ "max_ep": 1000,
+ "log_interval": 50,
+ "lr": 0.0005,
+ "gamma": 0.95,
+ "add_neighborhood": 0.75,
+ "step_size": 0.75,
+ "random_seed": 1111,
+ "n_seeds_per_voxel": 2,
+ "max_angle": 30,
+ "max_angular_error": 90,
+ "min_length": 20.0,
+ "max_length": 200.0,
+ "binary_stopping_threshold": 0.1,
+ "cmc": false,
+ "asymmetric": false,
+ "sphere": null,
+ "action_type": "cartesian",
+ "hidden_dims": "1024-1024-1024",
+ "last_episode": 0,
+ "n_actor": 4096,
+ "n_signal": 1,
+ "n_dirs": 100,
+ "interface_seeding": true,
+ "no_retrack": false,
+ "prob": 1.0,
+ "noise": 0.0,
+ "alignment_weighting": 1.0,
+ "straightness_weighting": 0,
+ "length_weighting": 0,
+ "target_bonus_factor": 0,
+ "exclude_penalty_factor": 0,
+ "angle_penalty_factor": 0,
+ "coverage_weighting": 0.0,
+ "dense_oracle_bonus": 0,
+ "sparse_oracle_bonus": 10.0,
+ "oracle_checkpoint": "epoch_10_inferno.ckpt",
+ "oracle_stopping_criterion": true,
+ "oracle_filter": false,
+ "tractometer_weighting": 0,
+ "algorithm": "SACAuto",
+ "alpha": 0.2,
+ "batch_size": 4096,
+ "replay_size": 1000000.0,
+ "input_size": 615,
+ "action_size": 3,
+ "voxel_size": "0.9987237"
+}
diff --git a/example_models/SAC_Auto_ISMRM2015_interface/last_model_state_critic.pth b/models/last_model_state_actor.pth
similarity index 54%
rename from example_models/SAC_Auto_ISMRM2015_interface/last_model_state_critic.pth
rename to models/last_model_state_actor.pth
index 1716d3d..89add37 100644
Binary files a/example_models/SAC_Auto_ISMRM2015_interface/last_model_state_critic.pth and b/models/last_model_state_actor.pth differ
diff --git a/models/last_model_state_critic.pth b/models/last_model_state_critic.pth
new file mode 100644
index 0000000..d5d8c24
Binary files /dev/null and b/models/last_model_state_critic.pth differ
diff --git a/models/tractoracle.ckpt b/models/tractoracle.ckpt
new file mode 100644
index 0000000..2802391
Binary files /dev/null and b/models/tractoracle.ckpt differ
diff --git a/requirements.txt b/requirements.txt
index cbb7eb5..0b6899f 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,110 +1,7 @@
-apipkg
-appdirs
-attrs
-autopep8
-backcall
-bids-validator
-certifi
-chardet
-comet-git-pure
-comet-ml
-configobj
-coverage
-cycler
-Cython
-decorator
-dipy
-docopt
-entrypoints
-everett
-execnet
-flake8
-fury
-future
-gymnasium
-h5py
-idna
-imagecodecs
-imageio
-importlib-metadata
-ipython
-ipython-genutils
-jedi
-joblib
-jsonpatch
-jsonpointer
-jsonschema
-kiwisolver
-llvmlite
-matplotlib
-mccabe
-more-itertools
-netifaces
-networkx
-nibabel
-noise
-num2words
-numba
-numpy
-nvidia-ml-py3
-packaging
-pandas
-parso
-patsy
-pep8
-pexpect
-pickleshare
-Pillow
-pluggy
-pooch
-prompt-toolkit
-ptyprocess
-py
-pybids
-pycodestyle
-pydot
-pyflakes
-pygame
-Pygments
-pyparsing
-pyrsistent
-pytest
-python-dateutil
-pytz
-PyWavelets
-PyYAML
-pyzmq
-requests
-scikit-image
-scikit-learn
-scipy
-six
-SQLAlchemy
-tbb
-threadpoolctl
-tifffile
-tornado
-tqdm -traitlets -typing-extensions -umap-learn -urllib3 -vtk -wcwidth -websocket-client -wurlitzer -zipp +comet-ml==3.21.0 +h5py==3.7.0 +nibabel==4.0.2 +tqdm==4.64.1 -# torch requirements. change according to your version of CUDA -# Replace `cu111` for `cpu` for CPU usage or `cuXXX` to fit your -# local CUDA setup (101, 102, 112, etc..) - -# Since CUDA is not supported on MacOS, the default "CPU" version of -# torch is installed instead (notice the sys_platform condition") - --f https://download.pytorch.org/whl/torch_stable.html -torch==1.9.1+cu111;sys_platform!="darwin" -torch==1.9.1;sys_platform=="darwin" -torchvision==0.10.1+cu111;sys_platform!="darwin" -torchvision==0.10.1;sys_platform=="darwin" -torchaudio==0.9.1 +dwi-ml @ git+https://github.com/scil-vital/dwi_ml@b7ac03a5b4f2cab77bbcf000a763afa133786b04 +scilpy @ git+https://github.com/scilus/scilpy@1.5.0 diff --git a/scripts/a2c_train_exp1_fibercup.sh b/scripts/a2c_train_exp1_fibercup.sh deleted file mode 100755 index a8a3429..0000000 --- a/scripts/a2c_train_exp1_fibercup.sh +++ /dev/null @@ -1,80 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=1.0e-5 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.0 -lmbda=0.95 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=A2C_FiberCupTrainExp1 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/a2c_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --lmbda=${lmbda} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/a2c_train_exp1_ismrm2015.sh b/scripts/a2c_train_exp1_ismrm2015.sh deleted file mode 100755 index 
27fc144..0000000 --- a/scripts/a2c_train_exp1_ismrm2015.sh +++ /dev/null @@ -1,80 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00001 # Learning rate -gamma=0.95 # Gamma for reward discounting -action_std=0.0 -lmbda=0.95 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=2 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=A2C_ISMRM2015TrainExp1 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/a2c_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --lmbda=${lmbda} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/a2c_train_exp2_fibercup.sh b/scripts/a2c_train_exp2_fibercup.sh deleted file mode 100755 index 7e8e7a4..0000000 --- a/scripts/a2c_train_exp2_fibercup.sh +++ /dev/null @@ -1,81 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00001 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.0 -lmbda=0.95 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=100 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=A2C_FiberCupTrainExp2 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/a2c_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --lmbda=${lmbda} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --interface_seeding \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/a2c_train_exp2_ismrm2015.sh b/scripts/a2c_train_exp2_ismrm2015.sh deleted file mode 
100755 index 7933895..0000000 --- a/scripts/a2c_train_exp2_ismrm2015.sh +++ /dev/null @@ -1,81 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00001 # Learning rate -gamma=0.75 # Gamma for reward discounting -action_std=0.0 -lmbda=0.95 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=20 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=A2C_ISMRM2015TrainExp2 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/a2c_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --lmbda=${lmbda} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --interface_seeding \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/a2c_train_exp3_fibercup.sh b/scripts/a2c_train_exp3_fibercup.sh deleted file mode 100755 index 05e6c8c..0000000 --- a/scripts/a2c_train_exp3_fibercup.sh +++ /dev/null @@ -1,81 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00001 # Learning rate -gamma=0.75 # Gamma for reward discounting -action_std=0.0 -lmbda=0.95 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=A2C_FiberCupTrainExp3 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/a2c_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --lmbda=${lmbda} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --no_retrack \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/a2c_train_exp3_ismrm2015.sh b/scripts/a2c_train_exp3_ismrm2015.sh deleted file mode 100755 
index 8d0e64b..0000000 --- a/scripts/a2c_train_exp3_ismrm2015.sh +++ /dev/null @@ -1,81 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00001 # Learning rate -gamma=0.95 # Gamma for reward discounting -action_std=0.0 -lmbda=0.95 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=2 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=A2C_ISMRM2015TrainExp3 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/a2c_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --lmbda=${lmbda} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --no_retrack \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/acktr_train_exp1_fibercup.sh b/scripts/acktr_train_exp1_fibercup.sh deleted file mode 100755 index 548b97f..0000000 --- a/scripts/acktr_train_exp1_fibercup.sh +++ /dev/null @@ -1,82 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.01 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.0 -delta=0.001 -lmbda=0.95 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=ACKTR_FiberCupTrainExp1 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/acktr_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --delta=${delta} \ - --lmbda=${lmbda} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/acktr_train_exp1_ismrm2015.sh b/scripts/acktr_train_exp1_ismrm2015.sh 
deleted file mode 100755 index c310da4..0000000 --- a/scripts/acktr_train_exp1_ismrm2015.sh +++ /dev/null @@ -1,82 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.01 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.0 -delta=0.01 -lmbda=0.95 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=2 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=ACKTR_ISMRM2015TrainExp1 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/acktr_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --delta=${delta} \ - --lmbda=${lmbda} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/acktr_train_exp2_fibercup.sh b/scripts/acktr_train_exp2_fibercup.sh deleted file mode 100755 index 62e5b51..0000000 --- a/scripts/acktr_train_exp2_fibercup.sh +++ /dev/null @@ -1,83 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.01 # Learning rate -gamma=0.9 # Gamma for reward discounting -action_std=0.0 -delta=0.01 -lmbda=0.95 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=100 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=ACKTR_FiberCupTrainExp2 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/acktr_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --delta=${delta} \ - --lmbda=${lmbda} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --interface_seeding \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/acktr_train_exp2_ismrm2015.sh 
b/scripts/acktr_train_exp2_ismrm2015.sh deleted file mode 100755 index 5941d2f..0000000 --- a/scripts/acktr_train_exp2_ismrm2015.sh +++ /dev/null @@ -1,83 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.01 # Learning rate -gamma=0.85 # Gamma for reward discounting -action_std=0.0 -delta=0.001 -lmbda=0.95 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=20 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=ACKTR_ISMRM2015TrainExp2 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/acktr_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --delta=${delta} \ - --lmbda=${lmbda} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --interface_seeding \ - --use_comet \ - --use_gpu \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/acktr_train_exp3_fibercup.sh b/scripts/acktr_train_exp3_fibercup.sh deleted file mode 100755 index 99e3e13..0000000 --- a/scripts/acktr_train_exp3_fibercup.sh +++ /dev/null @@ -1,83 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.01 # Learning rate -gamma=0.75 # Gamma for reward discounting -action_std=0.0 -delta=0.01 -lmbda=0.95 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=ACKTR_FiberCupTrainExp3 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/acktr_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --delta=${delta} \ - --lmbda=${lmbda} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --no_retrack \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/acktr_train_exp3_ismrm2015.sh 
b/scripts/acktr_train_exp3_ismrm2015.sh deleted file mode 100755 index c8fb28c..0000000 --- a/scripts/acktr_train_exp3_ismrm2015.sh +++ /dev/null @@ -1,86 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.01 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.0 -delta=0.005 -lmbda=0.95 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=2 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=ACKTR_ISMRM2015TrainExp3 - -# ID=$(date +"%F-%H_%M_%S") -ID=$1 - -seeds=(1111 2222 3333 4444 5555) - -rng_seed=$2 - -# for rng_seed in "${seeds[@]}" -# do - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/trainers/acktr_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --delta=${delta} \ - --lmbda=${lmbda} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --no_retrack \ - --use_gpu \ - --use_comet \ - --run_tractometer - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -# done diff --git a/scripts/create_hdf5.sh b/scripts/create_hdf5.sh deleted file mode 100755 index 999cd56..0000000 --- a/scripts/create_hdf5.sh +++ /dev/null @@ -1,19 +0,0 @@ -# Experiment name -# Example command: ./scripts/rl_experiment.sh TD3Experiment - -set -e # exit if any command fails - -# This should point to your dataset folder -DATASET_FOLDER=${TRACK_TO_LEARN_DATA} - -# Should be relatively stable -SUBJECT_ID=fibercup - -# Create dataset -TrackToLearn/datasets/create_hdf5.py \ - $DATASET_FOLDER \ - ${SUBJECT_ID} \ - ${DATASET_FOLDER}/datasets/${SUBJECT_ID} \ - --name=${SUBJECT_ID}_mask \ - --fodfs \ - --add_masks diff --git a/scripts/ddpg_train_exp1_fibercup.sh b/scripts/ddpg_train_exp1_fibercup.sh deleted file mode 100755 index d397093..0000000 --- a/scripts/ddpg_train_exp1_fibercup.sh +++ /dev/null @@ -1,76 
+0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0005 # Learning rate -gamma=0.95 # Gamma for reward discounting -action_std=0.35 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=60 # Maximum angle for streamline curvature - -EXPERIMENT=DDPG_FiberCupTrainExp1 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/ddpg_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/ddpg_train_exp1_ismrm2015.sh b/scripts/ddpg_train_exp1_ismrm2015.sh deleted file mode 100755 index 124d6b5..0000000 --- a/scripts/ddpg_train_exp1_ismrm2015.sh +++ /dev/null @@ -1,76 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00001 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.35 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=2 # Seed per voxel -theta=60 # Maximum angle for streamline curvature - -EXPERIMENT=DDPG_ISMRM2015TrainExp1 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/ddpg_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/ddpg_train_exp2_fibercup.sh b/scripts/ddpg_train_exp2_fibercup.sh deleted file mode 100755 index a74fe3d..0000000 --- a/scripts/ddpg_train_exp2_fibercup.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/bin/bash - -set -e # 
exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0005 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.2 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=100 # Seed per voxel -theta=60 # Maximum angle for streamline curvature - -EXPERIMENT=DDPG_FiberCupTrainExp2 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/ddpg_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --interface_seeding \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/ddpg_train_exp2_ismrm2015.sh b/scripts/ddpg_train_exp2_ismrm2015.sh deleted file mode 100755 index 4f5b232..0000000 --- a/scripts/ddpg_train_exp2_ismrm2015.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00005 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.2 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=20 # Seed per voxel -theta=60 # Maximum angle for streamline curvature - -EXPERIMENT=DDPG_ISMRM2015TrainExp2 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/ddpg_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --interface_seeding \ - --use_comet \ - --use_gpu \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/ddpg_train_exp3_fibercup.sh b/scripts/ddpg_train_exp3_fibercup.sh deleted file mode 100755 index 3b08040..0000000 --- a/scripts/ddpg_train_exp3_fibercup.sh +++ /dev/null @@ -1,77 +0,0 @@ 
-#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0001 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.2 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=60 # Maximum angle for streamline curvature - -EXPERIMENT=DDPG_FiberCupTrainExp3 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/ddpg_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --no_retrack \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/ddpg_train_exp3_ismrm2015.sh b/scripts/ddpg_train_exp3_ismrm2015.sh deleted file mode 100755 index 7fc263c..0000000 --- a/scripts/ddpg_train_exp3_ismrm2015.sh +++ /dev/null @@ -1,77 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00001 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.2 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=2 # Seed per voxel -theta=60 # Maximum angle for streamline curvature - -EXPERIMENT=DDPG_ISMRM2015TrainExp3 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/ddpg_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --no_retrack \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/ppo_train_exp1_fibercup.sh b/scripts/ppo_train_exp1_fibercup.sh deleted file mode 100755 index 05b139b..0000000 --- a/scripts/ppo_train_exp1_fibercup.sh +++ /dev/null @@ -1,84 +0,0 @@ -#!/bin/bash 
- -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=5.0e-5 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.0 -eps_clip=0.05 -lmbda=0.95 -K_epochs=30 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=PPO_FiberCupTrainExp1 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/ppo_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --eps_clip=${eps_clip} \ - --lmbda=${lmbda} \ - --K_epochs=${K_epochs} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/ppo_train_exp1_ismrm2015.sh b/scripts/ppo_train_exp1_ismrm2015.sh deleted file mode 100755 index f92dce0..0000000 --- a/scripts/ppo_train_exp1_ismrm2015.sh +++ /dev/null @@ -1,84 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.005 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.0 -eps_clip=0.2 -lmbda=0.95 -K_epochs=30 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=2 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=PPO_ISMRM2015TrainExp1 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/ppo_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --eps_clip=${eps_clip} \ - --lmbda=${lmbda} \ - --K_epochs=${K_epochs} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/ppo_train_exp2_fibercup.sh 
b/scripts/ppo_train_exp2_fibercup.sh deleted file mode 100755 index 5deb16f..0000000 --- a/scripts/ppo_train_exp2_fibercup.sh +++ /dev/null @@ -1,86 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00001 # Learning rate -gamma=0.75 # Gamma for reward discounting -action_std=0.0 -eps_clip=0.1 -lmbda=0.95 -K_epochs=30 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=100 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=PPO_FiberCupTrainExp2 - -ID=2023-03-01-16_02_18 - -seeds=(1111 2222 3333 4444 5555) -rng_seed=$1 - -# for rng_seed in "${seeds[@]}" -# do - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/trainers/ppo_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --eps_clip=${eps_clip} \ - --lmbda=${lmbda} \ - --K_epochs=${K_epochs} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --interface_seeding \ - --use_comet \ - --use_gpu \ - --run_tractometer - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -# done diff --git a/scripts/ppo_train_exp2_ismrm2015.sh b/scripts/ppo_train_exp2_ismrm2015.sh deleted file mode 100755 index 9aa20bd..0000000 --- a/scripts/ppo_train_exp2_ismrm2015.sh +++ /dev/null @@ -1,86 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00005 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.0 -eps_clip=0.1 -lmbda=0.95 -K_epochs=30 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=20 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=PPO_ISMRM2015TrainExp2 - -ID=2023-03-06-12_22_46 - -seeds=(1111 2222 3333 4444 5555) -rng_seed=$1 - -# for rng_seed in "${seeds[@]}" -# do - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/trainers/ppo_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --eps_clip=${eps_clip} \ - --lmbda=${lmbda} \ - --K_epochs=${K_epochs} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --interface_seeding \ - --use_comet \ - --use_gpu \ - --run_tractometer - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -# done diff --git 
a/scripts/ppo_train_exp3_fibercup.sh b/scripts/ppo_train_exp3_fibercup.sh deleted file mode 100755 index 36f7899..0000000 --- a/scripts/ppo_train_exp3_fibercup.sh +++ /dev/null @@ -1,85 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00005 # Learning rate -gamma=0.85 # Gamma for reward discounting -action_std=0.0 -eps_clip=0.05 -lmbda=0.95 -K_epochs=30 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=PPO_FiberCupTrainExp3 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/ppo_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --eps_clip=${eps_clip} \ - --lmbda=${lmbda} \ - --K_epochs=${K_epochs} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --no_retrack \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/ppo_train_exp3_ismrm2015.sh b/scripts/ppo_train_exp3_ismrm2015.sh deleted file mode 100755 index dc54992..0000000 --- a/scripts/ppo_train_exp3_ismrm2015.sh +++ /dev/null @@ -1,85 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00005 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.0 -eps_clip=0.05 -lmbda=0.95 -K_epochs=30 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=2 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=PPO_ISMRM2015TrainExp3 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/ppo_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --eps_clip=${eps_clip} \ - --lmbda=${lmbda} \ - --K_epochs=${K_epochs} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --no_retrack \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git 
a/scripts/reward_pft.sh b/scripts/reward_pft.sh deleted file mode 100755 index 9efffcd..0000000 --- a/scripts/reward_pft.sh +++ /dev/null @@ -1,58 +0,0 @@ -seeds=(1111 2222 3333 4444 5555) - -STEP=0.75 -DATASET=fibercup -EXPERIMENT=PFT_FiberCup075 -NPV=33 -# -for seed in "${seeds[@]}"; -do - - OUT_FOLDER=${TRACK_TO_LEARN_DATA}/experiments/${EXPERIMENT}/${seed}/ - OUT=pft_${DATASET}_${NPV}_${STEP}_${seed}.trk - - scripts/reward_tractogram.sh $OUT_FOLDER/$OUT ${DATASET} - -done - -EXPERIMENT=PFT_FiberCupGM075 -NPV=33 - -for seed in "${seeds[@]}"; -do - - OUT_FOLDER=${TRACK_TO_LEARN_DATA}/experiments/${EXPERIMENT}/${seed}/ - OUT=pft_${DATASET}_${NPV}_${STEP}_${seed}.trk - - scripts/reward_tractogram.sh $OUT_FOLDER/$OUT ${DATASET} - -done - -DATASET=ismrm2015 -EXPERIMENT=PFT_ISMRM2015075 -BASE=${TRACK_TO_LEARN_DATA}/datasets/$DATASET -NPV=7 - -for seed in "${seeds[@]}"; -do - - OUT_FOLDER=${TRACK_TO_LEARN_DATA}/experiments/${EXPERIMENT}/${seed}/ - OUT=pft_${DATASET}_${NPV}_${STEP}_${seed}.trk - - scripts/reward_tractogram.sh $OUT_FOLDER/$OUT ${DATASET} - -done - -DATASET=ismrm2015 -EXPERIMENT=PFT_ISMRM2015GM075 - -for seed in "${seeds[@]}"; -do - - OUT_FOLDER=${TRACK_TO_LEARN_DATA}/experiments/${EXPERIMENT}/${seed}/ - OUT=pft_${DATASET}_${NPV}_${STEP}_${seed}.trk - - scripts/reward_tractogram.sh $OUT_FOLDER/$OUT ${DATASET} - -done - diff --git a/scripts/reward_tractogram.py b/scripts/reward_tractogram.py deleted file mode 100755 index 5e9c05d..0000000 --- a/scripts/reward_tractogram.py +++ /dev/null @@ -1,151 +0,0 @@ -import argparse -import numpy as np -import torch - -from dipy.io.streamline import load_tractogram -from tqdm import tqdm - -from TrackToLearn.environments.reward import Reward -from TrackToLearn.environments.tracker import Tracker - - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - - -def compute_reward(tractogram_file, reward_function): - - def reward_streamline(reward_function, streamlines): - reward = 0 - lens = 
np.asarray([len(s) for s in streamlines]) - max_lens = max(lens) - not_dones = np.asarray([True] * len(lens)) - for i in tqdm(range(2, max_lens)): - streamlines = streamlines[not_dones] - new_dones = np.asarray([i] * len(lens)) == (np.asarray(lens) - 1) - # TODO: Verif how reward is calculated - # TODO: actually retrack streamlines ? - reward += np.sum(reward_function(np.asarray( - [s[:i] for s in streamlines]), ~not_dones)) - lens = lens[~new_dones] - not_dones = ~new_dones - - return reward - - tractogram = load_tractogram( - tractogram_file, 'same', bbox_valid_check=False, - trk_header_check=False) - tractogram.to_vox() - reward = reward_streamline( - reward_function, tractogram.streamlines) / len(tractogram.streamlines) - - return np.sum(reward) - - -def get_reward_function( - env_dto: dict, - device -): - - env_dto['device'] = device - env_dto['compute_reward'] = True - env_dto['rng'] = np.random.RandomState(env_dto['rng_seed']) - - # Forward environment - env = Tracker.from_dataset( - env_dto, - 'training') - - reward_function = Reward( - peaks=env.peaks, - exclude=env.exclude_mask, - target=env.target_mask, - max_nb_steps=env.max_nb_steps, - theta=env.theta, - min_nb_steps=env.min_nb_steps, - asymmetric=env.asymmetric, - alignment_weighting=env.alignment_weighting, - straightness_weighting=env.straightness_weighting, - length_weighting=env.length_weighting, - target_bonus_factor=env.target_bonus_factor, - exclude_penalty_factor=env.exclude_penalty_factor, - angle_penalty_factor=env.angle_penalty_factor, - affine_vox2mask=env.affine_vox2mask, - scoring_data=None, # TODO: Add scoring back - reference=env.reference) - - return reward_function - - -def buildArgsParser(): - p = argparse.ArgumentParser(description="", - formatter_class=argparse.RawTextHelpFormatter) - - p.add_argument('tractograms', metavar='TRACTS', type=str, nargs='+', - help='Tractogram file') - p.add_argument('dataset_file', - help='Path to preprocessed dataset file (.hdf5)') - 
p.add_argument('subject_id', - help='Subject id to fetch from the dataset file') - p.add_argument('--n_signal', default=1, type=int, - help='Signal at the last n positions') - p.add_argument('--n_dirs', default=4, type=int, - help='Last n steps taken') - p.add_argument('--add_neighborhood', default=0.75, type=float, - help='Add neighborhood to model input') - p.add_argument('--npv', default=2, type=int, - help='Number of random seeds per seeding mask voxel') - p.add_argument('--theta', default=30, type=int, - help='Max angle for tracking') - p.add_argument('--min_length', default=20, type=int, - help='Minimum length for tracts') - p.add_argument('--max_length', default=200, type=int, - help='Maximum length for tracts') - p.add_argument('--alignment_weighting', default=1, type=float, - help='Alignment weighting for reward') - p.add_argument('--straightness_weighting', default=0, type=float, - help='Straightness weighting for reward') - p.add_argument('--length_weighting', default=0, type=float, - help='Length weighting for reward') - p.add_argument('--target_bonus_factor', default=0, type=float, - help='Bonus for streamlines reaching the target mask') - p.add_argument('--exclude_penalty_factor', default=0, type=float, - help='Penalty for streamlines reaching the exclusion ' - 'mask') - p.add_argument('--angle_penalty_factor', default=0, type=float, - help='Penalty for looping or too-curvy streamlines') - p.add_argument('--step_size', default=0.75, type=float, - help='Step size for tracking') - p.add_argument('--interface_seeding', action='store_true', - help='If set, don\'t track "backwards"') - p.add_argument('--no_retrack', action='store_true', - help='If set, don\'t retrack backwards') - p.add_argument('--cmc', action='store_true', - help='If set, use Continuous Mask Criteria to stop' - 'tracking.') - p.add_argument('--asymmetric', action='store_true', - help='If set, presume asymmetric fODFs when ' - 'computing reward.') - p.add_argument('--rng_seed', 
default=1337, type=int, - help='Seed to fix general randomness') - - return p - - -def main(): - p = buildArgsParser() - args = p.parse_args() - tractograms = args.tractograms - reward_function = get_reward_function( - vars(args), device) - - returns = [] - for tractogram in tractograms: - print('Computing return for', tractogram) - returns.append(compute_reward(tractogram, reward_function)) - print(returns[-1]) - - print(np.mean(returns), np.std(returns)) - - -if __name__ == '__main__': - main() diff --git a/scripts/reward_tractogram.sh b/scripts/reward_tractogram.sh deleted file mode 100755 index 79d2562..0000000 --- a/scripts/reward_tractogram.sh +++ /dev/null @@ -1,43 +0,0 @@ -SUBJECT_ID=${@: -1} -DATASET_FOLDER=${TRACK_TO_LEARN_DATA} -dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 - -# Env parameters -npv=2 # Seed per voxel -add_neighborhood=0.75 # Neighborhood to add to state input -theta=30 # Maximum angle for streamline curvature -min_length=20 # Minimum streamline length -max_length=200 # Maximum streamline length -step_size=0.75 # Step size (in mm) -n_signal=1 # Use last n input -n_dirs=4 # Also input last n directions taken - -# Reward params -alignment_weighting=1. # Reward weighting for alignment -straightness_weighting=0. 
# Reward weighting for sinuosity -length_weighting=0.0 # Reward weighting for length -target_bonus_factor=0.0 # Reward penalty/bonus for end-of-trajectory actions -exclude_penalty_factor=0.0 # Reward penalty/bonus for end-of-trajectory actions -angle_penalty_factor=0.0 # Reward penalty/bonus for end-of-trajectory actions - -rng_seed=1111 - -python ./scripts/reward_tractogram.py ${@:1:$#-1} \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --min_length=${min_length} \ - --max_length=${max_length} \ - --step_size=${step_size} \ - --add_neighborhood=${add_neighborhood} \ - --alignment_weighting=${alignment_weighting} \ - --straightness_weighting=${straightness_weighting} \ - --length_weighting=${length_weighting} \ - --target_bonus_factor=${target_bonus_factor} \ - --exclude_penalty_factor=${exclude_penalty_factor} \ - --angle_penalty_factor=${angle_penalty_factor} \ - --n_signal=${n_signal} \ - --n_dirs=${n_dirs} - diff --git a/scripts/rl_test.sh b/scripts/rl_test.sh deleted file mode 100755 index d40ca76..0000000 --- a/scripts/rl_test.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/usr/bin/env bash - -set -e - -# This should point to your dataset folder -DATASET_FOLDER=${TRACK_TO_LEARN_DATA} - -# Should be relatively stable -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - -# Data params -dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - -n_actor=50000 -npv=33 -min_length=20 -max_length=200 - -EXPERIMENT=$1 -ID=$2 - -validstds=(0.0 0.1 0.2 0.3) -subjectids=(fibercup) -seeds=(1111 2222 3333 4444 5555) - -for SEED in "${seeds[@]}" -do - for SUBJECT_ID in "${subjectids[@]}" - do - for prob in "${validstds[@]}" - do - EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments - SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - 
DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$SEED" - - dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 - reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - - echo $DEST_FOLDER/model/hyperparameters.json - python ttl_validation.py \ - "$DEST_FOLDER" \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - $DEST_FOLDER/model \ - $DEST_FOLDER/model/hyperparameters.json \ - --prob="${prob}" \ - --npv="${npv}" \ - --n_actor="${n_actor}" \ - --min_length="$min_length" \ - --max_length="$max_length" \ - --use_gpu \ - --fa_map="$DATASET_FOLDER"/datasets/${SUBJECT_ID}/dti/"${SUBJECT_ID}"_fa.nii.gz \ - --remove_invalid_streamlines - - validation_folder=$DEST_FOLDER/scoring_"${prob}"_"${SUBJECT_ID}"_${npv}_wtf - - mkdir -p $validation_folder - - mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ - - python scripts/score_tractogram.py $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ - "$SCORING_DATA" \ - $validation_folder \ - --save_full_vc \ - --save_full_ic \ - --save_full_nc \ - --compute_ic_ib \ - --save_ib \ - --save_vb -f -v - done - done -done diff --git a/scripts/rl_test_fibercup.sh b/scripts/rl_test_fibercup.sh deleted file mode 100755 index 87b66af..0000000 --- a/scripts/rl_test_fibercup.sh +++ /dev/null @@ -1,76 +0,0 @@ -#!/usr/bin/env bash - -# This should point to your dataset folder -DATASET_FOLDER=${TRACK_TO_LEARN_DATA} - -# Should be relatively stable -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - -# Data params -dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - -n_actor=50000 -npv=33 -min_length=20 -max_length=200 - -EXPERIMENT=$1 -ID=$2 - -validstds=(0.0 0.1) -subjectids=(fibercup 
fibercup_flipped) -seeds=(1111 2222 3333 4444 5555) - -for SEED in "${seeds[@]}" -do - for SUBJECT_ID in "${subjectids[@]}" - do - for prob in "${validstds[@]}" - do - EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments - SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$SEED" - - dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 - reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - - echo $DEST_FOLDER/model/hyperparameters.json - python ttl_validation.py \ - "$DEST_FOLDER" \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${reference_file}" \ - $DEST_FOLDER/model \ - $DEST_FOLDER/model/hyperparameters.json \ - --prob="${prob}" \ - --npv="${npv}" \ - --n_actor="${n_actor}" \ - --min_length="$min_length" \ - --max_length="$max_length" \ - --use_gpu \ - --fa_map="$DATASET_FOLDER"/datasets/${SUBJECT_ID}/dti/"${SUBJECT_ID}"_fa.nii.gz \ - --remove_invalid_streamlines - - validation_folder=$DEST_FOLDER/scoring_"${prob}"_"${SUBJECT_ID}"_${npv} - - mkdir -p $validation_folder - - mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ - - python scripts/score_tractogram.py \ - $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ - "$SCORING_DATA" \ - $validation_folder \ - --compute_ic_ib \ - --save_full_vc \ - --save_full_ic \ - --save_full_nc \ - --save_ib \ - --save_vb -f -v - done - done -done diff --git a/scripts/rl_test_fibercup_cmc_asym.sh b/scripts/rl_test_fibercup_cmc_asym.sh deleted file mode 100755 index 67105ab..0000000 --- a/scripts/rl_test_fibercup_cmc_asym.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/usr/bin/env bash - -# This should point to your dataset folder -DATASET_FOLDER=${TRACK_TO_LEARN_DATA} - -# Should be relatively stable -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - -# Data 
params -dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - -n_actor=50000 -npv=300 -min_length=20 -max_length=200 - -EXPERIMENT=$1 -ID=$2 - -validstds=(0.0 0.1) -subjectids=(fibercup_asym) -seeds=(1111 2222 3333 4444 5555) - -for SEED in "${seeds[@]}" -do - for SUBJECT_ID in "${subjectids[@]}" - do - for prob in "${validstds[@]}" - do - EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments - SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$SEED" - - dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 - reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - - echo $DEST_FOLDER/model/hyperparameters.json - python ttl_validation.py \ - "$DEST_FOLDER" \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${reference_file}" \ - $DEST_FOLDER/model \ - $DEST_FOLDER/model/hyperparameters.json \ - --prob="${prob}" \ - --npv="${npv}" \ - --n_actor="${n_actor}" \ - --min_length="$min_length" \ - --max_length="$max_length" \ - --fa_map="$DATASET_FOLDER"/datasets/${SUBJECT_ID}/dti/"${SUBJECT_ID}"_fa.nii.gz \ - --use_gpu \ - --cmc \ - --asym \ - --remove_invalid_streamlines - - validation_folder=$DEST_FOLDER/scoring_"${prob}"_"${SUBJECT_ID}"_${npv} - - mkdir -p $validation_folder - - mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ - - python scripts/score_tractogram.py \ - $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ - "$SCORING_DATA" \ - $validation_folder \ - --compute_ic_ib \ - --save_full_vc \ - --save_full_ic \ - --save_full_nc \ - --save_ib \ - --save_vb -f -v - done - done -done diff --git a/scripts/rl_test_fibercup_exp2.sh b/scripts/rl_test_fibercup_exp2.sh deleted file mode 100755 index 1f208a1..0000000 --- a/scripts/rl_test_fibercup_exp2.sh +++ 
/dev/null @@ -1,79 +0,0 @@ -#!/usr/bin/env bash - -set -e - -# This should point to your dataset folder -DATASET_FOLDER=${TRACK_TO_LEARN_DATA} - -# Should be relatively stable -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - -# Data params -dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - -n_actor=50000 -npv=300 -min_length=20 -max_length=200 - -EXPERIMENT=$1 -ID=$2 - -validstds=(0.0 0.1 0.2) -subjectids=(fibercup fibercup_flipped) -seeds=(1111 2222 3333 4444 5555) - -for SEED in "${seeds[@]}" -do - for SUBJECT_ID in "${subjectids[@]}" - do - for prob in "${validstds[@]}" - do - EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments - SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$SEED" - - dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 - reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - - echo $DEST_FOLDER/model/hyperparameters.json - python ttl_validation.py \ - "$DEST_FOLDER" \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${reference_file}" \ - $DEST_FOLDER/model \ - $DEST_FOLDER/model/hyperparameters.json \ - --prob="${prob}" \ - --npv="${npv}" \ - --n_actor="${n_actor}" \ - --min_length="$min_length" \ - --max_length="$max_length" \ - --interface_seeding \ - --use_gpu \ - --fa_map="$DATASET_FOLDER"/datasets/${SUBJECT_ID}/dti/"${SUBJECT_ID}"_fa.nii.gz \ - --remove_invalid_streamlines - - validation_folder=$DEST_FOLDER/test_scoring_"${prob}"_"${SUBJECT_ID}"_${npv} - - mkdir -p $validation_folder - - mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ - - python scripts/score_tractogram.py \ - $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ - "$SCORING_DATA" \ - 
$validation_folder \ - --compute_ic_ib \ - --save_full_vc \ - --save_full_ic \ - --save_full_nc \ - --save_ib \ - --save_vb -f -v - done - done -done diff --git a/scripts/rl_test_fibercup_exp3.sh b/scripts/rl_test_fibercup_exp3.sh deleted file mode 100755 index f580001..0000000 --- a/scripts/rl_test_fibercup_exp3.sh +++ /dev/null @@ -1,79 +0,0 @@ -#!/usr/bin/env bash - -set -e - -# This should point to your dataset folder -DATASET_FOLDER=${TRACK_TO_LEARN_DATA} - -# Should be relatively stable -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - -# Data params -dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - -n_actor=50000 -npv=33 -min_length=20 -max_length=200 - -EXPERIMENT=$1 -ID=$2 - -validstds=(0.0 0.1) -subjectids=(fibercup fibercup_flipped) -seeds=(1111 2222 3333 4444 5555) - -for SEED in "${seeds[@]}" -do - for SUBJECT_ID in "${subjectids[@]}" - do - for prob in "${validstds[@]}" - do - EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments - SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$SEED" - - dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 - reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - - echo $DEST_FOLDER/model/hyperparameters.json - python ttl_validation.py \ - "$DEST_FOLDER" \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${reference_file}" \ - $DEST_FOLDER/model \ - $DEST_FOLDER/model/hyperparameters.json \ - --prob="${prob}" \ - --npv="${npv}" \ - --n_actor="${n_actor}" \ - --min_length="$min_length" \ - --max_length="$max_length" \ - --no_retrack \ - --use_gpu \ - --fa_map="$DATASET_FOLDER"/datasets/${SUBJECT_ID}/dti/"${SUBJECT_ID}"_fa.nii.gz \ - --remove_invalid_streamlines - - 
validation_folder=$DEST_FOLDER/scoring_"${prob}"_"${SUBJECT_ID}"_${npv} - - mkdir -p $validation_folder - - mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ - - python scripts/score_tractogram.py \ - $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ - "$SCORING_DATA" \ - $validation_folder \ - --compute_ic_ib \ - --save_full_vc \ - --save_full_ic \ - --save_full_nc \ - --save_ib \ - --save_vb -f -v - done - done -done diff --git a/scripts/rl_test_fibercup_nowm.sh b/scripts/rl_test_fibercup_nowm.sh deleted file mode 100755 index b8e37ed..0000000 --- a/scripts/rl_test_fibercup_nowm.sh +++ /dev/null @@ -1,76 +0,0 @@ -#!/usr/bin/env bash - -# This should point to your dataset folder -DATASET_FOLDER=${TRACK_TO_LEARN_DATA} - -# Should be relatively stable -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - -# Data params -dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - -n_actor=50000 -npv=33 -min_length=20 -max_length=200 - -EXPERIMENT=$1 -ID=$2 - -validstds=(0.0 0.1) -subjectids=(fibercup_nowm fibercup_flipped_nowm) -seeds=(1111 2222 3333 4444 5555) - -for SEED in "${seeds[@]}" -do - for SUBJECT_ID in "${subjectids[@]}" - do - for prob in "${validstds[@]}" - do - EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments - SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$SEED" - - dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 - reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - - echo $DEST_FOLDER/model/hyperparameters.json - python ttl_validation.py \ - "$DEST_FOLDER" \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${reference_file}" \ - $DEST_FOLDER/model \ - 
$DEST_FOLDER/model/hyperparameters.json \ - --prob="${prob}" \ - --npv="${npv}" \ - --n_actor="${n_actor}" \ - --min_length="$min_length" \ - --max_length="$max_length" \ - --use_gpu \ - --fa_map="$DATASET_FOLDER"/datasets/${SUBJECT_ID}/dti/"${SUBJECT_ID}"_fa.nii.gz \ - --remove_invalid_streamlines - - validation_folder=$DEST_FOLDER/scoring_"${prob}"_"${SUBJECT_ID}"_${npv} - - mkdir -p $validation_folder - - mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ - - python scripts/score_tractogram.py \ - $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ - "$SCORING_DATA" \ - $validation_folder \ - --compute_ic_ib \ - --save_full_vc \ - --save_full_ic \ - --save_full_nc \ - --save_ib \ - --save_vb -f -v - done - done -done diff --git a/scripts/rl_test_fibercup_raw.sh b/scripts/rl_test_fibercup_raw.sh deleted file mode 100755 index 3e01922..0000000 --- a/scripts/rl_test_fibercup_raw.sh +++ /dev/null @@ -1,76 +0,0 @@ -#!/usr/bin/env bash - -# This should point to your dataset folder -DATASET_FOLDER=${TRACK_TO_LEARN_DATA} - -# Should be relatively stable -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - -# Data params -dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - -n_actor=50000 -npv=33 -min_length=20 -max_length=200 - -EXPERIMENT=$1 -ID=$2 - -validstds=(0.0 0.1) -subjectids=(fibercup_raw fibercup_flipped_raw) -seeds=(1111 2222 3333 4444 5555) - -for SEED in "${seeds[@]}" -do - for SUBJECT_ID in "${subjectids[@]}" - do - for prob in "${validstds[@]}" - do - EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments - SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$SEED" - - dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 - 
reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - - echo $DEST_FOLDER/model/hyperparameters.json - python ttl_validation.py \ - "$DEST_FOLDER" \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${reference_file}" \ - $DEST_FOLDER/model \ - $DEST_FOLDER/model/hyperparameters.json \ - --prob="${prob}" \ - --npv="${npv}" \ - --n_actor="${n_actor}" \ - --min_length="$min_length" \ - --max_length="$max_length" \ - --use_gpu \ - --fa_map="$DATASET_FOLDER"/datasets/${SUBJECT_ID}/dti/"${SUBJECT_ID}"_fa.nii.gz \ - --remove_invalid_streamlines - - validation_folder=$DEST_FOLDER/scoring_"${prob}"_"${SUBJECT_ID}"_${npv} - - mkdir -p $validation_folder - - mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ - - python scripts/score_tractogram.py \ - $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ - "$SCORING_DATA" \ - $validation_folder \ - --compute_ic_ib \ - --save_full_vc \ - --save_full_ic \ - --save_full_nc \ - --save_ib \ - --save_vb -f -v - done - done -done diff --git a/scripts/rl_test_hcp.sh b/scripts/rl_test_hcp.sh deleted file mode 100755 index fdd9112..0000000 --- a/scripts/rl_test_hcp.sh +++ /dev/null @@ -1,77 +0,0 @@ -#!/usr/bin/env bash - -set -e - -# This should point to your dataset folder -DATASET_FOLDER=${TRACK_TO_LEARN_DATA} - -# Should be relatively stable -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - -# Data params -dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - -n_actor=10000 -npv=2 -min_length=20 -max_length=200 - -EXPERIMENT=$1 -ID=$2 - -validstds=(0.0 0.1 0.2) -subjectids=(hcp_100206) -seeds=(1111 2222 3333 4444 5555) - -for SEED in "${seeds[@]}" -do - for SUBJECT_ID in "${subjectids[@]}" - do - for prob in "${validstds[@]}" - do - 
EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments - SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$SEED" - - dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 - reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - - echo $DEST_FOLDER/model/hyperparameters.json - python ttl_validation.py \ - "$DEST_FOLDER" \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${reference_file}" \ - $DEST_FOLDER/model \ - $DEST_FOLDER/model/hyperparameters.json \ - --prob="${prob}" \ - --npv="${npv}" \ - --n_actor="${n_actor}" \ - --min_length="$min_length" \ - --max_length="$max_length" \ - --use_gpu \ - --fa_map="$DATASET_FOLDER"/datasets/${SUBJECT_ID}/dti/"${SUBJECT_ID}"_fa.nii.gz \ - --remove_invalid_streamlines - - validation_folder=$DEST_FOLDER/scoring_"${prob}"_"${SUBJECT_ID}"_${npv} - - mkdir -p $validation_folder - - mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ - - scil_recognize_multi_bundles.py \ - $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ - $SCORING_DATA/config/config_ind.json \ - $SCORING_DATA/atlas/*/ \ - $SCORING_DATA/output0GenericAffine.mat \ - --out $validation_folder/voting_results \ - -f --log_level DEBUG --multi_parameters 27 \ - --minimal_vote 0.4 --tractogram_clustering 8 10 12 \ - --processes 4 --seeds 0 - done - done -done diff --git a/scripts/rl_test_hcp_exp2.sh b/scripts/rl_test_hcp_exp2.sh deleted file mode 100755 index de319b6..0000000 --- a/scripts/rl_test_hcp_exp2.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/usr/bin/env bash - -set -e - -# This should point to your dataset folder -DATASET_FOLDER=${TRACK_TO_LEARN_DATA} - -# Should be relatively stable -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - -# Data params 
-dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - -n_actor=10000 -npv=10 -min_length=20 -max_length=200 - -EXPERIMENT=$1 -ID=$2 - -validstds=(0.0 0.1 0.2) -subjectids=(hcp_100206) -seeds=(1111 2222 3333 4444 5555) - -for SEED in "${seeds[@]}" -do - for SUBJECT_ID in "${subjectids[@]}" - do - for prob in "${validstds[@]}" - do - EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments - SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$SEED" - - dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 - reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - - echo $DEST_FOLDER/model/hyperparameters.json - python ttl_validation.py \ - "$DEST_FOLDER" \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${reference_file}" \ - $DEST_FOLDER/model \ - $DEST_FOLDER/model/hyperparameters.json \ - --prob="${prob}" \ - --npv="${npv}" \ - --n_actor="${n_actor}" \ - --min_length="$min_length" \ - --max_length="$max_length" \ - --interface_seeding \ - --use_gpu \ - --fa_map="$DATASET_FOLDER"/datasets/${SUBJECT_ID}/dti/"${SUBJECT_ID}"_fa.nii.gz \ - --remove_invalid_streamlines - - validation_folder=$DEST_FOLDER/scoring_"${prob}"_"${SUBJECT_ID}"_${npv} - - mkdir -p $validation_folder - - mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ - - scil_recognize_multi_bundles.py \ - $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ - $SCORING_DATA/config/config_ind.json \ - $SCORING_DATA/atlas/*/ \ - $SCORING_DATA/output0GenericAffine.mat \ - --out $validation_folder/voting_results \ - -f --log_level DEBUG --multi_parameters 27 \ - --minimal_vote 0.4 --tractogram_clustering 8 10 12 \ - --processes 4 --seeds 0 - done - done -done diff --git a/scripts/rl_test_hcp_exp3.sh 
b/scripts/rl_test_hcp_exp3.sh deleted file mode 100755 index 335003b..0000000 --- a/scripts/rl_test_hcp_exp3.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/usr/bin/env bash - -set -e - -# This should point to your dataset folder -DATASET_FOLDER=${TRACK_TO_LEARN_DATA} - -# Should be relatively stable -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - -# Data params -dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - -n_actor=10000 -npv=2 -min_length=20 -max_length=200 - -EXPERIMENT=$1 -ID=$2 - -validstds=(0.0 0.1) -subjectids=(hcp_100206) -seeds=(1111 2222 3333 4444 5555) - -for SEED in "${seeds[@]}" -do - for SUBJECT_ID in "${subjectids[@]}" - do - for prob in "${validstds[@]}" - do - EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments - SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$SEED" - - dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 - reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - - echo $DEST_FOLDER/model/hyperparameters.json - python ttl_validation.py \ - "$DEST_FOLDER" \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${reference_file}" \ - $DEST_FOLDER/model \ - $DEST_FOLDER/model/hyperparameters.json \ - --prob="${prob}" \ - --npv="${npv}" \ - --n_actor="${n_actor}" \ - --min_length="$min_length" \ - --max_length="$max_length" \ - --no_retrack \ - --use_gpu \ - --fa_map="$DATASET_FOLDER"/datasets/${SUBJECT_ID}/dti/"${SUBJECT_ID}"_fa.nii.gz \ - --remove_invalid_streamlines - - validation_folder=$DEST_FOLDER/scoring_"${prob}"_"${SUBJECT_ID}"_${npv} - - mkdir -p $validation_folder - - mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ - - scil_recognize_multi_bundles.py \ - 
$validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ - $SCORING_DATA/config/config_ind.json \ - $SCORING_DATA/atlas/*/ \ - $SCORING_DATA/output0GenericAffine.mat \ - --out $validation_folder/voting_results \ - -f --log_level DEBUG --multi_parameters 27 \ - --minimal_vote 0.4 --tractogram_clustering 8 10 12 \ - --processes 4 --seeds 0 - done - done -done diff --git a/scripts/rl_test_hcp_nowm.sh b/scripts/rl_test_hcp_nowm.sh deleted file mode 100755 index 1ce8b4e..0000000 --- a/scripts/rl_test_hcp_nowm.sh +++ /dev/null @@ -1,77 +0,0 @@ -#!/usr/bin/env bash - -set -e - -# This should point to your dataset folder -DATASET_FOLDER=${TRACK_TO_LEARN_DATA} - -# Should be relatively stable -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - -# Data params -dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - -n_actor=10000 -npv=2 -min_length=20 -max_length=200 - -EXPERIMENT=$1 -ID=$2 - -validstds=(0.0 0.1) -subjectids=(hcp_100206_nowm) -seeds=(1111 2222 3333 4444 5555) - -for SEED in "${seeds[@]}" -do - for SUBJECT_ID in "${subjectids[@]}" - do - for prob in "${validstds[@]}" - do - EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments - SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$SEED" - - dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 - reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - - echo $DEST_FOLDER/model/hyperparameters.json - python ttl_validation.py \ - "$DEST_FOLDER" \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${reference_file}" \ - $DEST_FOLDER/model \ - $DEST_FOLDER/model/hyperparameters.json \ - --prob="${prob}" \ - --npv="${npv}" \ - --n_actor="${n_actor}" \ - --min_length="$min_length" \ - 
--max_length="$max_length" \ - --use_gpu \ - --fa_map="$DATASET_FOLDER"/datasets/${SUBJECT_ID}/dti/"${SUBJECT_ID}"_fa.nii.gz \ - --remove_invalid_streamlines - - validation_folder=$DEST_FOLDER/scoring_"${prob}"_"${SUBJECT_ID}"_${npv} - - mkdir -p $validation_folder - - mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ - - scil_recognize_multi_bundles.py \ - $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ - $SCORING_DATA/config/config_ind.json \ - $SCORING_DATA/atlas/*/ \ - $SCORING_DATA/output0GenericAffine.mat \ - --out $validation_folder/voting_results \ - -f --log_level DEBUG --multi_parameters 27 \ - --minimal_vote 0.4 --tractogram_clustering 8 10 12 \ - --processes 4 --seeds 0 - done - done -done diff --git a/scripts/rl_test_hcp_raw.sh b/scripts/rl_test_hcp_raw.sh deleted file mode 100755 index 573a74c..0000000 --- a/scripts/rl_test_hcp_raw.sh +++ /dev/null @@ -1,77 +0,0 @@ -#!/usr/bin/env bash - -set -e - -# This should point to your dataset folder -DATASET_FOLDER=${TRACK_TO_LEARN_DATA} - -# Should be relatively stable -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - -# Data params -dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - -n_actor=10000 -npv=2 -min_length=20 -max_length=200 - -EXPERIMENT=$1 -ID=$2 - -validstds=(0.0 0.1) -subjectids=(hcp_100206_raw) -seeds=(1111 2222 3333 4444 5555) - -for SEED in "${seeds[@]}" -do - for SUBJECT_ID in "${subjectids[@]}" - do - for prob in "${validstds[@]}" - do - EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments - SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$SEED" - - dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 - 
reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - - echo $DEST_FOLDER/model/hyperparameters.json - python ttl_validation.py \ - "$DEST_FOLDER" \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${reference_file}" \ - $DEST_FOLDER/model \ - $DEST_FOLDER/model/hyperparameters.json \ - --prob="${prob}" \ - --npv="${npv}" \ - --n_actor="${n_actor}" \ - --min_length="$min_length" \ - --max_length="$max_length" \ - --use_gpu \ - --fa_map="$DATASET_FOLDER"/datasets/${SUBJECT_ID}/dti/"${SUBJECT_ID}"_fa.nii.gz \ - --remove_invalid_streamlines - - validation_folder=$DEST_FOLDER/scoring_"${prob}"_"${SUBJECT_ID}"_${npv} - - mkdir -p $validation_folder - - mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ - - scil_recognize_multi_bundles.py \ - $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ - $SCORING_DATA/config/config_ind.json \ - $SCORING_DATA/atlas/*/ \ - $SCORING_DATA/output0GenericAffine.mat \ - --out $validation_folder/voting_results \ - -f --log_level DEBUG --multi_parameters 27 \ - --minimal_vote 0.4 --tractogram_clustering 8 10 12 \ - --processes 4 --seeds 0 - done - done -done diff --git a/scripts/rl_test_ismrm2015.sh b/scripts/rl_test_ismrm2015.sh deleted file mode 100755 index 740b061..0000000 --- a/scripts/rl_test_ismrm2015.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/usr/bin/env bash - -set -e - -# This should point to your dataset folder -DATASET_FOLDER=${TRACK_TO_LEARN_DATA} - -# Should be relatively stable -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - -# Data params -dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - -n_actor=10000 -npv=7 -min_length=20 -max_length=200 - -EXPERIMENT=$1 -ID=$2 - -validstds=(0.0 0.1) -subjectids=(ismrm2015) -seeds=(1111 2222 3333 
4444 5555) - -for SEED in "${seeds[@]}" -do - for SUBJECT_ID in "${subjectids[@]}" - do - for prob in "${validstds[@]}" - do - EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments - SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$SEED" - - dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 - reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - - echo $DEST_FOLDER/model/hyperparameters.json - python ttl_validation.py \ - "$DEST_FOLDER" \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${reference_file}" \ - $DEST_FOLDER/model \ - $DEST_FOLDER/model/hyperparameters.json \ - --prob="${prob}" \ - --npv="${npv}" \ - --n_actor="${n_actor}" \ - --min_length="$min_length" \ - --max_length="$max_length" \ - --use_gpu \ - --fa_map="$DATASET_FOLDER"/datasets/${SUBJECT_ID}/dti/"${SUBJECT_ID}"_fa.nii.gz \ - --remove_invalid_streamlines - - validation_folder=$DEST_FOLDER/scoring_"${prob}"_"${SUBJECT_ID}"_${npv} - - mkdir -p $validation_folder - - mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ - - python scripts/score_tractogram.py \ - $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ - "$SCORING_DATA" \ - $validation_folder \ - --save_full_vc \ - --save_full_ic \ - --save_full_nc \ - --compute_ic_ib \ - --save_ib \ - --save_vb -f -v - done - done -done diff --git a/scripts/rl_test_ismrm2015_exp2.sh b/scripts/rl_test_ismrm2015_exp2.sh deleted file mode 100755 index 1ba6e37..0000000 --- a/scripts/rl_test_ismrm2015_exp2.sh +++ /dev/null @@ -1,79 +0,0 @@ -#!/usr/bin/env bash - -set -e - -# This should point to your dataset folder -DATASET_FOLDER=${TRACK_TO_LEARN_DATA} - -# Should be relatively stable -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - -# Data params 
-dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - -n_actor=10000 -npv=60 -min_length=20 -max_length=200 - -EXPERIMENT=$1 -ID=$2 -SEED=$3 - -validstds=(0.0 0.1 0.2) -subjectids=(ismrm2015) -seeds=(1111 2222 3333 4444 5555) - -# for SEED in "${seeds[@]}" -# do -for SUBJECT_ID in "${subjectids[@]}" -do - for prob in "${validstds[@]}" - do - EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments - SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$SEED" - - dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 - reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - - echo $DEST_FOLDER/model/hyperparameters.json - python ttl_validation.py \ - "$DEST_FOLDER" \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${reference_file}" \ - $DEST_FOLDER/model \ - $DEST_FOLDER/model/hyperparameters.json \ - --prob="${prob}" \ - --npv="${npv}" \ - --n_actor="${n_actor}" \ - --min_length="$min_length" \ - --max_length="$max_length" \ - --interface_seeding \ - --use_gpu \ - --fa_map="$DATASET_FOLDER"/datasets/${SUBJECT_ID}/dti/"${SUBJECT_ID}"_fa.nii.gz \ - --remove_invalid_streamlines - - validation_folder=$DEST_FOLDER/scoring_"${prob}"_"${SUBJECT_ID}"_${npv} - - mkdir -p $validation_folder - - mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ - - python scripts/score_tractogram.py $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ - "$SCORING_DATA" \ - $validation_folder \ - --save_full_vc \ - --save_full_ic \ - --save_full_nc \ - --compute_ic_ib \ - --save_ib \ - --save_vb -f -v - done -done -# done diff --git a/scripts/rl_test_ismrm2015_exp3.sh b/scripts/rl_test_ismrm2015_exp3.sh deleted file mode 100755 index 43e1669..0000000 --- a/scripts/rl_test_ismrm2015_exp3.sh +++ 
/dev/null @@ -1,78 +0,0 @@ -#!/usr/bin/env bash - -set -e - -# This should point to your dataset folder -DATASET_FOLDER=${TRACK_TO_LEARN_DATA} - -# Should be relatively stable -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - -# Data params -dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - -n_actor=10000 -npv=7 -min_length=20 -max_length=200 - -EXPERIMENT=$1 -ID=$2 - -validstds=(0.0 0.1) -subjectids=(ismrm2015) -seeds=(1111 2222 3333 4444 5555) - -for SEED in "${seeds[@]}" -do - for SUBJECT_ID in "${subjectids[@]}" - do - for prob in "${validstds[@]}" - do - EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments - SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$SEED" - - dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 - reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - - echo $DEST_FOLDER/model/hyperparameters.json - python ttl_validation.py \ - "$DEST_FOLDER" \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${reference_file}" \ - $DEST_FOLDER/model \ - $DEST_FOLDER/model/hyperparameters.json \ - --prob="${prob}" \ - --npv="${npv}" \ - --n_actor="${n_actor}" \ - --min_length="$min_length" \ - --max_length="$max_length" \ - --no_retrack \ - --use_gpu \ - --fa_map="$DATASET_FOLDER"/datasets/${SUBJECT_ID}/dti/"${SUBJECT_ID}"_fa.nii.gz \ - --remove_invalid_streamlines - - validation_folder=$DEST_FOLDER/scoring_"${prob}"_"${SUBJECT_ID}"_${npv} - - mkdir -p $validation_folder - - mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ - - python scripts/score_tractogram.py $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ - "$SCORING_DATA" \ - $validation_folder \ - --save_full_vc \ 
- --save_full_ic \ - --save_full_nc \ - --compute_ic_ib \ - --save_ib \ - --save_vb -f -v - done - done -done diff --git a/scripts/rl_test_ismrm2015_nowm.sh b/scripts/rl_test_ismrm2015_nowm.sh deleted file mode 100755 index ed1b9fe..0000000 --- a/scripts/rl_test_ismrm2015_nowm.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/usr/bin/env bash - -set -e - -# This should point to your dataset folder -DATASET_FOLDER=${TRACK_TO_LEARN_DATA} - -# Should be relatively stable -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - -# Data params -dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - -n_actor=10000 -npv=7 -min_length=20 -max_length=200 - -EXPERIMENT=$1 -ID=$2 - -validstds=(0.0 0.1) -subjectids=(ismrm2015_nowm) -seeds=(1111 2222 3333 4444 5555) - -for SEED in "${seeds[@]}" -do - for SUBJECT_ID in "${subjectids[@]}" - do - for prob in "${validstds[@]}" - do - EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments - SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$SEED" - - dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 - reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - - echo $DEST_FOLDER/model/hyperparameters.json - python ttl_validation.py \ - "$DEST_FOLDER" \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${reference_file}" \ - $DEST_FOLDER/model \ - $DEST_FOLDER/model/hyperparameters.json \ - --prob="${prob}" \ - --npv="${npv}" \ - --n_actor="${n_actor}" \ - --min_length="$min_length" \ - --max_length="$max_length" \ - --use_gpu \ - --fa_map="$DATASET_FOLDER"/datasets/${SUBJECT_ID}/dti/"${SUBJECT_ID}"_fa.nii.gz \ - --remove_invalid_streamlines - - validation_folder=$DEST_FOLDER/scoring_"${prob}"_"${SUBJECT_ID}"_${npv} - - mkdir -p 
$validation_folder - - mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ - - python scripts/score_tractogram.py \ - $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ - "$SCORING_DATA" \ - $validation_folder \ - --save_full_vc \ - --save_full_ic \ - --save_full_nc \ - --compute_ic_ib \ - --save_ib \ - --save_vb -f -v - done - done -done diff --git a/scripts/rl_test_ismrm2015_raw.sh b/scripts/rl_test_ismrm2015_raw.sh deleted file mode 100755 index ef8ed2b..0000000 --- a/scripts/rl_test_ismrm2015_raw.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/usr/bin/env bash - -set -e - -# This should point to your dataset folder -DATASET_FOLDER=${TRACK_TO_LEARN_DATA} - -# Should be relatively stable -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - -# Data params -dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - -n_actor=10000 -npv=7 -min_length=20 -max_length=200 - -EXPERIMENT=$1 -ID=$2 - -validstds=(0.0 0.1) -subjectids=(ismrm2015_raw) -seeds=(1111 2222 3333 4444 5555) - -for SEED in "${seeds[@]}" -do - for SUBJECT_ID in "${subjectids[@]}" - do - for prob in "${validstds[@]}" - do - EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments - SCORING_DATA=${DATASET_FOLDER}/datasets/${SUBJECT_ID}/scoring_data - DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$SEED" - - dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 - reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - - echo $DEST_FOLDER/model/hyperparameters.json - python ttl_validation.py \ - "$DEST_FOLDER" \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${reference_file}" \ - $DEST_FOLDER/model \ - $DEST_FOLDER/model/hyperparameters.json \ - --prob="${prob}" \ - --npv="${npv}" \ - --n_actor="${n_actor}" \ - 
--min_length="$min_length" \ - --max_length="$max_length" \ - --use_gpu \ - --fa_map="$DATASET_FOLDER"/datasets/${SUBJECT_ID}/dti/"${SUBJECT_ID}"_fa.nii.gz \ - --remove_invalid_streamlines - - validation_folder=$DEST_FOLDER/scoring_"${prob}"_"${SUBJECT_ID}"_${npv} - - mkdir -p $validation_folder - - mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ - - python scripts/score_tractogram.py \ - $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ - "$SCORING_DATA" \ - $validation_folder \ - --save_full_vc \ - --save_full_ic \ - --save_full_nc \ - --compute_ic_ib \ - --save_ib \ - --save_vb -f -v - done - done -done diff --git a/scripts/rl_test_tractoinferno.sh b/scripts/rl_test_tractoinferno.sh deleted file mode 100755 index f1a1367..0000000 --- a/scripts/rl_test_tractoinferno.sh +++ /dev/null @@ -1,31 +0,0 @@ -#!/bin/bash - -seeds=(1111 2222 3333 4444 5555) -noise=(0.1) - -for rng_seed in "${seeds[@]}" -do - - for prob in "${noise[@]}" - do - - experiment_path=${TRACK_TO_LEARN_DATA}/experiments/ - tracking_folder=${experiment_path}/$1/$2/$rng_seed/test_scoring_${noise}_tractoinferno_1006_10 - - data_path=${TRACK_TO_LEARN_DATA}/datasets/tractoinferno/sub-1006 - - mkdir -p $tracking_folder - - python ttl_track.py \ - ${data_path}/fodf/sub-1006__fodf_6_descoteaux.nii.gz \ - ${data_path}/mask/sub-1006__mask_wm.nii.gz \ - ${data_path}/mask/sub-1006__mask_wm.nii.gz \ - ${data_path}/mask/sub-1006__mask_wm.nii.gz \ - ${data_path}/anat/sub-1006__T1w.nii.gz \ - ${experiment_path}/$1/$2/$rng_seed/model \ - ${experiment_path}/$1/$2/$rng_seed/model/hyperparameters.json \ - ${tracking_folder}/tractogram_${prob}_tractoinferno_1006_10.trk \ - --fa_map ${data_path}/dti/sub-1006__fa.nii.gz \ - --npv 10 --n_actor 25000 --compress 0.1 --prob $prob - done -done diff --git a/scripts/rl_test_tractoinferno_exp2.sh b/scripts/rl_test_tractoinferno_exp2.sh deleted file mode 100755 index 0ef3a30..0000000 --- 
a/scripts/rl_test_tractoinferno_exp2.sh +++ /dev/null @@ -1,33 +0,0 @@ -#!/bin/bash - -seeds=(1111 2222 3333 4444 5555) -noise=(0.1) - -for rng_seed in "${seeds[@]}" -do - - for prob in "${noise[@]}" - do - - echo $1 $2 - - experiment_path=${TRACK_TO_LEARN_DATA}/experiments/ - tracking_folder=${experiment_path}/$1/$2/$rng_seed/scoring_${noise}_tractoinferno_1006_10 - - data_path=${TRACK_TO_LEARN_DATA}/datasets/tractoinferno/sub-1006 - - mkdir -p $tracking_folder - - python ttl_track.py \ - ${data_path}/fodf/sub-1006__fodf_6_descoteaux.nii.gz \ - ${data_path}/mask/sub-1006__mask_wm.nii.gz \ - ${data_path}/maps/sub-1006__interface.nii.gz \ - ${data_path}/mask/sub-1006__mask_wm.nii.gz \ - ${data_path}/anat/sub-1006__T1w.nii.gz \ - ${experiment_path}/$1/$2/$rng_seed/model \ - ${experiment_path}/$1/$2/$rng_seed/model/hyperparameters.json \ - ${tracking_folder}/tractogram_${prob}_tractoinferno_1006_10.trk \ - --fa_map ${data_path}/dti/sub-1006__fa.nii.gz \ - --npv 20 --n_actor 25000 --compress 0.1 --valid $prob - done -done diff --git a/scripts/run_tractometer.sh b/scripts/run_tractometer.sh deleted file mode 100755 index d68fed4..0000000 --- a/scripts/run_tractometer.sh +++ /dev/null @@ -1,12 +0,0 @@ -# $1 Input tractogram -# $2 Scoring data -# $3 Output folder - -python scripts/score_tractogram.py $1 \ - $2 \ - $3 \ - --save_full_vc \ - --save_full_ic \ - --save_full_nc \ - --save_ib \ - --save_vb -f -v diff --git a/scripts/sac_auto_train_asym.sh b/scripts/sac_auto_train_asym.sh deleted file mode 100755 index 9d2c89a..0000000 --- a/scripts/sac_auto_train_asym.sh +++ /dev/null @@ -1,79 +0,0 @@ -#!/bin/bash - -# Script for "Incorporating anatomical priors into Track-to-Learn" - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -mkdir -p $WORK_DATASET_FOLDER - -VALIDATION_SUBJECT_ID=fibercup_asym -SUBJECT_ID=fibercup_asym -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments 
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -echo "Transfering data to working folder..." -mkdir -p $WORK_DATASET_FOLDER/datasets/ - -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -max_ep=500 # Chosen empirically -log_interval=50 # Log at n steps -lr=0.0005 # Learning rate -gamma=0.75 # Gamma for reward discounting -alpha=0.2 - -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -npv=2 # Seed per voxel -theta=25 # Maximum angle for streamline curvature - -EXPERIMENT=SACAutoFiberCupTrain_Asym - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --interface_seeding \ - --asymmetric \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git 
a/scripts/sac_auto_train_cmc.sh b/scripts/sac_auto_train_cmc.sh deleted file mode 100755 index 170c853..0000000 --- a/scripts/sac_auto_train_cmc.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -# Script for "Incorporating anatomical priors into Track-to-Learn" - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -mkdir -p $WORK_DATASET_FOLDER - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -echo "Transfering data to working folder..." -mkdir -p $WORK_DATASET_FOLDER/datasets/ - -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -max_ep=500 # Chosen empirically -log_interval=50 # Log at n steps -lr=0.0001 # Learning rate -gamma=0.85 # Gamma for reward discounting - -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -npv=2 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SACAutoFiberCupTrain_Cmc - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --interface_seeding \ - --cmc \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/sac_auto_train_cmc_asym.sh b/scripts/sac_auto_train_cmc_asym.sh deleted file mode 100755 index 017703c..0000000 --- a/scripts/sac_auto_train_cmc_asym.sh +++ /dev/null @@ -1,79 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -# Script for "Incorporating anatomical priors into Track-to-Learn" - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -mkdir -p $WORK_DATASET_FOLDER - -VALIDATION_SUBJECT_ID=fibercup_asym -SUBJECT_ID=fibercup_asym -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -echo "Transfering data to working folder..." 
-mkdir -p $WORK_DATASET_FOLDER/datasets/ - -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -max_ep=500 # Chosen empirically -log_interval=50 # Log at n steps -lr=0.0001 # Learning rate -gamma=0.85 # Gamma for reward discounting - -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -npv=2 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SACAutoFiberCupTrain_CmcAsym - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --interface_seeding \ - --asymmetric \ - --cmc \ - --use_gpu \ - --use_comet \ - --run_tractometer - # --render - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/sac_auto_train_exp1_fibercup.sh b/scripts/sac_auto_train_exp1_fibercup.sh deleted file mode 100755 index 13e698f..0000000 --- a/scripts/sac_auto_train_exp1_fibercup.sh +++ /dev/null @@ -1,74 +0,0 
@@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0005 # Learning rate -gamma=0.5 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_FiberCupTrainExp1 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/sac_auto_train_exp1_ismrm2015.sh b/scripts/sac_auto_train_exp1_ismrm2015.sh deleted file mode 100755 index de4edd4..0000000 --- a/scripts/sac_auto_train_exp1_ismrm2015.sh +++ /dev/null @@ -1,76 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0001 # Learning rate -gamma=0.5 # Gamma for reward discounting -alpha=0.2 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=2 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_ISMRM2015TrainExp1 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --alpha=${alpha} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/sac_auto_train_exp2_fibercup.sh b/scripts/sac_auto_train_exp2_fibercup.sh deleted file mode 100755 index 8c5a421..0000000 --- a/scripts/sac_auto_train_exp2_fibercup.sh +++ /dev/null @@ -1,76 +0,0 @@ -#!/bin/bash - -set -e 
# exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00005 # Learning rate -gamma=0.75 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=100 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_FiberCupTrainExp2 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --interface_seeding \ - --use_comet \ - --use_gpu \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/sac_auto_train_exp2_ismrm2015.sh b/scripts/sac_auto_train_exp2_ismrm2015.sh deleted file mode 100755 index 336f521..0000000 --- a/scripts/sac_auto_train_exp2_ismrm2015.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n episodes
-lr=0.00005 # Learning rate
-gamma=0.75 # Gamma for reward discounting
-alpha=0.2
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=20 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-
-EXPERIMENT=SAC_Auto_ISMRM2015TrainExp2
-
-ID=$(date +"%F-%H_%M_%S")
-
-seeds=(5555)
-
-for rng_seed in "${seeds[@]}"
-do
-
-  DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-  python TrackToLearn/trainers/sac_auto_train.py \
-    $DEST_FOLDER \
-    "$EXPERIMENT" \
-    "$ID" \
-    "${dataset_file}" \
-    "${SUBJECT_ID}" \
-    "${validation_dataset_file}" \
-    "${VALIDATION_SUBJECT_ID}" \
-    "${reference_file}" \
-    "${SCORING_DATA}" \
-    --max_ep=${max_ep} \
-    --log_interval=${log_interval} \
-    --lr=${lr} \
-    --gamma=${gamma} \
-    --alpha=${alpha} \
-    --rng_seed=${rng_seed} \
-    --npv=${npv} \
-    --theta=${theta} \
-    --interface_seeding \
-    --use_comet \
-    --use_gpu \
-    --run_tractometer
-
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/
-  cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/
-
-done
diff --git a/scripts/sac_auto_train_exp3_fibercup.sh b/scripts/sac_auto_train_exp3_fibercup.sh
deleted file mode 100755
index 0fd57a0..0000000
--- a/scripts/sac_auto_train_exp3_fibercup.sh
+++ /dev/null
@@ -1,75 +0,0 @@
-#!/bin/bash
-
-set -e # exit if any command fails
-
-DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/
-WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}
-
-echo "Transfering data to working folder..."
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n episodes
-lr=0.001 # Learning rate
-gamma=0.75 # Gamma for reward discounting
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-
-EXPERIMENT=SAC_Auto_FiberCupTrainExp3
-
-ID=$(date +"%F-%H_%M_%S")
-
-seeds=(1111 2222 3333 4444 5555)
-
-for rng_seed in "${seeds[@]}"
-do
-
-  DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-  python TrackToLearn/trainers/sac_auto_train.py \
-    $DEST_FOLDER \
-    "$EXPERIMENT" \
-    "$ID" \
-    "${dataset_file}" \
-    "${SUBJECT_ID}" \
-    "${validation_dataset_file}" \
-    "${VALIDATION_SUBJECT_ID}" \
-    "${reference_file}" \
-    "${SCORING_DATA}" \
-    --max_ep=${max_ep} \
-    --log_interval=${log_interval} \
-    --lr=${lr} \
-    --gamma=${gamma} \
-    --rng_seed=${rng_seed} \
-    --npv=${npv} \
-    --theta=${theta} \
-    --no_retrack \
-    --use_gpu \
-    --use_comet \
-    --run_tractometer
-
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/
-  cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/
-
-done
diff --git a/scripts/sac_auto_train_exp3_ismrm2015.sh b/scripts/sac_auto_train_exp3_ismrm2015.sh
deleted file mode 100755
index a56f9d4..0000000
--- a/scripts/sac_auto_train_exp3_ismrm2015.sh
+++ /dev/null
@@ -1,77 +0,0 @@
-#!/bin/bash
-
-set -e # exit if any command fails
-
-DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/
-WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}
-
-echo "Transfering data to working folder..."
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n episodes
-lr=0.001 # Learning rate
-gamma=0.5 # Gamma for reward discounting
-alpha=0.2
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=2 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-
-EXPERIMENT=SAC_Auto_ISMRM2015TrainExp3
-
-ID=$(date +"%F-%H_%M_%S")
-
-seeds=(1111 2222 3333 4444 5555)
-
-for rng_seed in "${seeds[@]}"
-do
-
-  DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-  python TrackToLearn/trainers/sac_auto_train.py \
-    $DEST_FOLDER \
-    "$EXPERIMENT" \
-    "$ID" \
-    "${dataset_file}" \
-    "${SUBJECT_ID}" \
-    "${validation_dataset_file}" \
-    "${VALIDATION_SUBJECT_ID}" \
-    "${reference_file}" \
-    "${SCORING_DATA}" \
-    --max_ep=${max_ep} \
-    --log_interval=${log_interval} \
-    --lr=${lr} \
-    --gamma=${gamma} \
-    --alpha=${alpha} \
-    --rng_seed=${rng_seed} \
-    --npv=${npv} \
-    --theta=${theta} \
-    --no_retrack \
-    --use_gpu \
-    --use_comet \
-    --run_tractometer
-
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/
-  cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/
-
-done
diff --git a/scripts/sac_auto_train_exp4_0dirs_fibercup.sh b/scripts/sac_auto_train_exp4_0dirs_fibercup.sh
deleted file mode 100755
index 89423d7..0000000
--- a/scripts/sac_auto_train_exp4_0dirs_fibercup.sh
+++ /dev/null
@@ -1,75 +0,0 @@
-#!/bin/bash
-
-set -e # exit if any command fails
-
-DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/
-WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}
-
-echo "Transfering data to working folder..."
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n episodes
-lr=0.00001 # Learning rate
-gamma=0.9 # Gamma for reward discounting
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-
-EXPERIMENT=SAC_Auto_FiberCup0dirsTrainExp4
-
-ID=$(date +"%F-%H_%M_%S")
-
-seeds=(1111 2222 3333 4444 5555)
-
-for rng_seed in "${seeds[@]}"
-do
-
-  DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-  python TrackToLearn/trainers/sac_auto_train.py \
-    $DEST_FOLDER \
-    "$EXPERIMENT" \
-    "$ID" \
-    "${dataset_file}" \
-    "${SUBJECT_ID}" \
-    "${validation_dataset_file}" \
-    "${VALIDATION_SUBJECT_ID}" \
-    "${reference_file}" \
-    "${SCORING_DATA}" \
-    --max_ep=${max_ep} \
-    --log_interval=${log_interval} \
-    --lr=${lr} \
-    --gamma=${gamma} \
-    --rng_seed=${rng_seed} \
-    --npv=${npv} \
-    --theta=${theta} \
-    --n_dirs=0 \
-    --use_gpu \
-    --use_comet \
-    --run_tractometer
-
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/
-  cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/
-
-done
diff --git a/scripts/sac_auto_train_exp4_0dirs_ismrm2015.sh b/scripts/sac_auto_train_exp4_0dirs_ismrm2015.sh
deleted file mode 100755
index b8cfd4b..0000000
--- a/scripts/sac_auto_train_exp4_0dirs_ismrm2015.sh
+++ /dev/null
@@ -1,75 +0,0 @@
-#!/bin/bash
-
-set -e # exit if any command fails
-
-DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/
-WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}
-
-echo "Transfering data to working folder..."
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n episodes
-lr=0.00005 # Learning rate
-gamma=0.5 # Gamma for reward discounting
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=2 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-
-EXPERIMENT=SAC_Auto_ISMRM20150dirsTrainExp4
-
-ID=$(date +"%F-%H_%M_%S")
-
-seeds=(1111 2222 3333 4444 5555)
-
-for rng_seed in "${seeds[@]}"
-do
-
-  DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-  python TrackToLearn/trainers/sac_auto_train.py \
-    $DEST_FOLDER \
-    "$EXPERIMENT" \
-    "$ID" \
-    "${dataset_file}" \
-    "${SUBJECT_ID}" \
-    "${validation_dataset_file}" \
-    "${VALIDATION_SUBJECT_ID}" \
-    "${reference_file}" \
-    "${SCORING_DATA}" \
-    --max_ep=${max_ep} \
-    --log_interval=${log_interval} \
-    --lr=${lr} \
-    --gamma=${gamma} \
-    --rng_seed=${rng_seed} \
-    --npv=${npv} \
-    --theta=${theta} \
-    --n_dirs=0 \
-    --use_gpu \
-    --use_comet \
-    --run_tractometer
-
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/
-  cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/
-
-done
diff --git a/scripts/sac_auto_train_exp4_2dirs_fibercup.sh b/scripts/sac_auto_train_exp4_2dirs_fibercup.sh
deleted file mode 100755
index 0250613..0000000
--- a/scripts/sac_auto_train_exp4_2dirs_fibercup.sh
+++ /dev/null
@@ -1,75 +0,0 @@
-#!/bin/bash
-
-set -e # exit if any command fails
-
-DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/
-WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}
-
-echo "Transfering data to working folder..."
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n episodes
-lr=0.001 # Learning rate
-gamma=0.5 # Gamma for reward discounting
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-
-EXPERIMENT=SAC_Auto_FiberCup2dirsTrainExp4
-
-ID=$(date +"%F-%H_%M_%S")
-
-seeds=(1111 2222 3333 4444 5555)
-
-for rng_seed in "${seeds[@]}"
-do
-
-  DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-  python TrackToLearn/trainers/sac_auto_train.py \
-    $DEST_FOLDER \
-    "$EXPERIMENT" \
-    "$ID" \
-    "${dataset_file}" \
-    "${SUBJECT_ID}" \
-    "${validation_dataset_file}" \
-    "${VALIDATION_SUBJECT_ID}" \
-    "${reference_file}" \
-    "${SCORING_DATA}" \
-    --max_ep=${max_ep} \
-    --log_interval=${log_interval} \
-    --lr=${lr} \
-    --gamma=${gamma} \
-    --rng_seed=${rng_seed} \
-    --npv=${npv} \
-    --theta=${theta} \
-    --n_dirs=2 \
-    --use_gpu \
-    --use_comet \
-    --run_tractometer
-
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/
-  cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/
-
-done
diff --git a/scripts/sac_auto_train_exp4_2dirs_ismrm2015.sh b/scripts/sac_auto_train_exp4_2dirs_ismrm2015.sh
deleted file mode 100755
index fec3c78..0000000
--- a/scripts/sac_auto_train_exp4_2dirs_ismrm2015.sh
+++ /dev/null
@@ -1,75 +0,0 @@
-#!/bin/bash
-
-set -e # exit if any command fails
-
-DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/
-WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}
-
-echo "Transfering data to working folder..."
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n episodes
-lr=0.0001 # Learning rate
-gamma=0.5 # Gamma for reward discounting
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=2 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-
-EXPERIMENT=SAC_Auto_ISMRM20152dirsTrainExp4
-
-ID=$(date +"%F-%H_%M_%S")
-
-seeds=(1111 2222 3333 4444 5555)
-
-for rng_seed in "${seeds[@]}"
-do
-
-  DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-  python TrackToLearn/trainers/sac_auto_train.py \
-    $DEST_FOLDER \
-    "$EXPERIMENT" \
-    "$ID" \
-    "${dataset_file}" \
-    "${SUBJECT_ID}" \
-    "${validation_dataset_file}" \
-    "${VALIDATION_SUBJECT_ID}" \
-    "${reference_file}" \
-    "${SCORING_DATA}" \
-    --max_ep=${max_ep} \
-    --log_interval=${log_interval} \
-    --lr=${lr} \
-    --gamma=${gamma} \
-    --rng_seed=${rng_seed} \
-    --npv=${npv} \
-    --theta=${theta} \
-    --n_dirs=2 \
-    --use_gpu \
-    --use_comet \
-    --run_tractometer
-
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/
-  cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/
-
-done
diff --git a/scripts/sac_auto_train_exp4_nowm_fibercup.sh b/scripts/sac_auto_train_exp4_nowm_fibercup.sh
deleted file mode 100755
index 7cc3514..0000000
--- a/scripts/sac_auto_train_exp4_nowm_fibercup.sh
+++ /dev/null
@@ -1,76 +0,0 @@
-#!/bin/bash
-
-set -e # exit if any command fails
-
-DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/
-WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/
-
-VALIDATION_SUBJECT_ID=fibercup_nowm
-SUBJECT_ID=fibercup_nowm
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}
-
-echo "Transfering data to working folder..."
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-cp -rn ${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID} ${WORK_DATASET_FOLDER}/datasets/
-
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n episodes
-lr=0.00001 # Learning rate
-gamma=0.85 # Gamma for reward discounting
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-
-EXPERIMENT=SAC_Auto_FiberCupNoWMTrainExp4
-
-ID=$(date +"%F-%H_%M_%S")
-
-seeds=(1111 2222 3333 4444 5555)
-
-for rng_seed in "${seeds[@]}"
-do
-
-  DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-  python TrackToLearn/trainers/sac_auto_train.py \
-    $DEST_FOLDER \
-    "$EXPERIMENT" \
-    "$ID" \
-    "${dataset_file}" \
-    "${SUBJECT_ID}" \
-    "${validation_dataset_file}" \
-    "${VALIDATION_SUBJECT_ID}" \
-    "${reference_file}" \
-    "${SCORING_DATA}" \
-    --max_ep=${max_ep} \
-    --log_interval=${log_interval} \
-    --lr=${lr} \
-    --gamma=${gamma} \
-    --rng_seed=${rng_seed} \
-    --npv=${npv} \
-    --theta=${theta} \
-    --use_gpu \
-    --use_comet \
-    --run_tractometer
-
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/
-  cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/
-
-done
diff --git a/scripts/sac_auto_train_exp4_nowm_ismrm2015.sh b/scripts/sac_auto_train_exp4_nowm_ismrm2015.sh
deleted file mode 100755
index 40f1ec6..0000000
--- a/scripts/sac_auto_train_exp4_nowm_ismrm2015.sh
+++ /dev/null
@@ -1,74 +0,0 @@
-#!/bin/bash
-
-set -e # exit if any command fails
-
-DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/
-WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/
-
-VALIDATION_SUBJECT_ID=ismrm2015_nowm
-SUBJECT_ID=ismrm2015_nowm
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}
-
-echo "Transfering data to working folder..."
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n episodes
-lr=0.00001 # Learning rate
-gamma=0.75 # Gamma for reward discounting
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=2 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-
-EXPERIMENT=SAC_Auto_ISMRM2015NoWMTrainExp4
-
-ID=$(date +"%F-%H_%M_%S")
-
-seeds=(1111 2222 3333 4444 5555)
-
-for rng_seed in "${seeds[@]}"
-do
-
-  DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-  python TrackToLearn/trainers/sac_auto_train.py \
-    $DEST_FOLDER \
-    "$EXPERIMENT" \
-    "$ID" \
-    "${dataset_file}" \
-    "${SUBJECT_ID}" \
-    "${validation_dataset_file}" \
-    "${VALIDATION_SUBJECT_ID}" \
-    "${reference_file}" \
-    "${SCORING_DATA}" \
-    --max_ep=${max_ep} \
-    --log_interval=${log_interval} \
-    --lr=${lr} \
-    --gamma=${gamma} \
-    --rng_seed=${rng_seed} \
-    --npv=${npv} \
-    --theta=${theta} \
-    --use_gpu \
-    --use_comet \
-    --run_tractometer
-
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/
-  cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/
-
-done
diff --git a/scripts/sac_auto_train_exp4_raw_fibercup.sh b/scripts/sac_auto_train_exp4_raw_fibercup.sh
deleted file mode 100755
index c5ae00a..0000000
--- a/scripts/sac_auto_train_exp4_raw_fibercup.sh
+++ /dev/null
@@ -1,74 +0,0 @@
-#!/bin/bash
-
-set -e # exit if any command fails
-
-DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/
-WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/
-
-VALIDATION_SUBJECT_ID=fibercup_raw
-SUBJECT_ID=fibercup_raw
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}
-
-echo "Transfering data to working folder..."
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n episodes
-lr=0.0001 # Learning rate
-gamma=0.5 # Gamma for reward discounting
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-
-EXPERIMENT=SAC_Auto_FiberCupRawTrainExp4
-
-ID=$(date +"%F-%H_%M_%S")
-
-seeds=(1111 2222 3333 4444 5555)
-
-for rng_seed in "${seeds[@]}"
-do
-
-  DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-  python TrackToLearn/trainers/sac_auto_train.py \
-    $DEST_FOLDER \
-    "$EXPERIMENT" \
-    "$ID" \
-    "${dataset_file}" \
-    "${SUBJECT_ID}" \
-    "${validation_dataset_file}" \
-    "${VALIDATION_SUBJECT_ID}" \
-    "${reference_file}" \
-    "${SCORING_DATA}" \
-    --max_ep=${max_ep} \
-    --log_interval=${log_interval} \
-    --lr=${lr} \
-    --gamma=${gamma} \
-    --rng_seed=${rng_seed} \
-    --npv=${npv} \
-    --theta=${theta} \
-    --use_gpu \
-    --use_comet \
-    --run_tractometer
-
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/
-  cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/
-
-done
diff --git a/scripts/sac_auto_train_exp4_raw_ismrm2015.sh b/scripts/sac_auto_train_exp4_raw_ismrm2015.sh
deleted file mode 100755
index b42a39f..0000000
--- a/scripts/sac_auto_train_exp4_raw_ismrm2015.sh
+++ /dev/null
@@ -1,74 +0,0 @@
-#!/bin/bash
-
-set -e # exit if any command fails
-
-DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/
-WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/
-
-VALIDATION_SUBJECT_ID=ismrm2015_raw
-SUBJECT_ID=ismrm2015_raw
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}
-
-echo "Transfering data to working folder..."
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n episodes
-lr=0.00005 # Learning rate
-gamma=0.5 # Gamma for reward discounting
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=2 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-
-EXPERIMENT=SAC_Auto_ISMRM2015RawTrainExp4
-
-ID=$(date +"%F-%H_%M_%S")
-
-seeds=(1111 2222 3333 4444 5555)
-
-for rng_seed in "${seeds[@]}"
-do
-
-  DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-  python TrackToLearn/trainers/sac_auto_train.py \
-    $DEST_FOLDER \
-    "$EXPERIMENT" \
-    "$ID" \
-    "${dataset_file}" \
-    "${SUBJECT_ID}" \
-    "${validation_dataset_file}" \
-    "${VALIDATION_SUBJECT_ID}" \
-    "${reference_file}" \
-    "${SCORING_DATA}" \
-    --max_ep=${max_ep} \
-    --log_interval=${log_interval} \
-    --lr=${lr} \
-    --gamma=${gamma} \
-    --rng_seed=${rng_seed} \
-    --npv=${npv} \
-    --theta=${theta} \
-    --use_gpu \
-    --use_comet \
-    --run_tractometer
-
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/
-  cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/
-
-done
diff --git a/scripts/sac_auto_train_exp5_length_0.01_fibercup.sh b/scripts/sac_auto_train_exp5_length_0.01_fibercup.sh
deleted file mode 100755
index 6f22edd..0000000
--- a/scripts/sac_auto_train_exp5_length_0.01_fibercup.sh
+++ /dev/null
@@ -1,77 +0,0 @@
-#!/bin/bash
-
-set -e # exit if any command fails
-
-DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/
-WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}
-
-echo "Transfering data to working folder..."
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n episodes
-lr=0.00001 # Learning rate
-gamma=0.75 # Gamma for reward discounting
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-
-EXPERIMENT=SAC_Auto_FiberCupTrainLength0.01Exp5
-
-ID=$(date +"%F-%H_%M_%S")
-
-seeds=(1111 2222 3333 4444 5555)
-
-rng_seed=$1
-
-# for rng_seed in "${seeds[@]}"
-# do
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/trainers/sac_auto_train.py \
-  $DEST_FOLDER \
-  "$EXPERIMENT" \
-  "$ID" \
-  "${dataset_file}" \
-  "${SUBJECT_ID}" \
-  "${validation_dataset_file}" \
-  "${VALIDATION_SUBJECT_ID}" \
-  "${reference_file}" \
-  "${SCORING_DATA}" \
-  --max_ep=${max_ep} \
-  --log_interval=${log_interval} \
-  --lr=${lr} \
-  --gamma=${gamma} \
-  --rng_seed=${rng_seed} \
-  --npv=${npv} \
-  --theta=${theta} \
-  --length_weighting=0.01 \
-  --use_gpu \
-  --use_comet \
-  --run_tractometer
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/
-
-# done
diff --git a/scripts/sac_auto_train_exp5_length_0.01_ismrm2015.sh b/scripts/sac_auto_train_exp5_length_0.01_ismrm2015.sh
deleted file mode 100755
index b784634..0000000
--- a/scripts/sac_auto_train_exp5_length_0.01_ismrm2015.sh
+++ /dev/null
@@ -1,75 +0,0 @@
-#!/bin/bash
-
-set -e # exit if any command fails
-
-DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/
-WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}
-
-echo "Transfering data to working folder..."
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n episodes
-lr=0.001 # Learning rate
-gamma=0.5 # Gamma for reward discounting
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-
-EXPERIMENT=SAC_Auto_ISMRM2015TrainLength0.01Exp5
-
-ID=$(date +"%F-%H_%M_%S")
-
-seeds=(1111 2222 3333 4444 5555)
-
-for rng_seed in "${seeds[@]}"
-do
-
-  DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-  python TrackToLearn/trainers/sac_auto_train.py \
-    $DEST_FOLDER \
-    "$EXPERIMENT" \
-    "$ID" \
-    "${dataset_file}" \
-    "${SUBJECT_ID}" \
-    "${validation_dataset_file}" \
-    "${VALIDATION_SUBJECT_ID}" \
-    "${reference_file}" \
-    "${SCORING_DATA}" \
-    --max_ep=${max_ep} \
-    --log_interval=${log_interval} \
-    --lr=${lr} \
-    --gamma=${gamma} \
-    --rng_seed=${rng_seed} \
-    --npv=${npv} \
-    --theta=${theta} \
-    --length_weighting=0.01 \
-    --use_gpu \
-    --use_comet \
-    --run_tractometer
-
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/
-  cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/
-
-done
diff --git a/scripts/sac_auto_train_exp5_length_0.1_fibercup.sh b/scripts/sac_auto_train_exp5_length_0.1_fibercup.sh
deleted file mode 100755
index b341952..0000000
--- a/scripts/sac_auto_train_exp5_length_0.1_fibercup.sh
+++ /dev/null
@@ -1,78 +0,0 @@
-#!/bin/bash
-
-set -e # exit if any command fails
-
-DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/
-WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}
-
-echo "Transfering data to working folder..."
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n episodes
-lr=0.0005 # Learning rate
-gamma=0.5 # Gamma for reward discounting
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-
-EXPERIMENT=SAC_Auto_FiberCupTrainLength0.1Exp5
-
-ID=$(date +"%F-%H_%M_%S")
-
-seeds=(1111 2222 3333 4444 5555)
-
-rng_seed=$1
-
-#
-# for rng_seed in "${seeds[@]}"
-# do
-
-DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-python TrackToLearn/trainers/sac_auto_train.py \
-  $DEST_FOLDER \
-  "$EXPERIMENT" \
-  "$ID" \
-  "${dataset_file}" \
-  "${SUBJECT_ID}" \
-  "${validation_dataset_file}" \
-  "${VALIDATION_SUBJECT_ID}" \
-  "${reference_file}" \
-  "${SCORING_DATA}" \
-  --max_ep=${max_ep} \
-  --log_interval=${log_interval} \
-  --lr=${lr} \
-  --gamma=${gamma} \
-  --rng_seed=${rng_seed} \
-  --npv=${npv} \
-  --theta=${theta} \
-  --length_weighting=0.1 \
-  --use_gpu \
-  --use_comet \
-  --run_tractometer
-
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/
-cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/
-
-# done
diff --git a/scripts/sac_auto_train_exp5_length_0.1_ismrm2015.sh b/scripts/sac_auto_train_exp5_length_0.1_ismrm2015.sh
deleted file mode 100755
index 72c4654..0000000
--- a/scripts/sac_auto_train_exp5_length_0.1_ismrm2015.sh
+++ /dev/null
@@ -1,75 +0,0 @@
-#!/bin/bash
-
-set -e # exit if any command fails
-
-DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/
-WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}
-
-echo "Transfering data to working folder..."
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n episodes
-lr=0.00005 # Learning rate
-gamma=0.5 # Gamma for reward discounting
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-
-EXPERIMENT=SAC_Auto_ISMRM2015TrainLength0.1Exp5
-
-ID=$(date +"%F-%H_%M_%S")
-
-seeds=(1111 2222 3333 4444 5555)
-
-for rng_seed in "${seeds[@]}"
-do
-
-  DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-  python TrackToLearn/trainers/sac_auto_train.py \
-    $DEST_FOLDER \
-    "$EXPERIMENT" \
-    "$ID" \
-    "${dataset_file}" \
-    "${SUBJECT_ID}" \
-    "${validation_dataset_file}" \
-    "${VALIDATION_SUBJECT_ID}" \
-    "${reference_file}" \
-    "${SCORING_DATA}" \
-    --max_ep=${max_ep} \
-    --log_interval=${log_interval} \
-    --lr=${lr} \
-    --gamma=${gamma} \
-    --rng_seed=${rng_seed} \
-    --npv=${npv} \
-    --theta=${theta} \
-    --length_weighting=0.1 \
-    --use_gpu \
-    --use_comet \
-    --run_tractometer
-
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/
-  cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/
-
-done
diff --git a/scripts/sac_auto_train_exp5_length_0.5_fibercup.sh b/scripts/sac_auto_train_exp5_length_0.5_fibercup.sh
deleted file mode 100755
index 852b9bd..0000000
--- a/scripts/sac_auto_train_exp5_length_0.5_fibercup.sh
+++ /dev/null
@@ -1,77 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00005 # Learning rate -gamma=0.5 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_FiberCupTrainLength0.5Exp5 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -rng_seed=$1 - -# for rng_seed in "${seeds[@]}" -# do - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/trainers/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --length_weighting=0.5 \ - --use_gpu \ - --use_comet \ - --run_tractometer - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -# done diff --git a/scripts/sac_auto_train_exp5_length_0.5_ismrm2015.sh b/scripts/sac_auto_train_exp5_length_0.5_ismrm2015.sh deleted file mode 100755 index b035773..0000000 --- a/scripts/sac_auto_train_exp5_length_0.5_ismrm2015.sh +++ /dev/null @@ -1,75 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00005 # Learning rate -gamma=0.5 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_ISMRM2015TrainLength0.5Exp5 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --length_weighting=0.5 \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/sac_auto_train_exp5_length_1_fibercup.sh b/scripts/sac_auto_train_exp5_length_1_fibercup.sh deleted file mode 100755 index 3bc2b30..0000000 --- a/scripts/sac_auto_train_exp5_length_1_fibercup.sh +++ /dev/null @@ 
-1,77 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00001 # Learning rate -gamma=0.5 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_FiberCupTrainLength1Exp5 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -rng_seed=$1 - -# for rng_seed in "${seeds[@]}" -# do - -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/trainers/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --length_weighting=1 \ - --use_gpu \ - --use_comet \ - --run_tractometer - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -# done diff --git a/scripts/sac_auto_train_exp5_length_1_ismrm2015.sh b/scripts/sac_auto_train_exp5_length_1_ismrm2015.sh deleted file mode 100755 index 164266a..0000000 --- a/scripts/sac_auto_train_exp5_length_1_ismrm2015.sh +++ /dev/null @@ -1,75 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0001 # Learning rate -gamma=0.5 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_ISMRM2015TrainLength1Exp5 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --length_weighting=1 \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/sac_auto_train_exp5_length_5_fibercup.sh b/scripts/sac_auto_train_exp5_length_5_fibercup.sh deleted file mode 100755 index dea6105..0000000 --- a/scripts/sac_auto_train_exp5_length_5_fibercup.sh +++ /dev/null @@ -1,77 
+0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00001 # Learning rate -gamma=0.75 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_FiberCupTrainLength5Exp5 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -rng_seed=$1 - -# for rng_seed in "${seeds[@]}" -# do -# -DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - -python TrackToLearn/trainers/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --length_weighting=5 \ - --use_gpu \ - --use_comet \ - --run_tractometer - -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" -mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ -cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -# done diff --git a/scripts/sac_auto_train_exp5_length_5_ismrm2015.sh b/scripts/sac_auto_train_exp5_length_5_ismrm2015.sh deleted file mode 100755 index 0f64bc8..0000000 --- a/scripts/sac_auto_train_exp5_length_5_ismrm2015.sh +++ /dev/null @@ -1,75 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0001 # Learning rate -gamma=0.5 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_ISMRM2015TrainLength5Exp5 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --length_weighting=5 \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/sac_auto_train_exp5_target_100_fibercup.sh b/scripts/sac_auto_train_exp5_target_100_fibercup.sh deleted file mode 100755 index b0f308c..0000000 --- a/scripts/sac_auto_train_exp5_target_100_fibercup.sh +++ /dev/null @@ 
-1,75 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00005 # Learning rate -gamma=0.95 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_FiberCupTrainTarget100Exp5 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --target_bonus_factor=100 \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/sac_auto_train_exp5_target_100_ismrm2015.sh b/scripts/sac_auto_train_exp5_target_100_ismrm2015.sh deleted file mode 100755 index bf095e9..0000000 --- a/scripts/sac_auto_train_exp5_target_100_ismrm2015.sh +++ /dev/null @@ -1,75 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0001 # Learning rate -gamma=0.95 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_ISMRM2015TrainTarget100Exp5 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --target_bonus_factor=100 \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/sac_auto_train_exp5_target_10_fibercup.sh b/scripts/sac_auto_train_exp5_target_10_fibercup.sh deleted file mode 100755 index 431b492..0000000 --- a/scripts/sac_auto_train_exp5_target_10_fibercup.sh +++ /dev/null 
@@ -1,75 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00001 # Learning rate -gamma=0.9 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_FiberCupTrainTarget10Exp5 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --target_bonus_factor=10 \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/sac_auto_train_exp5_target_10_ismrm2015.sh b/scripts/sac_auto_train_exp5_target_10_ismrm2015.sh deleted file mode 100755 index 57149bd..0000000 --- a/scripts/sac_auto_train_exp5_target_10_ismrm2015.sh +++ /dev/null @@ -1,75 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0001 # Learning rate -gamma=0.75 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_ISMRM2015TrainTarget10Exp5 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --target_bonus_factor=10 \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/sac_auto_train_exp5_target_1_fibercup.sh b/scripts/sac_auto_train_exp5_target_1_fibercup.sh deleted file mode 100755 index aa07f6f..0000000 --- a/scripts/sac_auto_train_exp5_target_1_fibercup.sh +++ /dev/null @@ 
-1,75 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.001 # Learning rate -gamma=0.5 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_FiberCupTrainTarget1Exp5 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --target_bonus_factor=1 \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/sac_auto_train_exp5_target_1_ismrm2015.sh b/scripts/sac_auto_train_exp5_target_1_ismrm2015.sh deleted file mode 100755 index 149981c..0000000 --- a/scripts/sac_auto_train_exp5_target_1_ismrm2015.sh +++ /dev/null @@ -1,75 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0001 # Learning rate -gamma=0.5 # Gamma for reward discounting - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_Auto_ISMRM2015TrainTarget1Exp5 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/sac_auto_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --target_bonus_factor=1 \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/sac_train_exp1_fibercup.sh b/scripts/sac_train_exp1_fibercup.sh deleted file mode 100755 index ec3365d..0000000 --- a/scripts/sac_train_exp1_fibercup.sh +++ /dev/null @@ -1,76 +0,0 @@ -#!/bin/bash - -set -e # exit if 
any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0005 # Learning rate -gamma=0.85 # Gamma for reward discounting -alpha=0.15 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_FiberCupTrainExp1 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/sac_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --alpha=${alpha} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/sac_train_exp1_ismrm2015.sh b/scripts/sac_train_exp1_ismrm2015.sh deleted file mode 100755 index 98fdf31..0000000 --- a/scripts/sac_train_exp1_ismrm2015.sh +++ /dev/null @@ -1,76 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0001 # Learning rate -gamma=0.75 # Gamma for reward discounting -alpha=0.1 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=2 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_ISMRM2015TrainExp1 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/sac_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --alpha=${alpha} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/sac_train_exp2_fibercup.sh b/scripts/sac_train_exp2_fibercup.sh deleted file mode 100755 index c453bd3..0000000 --- a/scripts/sac_train_exp2_fibercup.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/bin/bash - -set -e # exit if any command 
fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00005 # Learning rate -gamma=0.85 # Gamma for reward discounting -alpha=0.1 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=100 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_FiberCupTrainExp2 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/sac_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --alpha=${alpha} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --interface_seeding \ - --use_comet \ - --use_gpu \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/sac_train_exp2_ismrm2015.sh b/scripts/sac_train_exp2_ismrm2015.sh deleted file mode 100755 index ce0d539..0000000 --- a/scripts/sac_train_exp2_ismrm2015.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0001 # Learning rate -gamma=0.95 # Gamma for reward discounting -alpha=0.2 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=20 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_ISMRM2015TrainExp2 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/sac_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --alpha=${alpha} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --interface_seeding \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/sac_train_exp3_fibercup.sh b/scripts/sac_train_exp3_fibercup.sh deleted file mode 100755 index 854913f..0000000 --- a/scripts/sac_train_exp3_fibercup.sh +++ /dev/null @@ -1,77 +0,0 @@ -#!/bin/bash - -set 
-e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0005 # Learning rate -gamma=0.85 # Gamma for reward discounting -alpha=0.075 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_FiberCupTrainExp3 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/sac_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --alpha=${alpha} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --no_retrack \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/sac_train_exp3_ismrm2015.sh b/scripts/sac_train_exp3_ismrm2015.sh deleted file mode 100755 index c8f9e7a..0000000 --- a/scripts/sac_train_exp3_ismrm2015.sh +++ /dev/null @@ -1,77 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0005 # Learning rate -gamma=0.75 # Gamma for reward discounting -alpha=0.1 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=2 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=SAC_ISMRM2015TrainExp3 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/sac_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --alpha=${alpha} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --no_retrack \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/score_pft.sh b/scripts/score_pft.sh deleted file mode 100755 index 508e68c..0000000 --- a/scripts/score_pft.sh +++ /dev/null @@ -1,84 +0,0 @@ -STEP=0.75 -seeds=(1111 2222 3333 4444 5555) 
-BASE=${TRACK_TO_LEARN_DATA}/datasets/$DATASET - -DATASET=fibercup -EXPERIMENT=PFT_FiberCup075 -NPV=33 -# -for seed in "${seeds[@]}"; -do - - OUT_FOLDER=${TRACK_TO_LEARN_DATA}/experiments/${EXPERIMENT}/${seed}/ - OUT=pft_${DATASET}_${NPV}_${STEP}_${seed}.trk - - scripts/score.sh $OUT_FOLDER/$OUT $OUT_FOLDER/scoring_0.0_${DATASET}_${NPV} ${DATASET} - -done - -EXPERIMENT=PFT_FiberCupGM075 -NPV=33 - -for seed in "${seeds[@]}"; -do - - OUT_FOLDER=${TRACK_TO_LEARN_DATA}/experiments/${EXPERIMENT}/${seed}/ - OUT=pft_${DATASET}_${NPV}_${STEP}_${seed}.trk - - scripts/score.sh $OUT_FOLDER/$OUT $OUT_FOLDER/scoring_0.0_${DATASET}_${NPV} ${DATASET} -done - -DATASET=fibercup_flipped -EXPERIMENT=PFT_FiberCup075 -BASE=${TRACK_TO_LEARN_DATA}/datasets/$DATASET -NPV=33 - -for seed in "${seeds[@]}"; -do - - OUT_FOLDER=${TRACK_TO_LEARN_DATA}/experiments/${EXPERIMENT}/${seed}/ - OUT=pft_${DATASET}_${NPV}_${STEP}_${seed}.trk - - scripts/score.sh $OUT_FOLDER/$OUT $OUT_FOLDER/scoring_0.0_${DATASET}_${NPV} ${DATASET} -done - -DATASET=fibercup_flipped -EXPERIMENT=PFT_FiberCupGM075 -NPV=33 - -for seed in "${seeds[@]}"; -do - - OUT_FOLDER=${TRACK_TO_LEARN_DATA}/experiments/${EXPERIMENT}/${seed}/ - - OUT=pft_${DATASET}_${NPV}_${STEP}_${seed}.trk - scripts/score.sh $OUT_FOLDER/$OUT $OUT_FOLDER/scoring_0.0_${DATASET}_${NPV} ${DATASET} - -done - -DATASET=ismrm2015 -EXPERIMENT=PFT_ISMRM2015075 -BASE=${TRACK_TO_LEARN_DATA}/datasets/$DATASET -NPV=7 - -for seed in "${seeds[@]}"; -do - - OUT_FOLDER=${TRACK_TO_LEARN_DATA}/experiments/${EXPERIMENT}/${seed}/ - OUT=pft_${DATASET}_${NPV}_${STEP}_${seed}.trk - - scripts/score.sh $OUT_FOLDER/$OUT $OUT_FOLDER/scoring_0.0_${DATASET}_${NPV} ${DATASET} -done - -DATASET=ismrm2015 -EXPERIMENT=PFT_ISMRM2015GM075 - -for seed in "${seeds[@]}"; -do - - OUT_FOLDER=${TRACK_TO_LEARN_DATA}/experiments/${EXPERIMENT}/${seed}/ - - OUT=pft_${DATASET}_${NPV}_${STEP}_${seed}.trk - scripts/score.sh $OUT_FOLDER/$OUT $OUT_FOLDER/scoring_0.0_${DATASET}_${NPV} ${DATASET} -done - diff 
--git a/scripts/score_tractogram.py b/scripts/score_tractogram.py deleted file mode 100755 index 95bfb3b..0000000 --- a/scripts/score_tractogram.py +++ /dev/null @@ -1,162 +0,0 @@ -#!/usr/bin/env python - -from __future__ import division - -import argparse -import glob -import logging -import os - -from challenge_scoring.io.results import save_results -from challenge_scoring.metrics.scoring import score_submission -from challenge_scoring.utils.attributes import load_attribs -from challenge_scoring.utils.filenames import mkdir - - -DESCRIPTION = """ - Score a submission for the ISMRM 2015 tractography challenge. - - This is based on the ISMRM 2015 tractography challenge, see - http://www.tractometer.org/ismrm_2015_challenge/ - - This script scores a submission following the method presented in - https://doi.org/10.1101/084137 - - This method differs from the classical Tractometer approach - (https://doi.org/10.1016/j.media.2013.03.009). Instead of only using - masks to define the ground truth and classify streamlines in the - submission, bundles are extracted using a bundle recognition technique. - - More details are provided in the documentation here: - https://github.com/scilus/ismrm_2015_tractography_challenge_scoring - - The algorithm has 6 main steps: - 1: extract all streamlines that are valid, which are classified as - Valid Connections (VC) making up Valid Bundles (VB). - 2: remove streamlines shorter than an threshold based on the GT dataset - 3: cluster the remaining streamlines - 4: remove singletons - 5: assign each cluster to the closest ROIs pair. Those make up the - Invalid Connections (IC), grouped as Invalid Bundles (IB). - 6: streamlines that are neither in VC nor IC are classified as - No Connection (NC). -""" - - -def build_args_parser(): - p = argparse.ArgumentParser(description=DESCRIPTION, - formatter_class=argparse.RawTextHelpFormatter) - - p.add_argument('tractogram', metavar='TRACTS', - help='Tractogram file. 
File must be tck or trk.') - p.add_argument('base_dir', metavar='BASE_DIR', - help='base directory for scoring data.\n' - 'See www.tractometer.org/downloads/downloads/' - 'scoring_data_tractography_challenge.tar.gz') - p.add_argument('out_dir', metavar='OUT_DIR', - help='directory where to send score files') - p.add_argument('--out_tract_type', choices=['tck', 'trk'], default='tck', - help='output type for tracts') - p.add_argument('--save_full_vc', action='store_true', - help='save one file containing all VCs') - p.add_argument('--save_full_ic', action='store_true', - help='save one file containing all ICs') - p.add_argument('--save_full_nc', action='store_true', - help='save one file containing all NCs') - p.add_argument('--compute_ic_ib', action='store_true', - help="Segment rejected streamlines into NC + IC.\n" - "Else, all non-vb streamlines are stored as NC.") - p.add_argument('--save_ib', action='store_true', - help='save IB independently.') - p.add_argument('--save_vb', action='store_true', - help='save VB independently.') - p.add_argument('-f', dest='force', action='store_true', - required=False, help='overwrite output files') - p.add_argument('-v', dest='verbose', action='store_true', - required=False, help='produce verbose output') - - return p - - -def main(): - parser = build_args_parser() - args = parser.parse_args() - - tractogram = args.tractogram - base_dir = args.base_dir - out_dir = args.out_dir - - if args.verbose: - logging.basicConfig(level=logging.DEBUG) - - if not os.path.isfile(tractogram): - parser.error('"{0}" must be a file!'.format(tractogram)) - - _, ext = os.path.splitext(tractogram) - if not (ext == '.tck' or ext == '.trk'): - parser.error("Tractogram file should be a .tck or .trk, not {}" - .format(ext)) - - if not os.path.isdir(base_dir): - parser.error('"{0}" must be a directory!'.format(base_dir)) - - scores_dir = mkdir(os.path.join(out_dir, "scores")) - scores_filename = os.path.join(scores_dir, - os.path.splitext( - 
os.path.basename(tractogram))[0] - + ".json") - - score_exists = False - segmented_files = [] - - # Check if some results already exist - if os.path.isfile(scores_filename): - score_exists = True - - segments_dir = '' - base_name = '' - - if args.save_full_vc or args.save_full_ic or args.save_ib or args.save_vb \ - or args.save_full_nc: - segments_dir = mkdir(os.path.join(out_dir, "segmented")) - base_name = os.path.splitext(os.path.basename(tractogram))[0] - - segmented_files = glob.glob(os.path.join( - segments_dir, '{}*.{}'.format(base_name, args.out_tract_type))) - - if score_exists or len(segmented_files): - if not args.force: - parser.error( - 'Scores file or segmented files already exist.' - '\nPlease remove or use -f to overwrite.') - else: - if score_exists: - os.remove(scores_filename) - for f in segmented_files: - os.remove(f) - - # Basic bundle attributes should be stored in the scoring data directory. - gt_bundles_attribs_path = os.path.join(args.base_dir, - 'gt_bundles_attributes.json') - if not os.path.isfile(gt_bundles_attribs_path): - parser.error('Missing the "gt_bundles_attributes.json" file in the ' - 'provided base directory.') - - basic_bundles_attribs = load_attribs(gt_bundles_attribs_path) - - scores = score_submission(tractogram, base_dir, basic_bundles_attribs, - args.save_full_vc, - args.save_full_ic, - args.save_full_nc, - args.compute_ic_ib, - args.save_ib, args.save_vb, - segments_dir, base_name, - args.out_tract_type, args.verbose) - - if scores is not None: - print("Saving results in {}".format(scores_filename)) - save_results(scores_filename, scores) - - -if __name__ == "__main__": - main() diff --git a/scripts/td3_train_exp1_fibercup.sh b/scripts/td3_train_exp1_fibercup.sh deleted file mode 100755 index fb58103..0000000 --- a/scripts/td3_train_exp1_fibercup.sh +++ /dev/null @@ -1,76 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ 
- -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00001 # Learning rate -gamma=0.75 # Gamma for reward discounting -action_std=0.25 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=60 # Maximum angle for streamline curvature - -EXPERIMENT=TD3_FiberCupTrainExp1 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/td3_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/td3_train_exp1_ismrm2015.sh b/scripts/td3_train_exp1_ismrm2015.sh deleted file mode 100755 index b9d5385..0000000 --- a/scripts/td3_train_exp1_ismrm2015.sh +++ /dev/null @@ -1,76 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00001 # Learning rate -gamma=0.85 # Gamma for reward discounting -action_std=0.4 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=2 # Seed per voxel -theta=60 # Maximum angle for streamline curvature - -EXPERIMENT=TD3_ISMRM2015TrainExp1 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/td3_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/td3_train_exp2_fibercup.sh b/scripts/td3_train_exp2_fibercup.sh deleted file mode 100755 index 21f8445..0000000 --- a/scripts/td3_train_exp2_fibercup.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/bin/bash - -set -e # exit 
if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00001 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.2 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=100 # Seed per voxel -theta=60 # Maximum angle for streamline curvature - -EXPERIMENT=TD3_FiberCupTrainExp2 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/td3_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --interface_seeding \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/td3_train_exp2_ismrm2015.sh b/scripts/td3_train_exp2_ismrm2015.sh deleted file mode 100755 index 5604822..0000000 --- a/scripts/td3_train_exp2_ismrm2015.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.00005 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.2 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=20 # Seed per voxel -theta=60 # Maximum angle for streamline curvature - -EXPERIMENT=TD3_ISMRM2015TrainExp2 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/td3_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --interface_seeding \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/td3_train_exp3_fibercup.sh b/scripts/td3_train_exp3_fibercup.sh deleted file mode 100755 index f741e06..0000000 --- a/scripts/td3_train_exp3_fibercup.sh +++ /dev/null @@ -1,77 +0,0 @@ 
-#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0005 # Learning rate -gamma=0.75 # Gamma for reward discounting -action_std=0.2 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=60 # Maximum angle for streamline curvature - -EXPERIMENT=TD3_FiberCupTrainExp3 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/td3_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --no_retrack \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/td3_train_exp3_ismrm2015.sh b/scripts/td3_train_exp3_ismrm2015.sh deleted file mode 100755 index d015086..0000000 --- a/scripts/td3_train_exp3_ismrm2015.sh +++ /dev/null @@ -1,77 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0005 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.25 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=2 # Seed per voxel -theta=60 # Maximum angle for streamline curvature - -EXPERIMENT=TD3_ISMRM2015TrainExp3 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/td3_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --no_retrack \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/track.sh b/scripts/track.sh deleted file mode 100755 index 2a82886..0000000 --- a/scripts/track.sh +++ /dev/null @@ -1,68 +0,0 @@ -#!/usr/bin/env bash - -set -e - -# This should point to your 
dataset folder - -n_actor=10000 -npv=10 -min_length=20 -max_length=200 - -EXPERIMENT=$1 -ID=$2 - -# SEED=1111 -SUBJECT_ID=hcp_100206 -prob=0.1 - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA} - -reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/anat/${SUBJECT_ID}_t1.nii.gz -signal_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/hcp_100206_signal.nii.gz -peaks_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/fodfs/hcp_100206_peaks.nii.gz -in_seed=$DATASET_FOLDER/datasets/${SUBJECT_ID}/maps/hcp_100206_interface.nii.gz -in_mask=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/hcp_100206_wm.nii.gz -target_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/hcp_100206_gm.nii.gz -exclude_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/hcp_100206_csf.nii.gz - -seeds=(1111 2222 3333 4444 5555) -for SEED in "${seeds[@]}" -do - - EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments - DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$SEED" - out_tractogram="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$SEED"/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk - - python ttl_track.py \ - "$DEST_FOLDER" \ - "$EXPERIMENT" \ - "$ID" \ - "${signal_file}" \ - "${peaks_file}" \ - "${in_seed}" \ - "${in_mask}" \ - "${target_file}" \ - "${exclude_file}" \ - "${SUBJECT_ID}" \ - "${reference_file}" \ - $DEST_FOLDER/model \ - $DEST_FOLDER/model/hyperparameters.json \ - --out_tractogram="${out_tractogram}" \ - --prob="${prob}" \ - --npv="${npv}" \ - --n_actor="${n_actor}" \ - --min_length="$min_length" \ - --max_length="$max_length" \ - --use_gpu \ - --fa_map="$DATASET_FOLDER"/datasets/${SUBJECT_ID}/dti/"${SUBJECT_ID}"_fa.nii.gz \ - --interface_seeding \ - --remove_invalid_streamlines - - validation_folder=$DEST_FOLDER/tracking_"${prob}"_"${SUBJECT_ID}" - - mkdir -p $validation_folder - - mv $out_tractogram $validation_folder/ -done - diff --git a/scripts/track_dataset.sh b/scripts/track_dataset.sh deleted file mode 100755 index e7c009a..0000000 --- a/scripts/track_dataset.sh +++ /dev/null @@ -1,53 +0,0 
@@ -#!/usr/bin/env bash - -set -e - -# This should point to your dataset folder -DATASET_FOLDER=${TRACK_TO_LEARN_DATA} - -# Data params -dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - -n_actor=50000 -npv=1 -min_length=10 -max_length=200 - -EXPERIMENT=$1 -ID=$2 - -SEED=1111 -SUBJECT_ID=hcp_100206 -prob=0.2 - -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$SEED" - -dataset_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -reference_file=$DATASET_FOLDER/datasets/${SUBJECT_ID}/masks/${SUBJECT_ID}_wm.nii.gz - -python ttl_validation.py \ - "$DEST_FOLDER" \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - $DEST_FOLDER/model \ - $DEST_FOLDER/model/hyperparameters.json \ - --prob="${prob}" \ - --npv="${npv}" \ - --n_actor="${n_actor}" \ - --min_length="$min_length" \ - --max_length="$max_length" \ - --use_gpu \ - --fa_map="$DATASET_FOLDER"/datasets/${SUBJECT_ID}/dti/"${SUBJECT_ID}"_fa.nii.gz \ - --remove_invalid_streamlines - -validation_folder=$DEST_FOLDER/tracking_"${prob}"_"${SUBJECT_ID}" - -mkdir -p $validation_folder - -mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ diff --git a/scripts/track_exp1.sh b/scripts/track_exp1.sh deleted file mode 100755 index 95d62de..0000000 --- a/scripts/track_exp1.sh +++ /dev/null @@ -1,32 +0,0 @@ -# FiberCup -> FiberCup/Flipped -./scripts/rl_test_fibercup.sh VPG_FiberCupTrainExp1 2023-01-25-11_22_09 -./scripts/rl_test_fibercup.sh A2C_FiberCupTrainExp1 2023-01-31-10_25_53 -./scripts/rl_test_fibercup.sh ACKTR_FiberCupTrainExp1 2023-01-30-09_41_22 -./scripts/rl_test_fibercup.sh TRPO_FiberCupTrainExp1 2023-01-30-09_43_06 -./scripts/rl_test_fibercup.sh PPO_FiberCupTrainExp1 2023-02-06-08_14_45 -./scripts/rl_test_fibercup.sh DDPG_FiberCupTrainExp1 
2023-02-03-09_23_37 -./scripts/rl_test_fibercup.sh TD3_FiberCupTrainExp1 2023-03-21-09_24_20 -./scripts/rl_test_fibercup.sh SAC_FiberCupTrainExp1 2023-02-03-09_35_53 -./scripts/rl_test_fibercup.sh SAC_Auto_FiberCupTrainExp1 2023-02-04-08_22_54 - -# ISMRM2015TrainExp1 -> ISMRM2015TrainExp1 -./scripts/rl_test_ismrm2015.sh VPG_ISMRM2015TrainExp1 2023-02-04-15_12_33 -./scripts/rl_test_ismrm2015.sh A2C_ISMRM2015TrainExp1 2023-02-04-17_52_22 -./scripts/rl_test_ismrm2015.sh TRPO_ISMRM2015TrainExp1 2023-02-14-11_46_30 -./scripts/rl_test_ismrm2015.sh ACKTR_ISMRM2015TrainExp1 2023-02-05-11_23_31 -./scripts/rl_test_ismrm2015.sh PPO_ISMRM2015TrainExp1 2023-02-21-10_20_31 -./scripts/rl_test_ismrm2015.sh DDPG_ISMRM2015TrainExp1 2023-02-06-08_18_44 -./scripts/rl_test_ismrm2015.sh TD3_ISMRM2015TrainExp1 2023-02-05-17_02_16 -./scripts/rl_test_ismrm2015.sh SAC_ISMRM2015TrainExp1 2023-02-06-08_19_31 -./scripts/rl_test_ismrm2015.sh SAC_Auto_ISMRM2015TrainExp1 2023-02-07-11_16_14 - -# ISMRM2015TrainExp1 -> Tractoinferno -./scripts/rl_test_tractoinferno.sh VPG_ISMRM2015TrainExp1 2023-02-04-15_12_33 -./scripts/rl_test_tractoinferno.sh A2C_ISMRM2015TrainExp1 2023-02-04-17_52_22 -./scripts/rl_test_tractoinferno.sh TRPO_ISMRM2015TrainExp1 2023-02-14-11_46_30 -./scripts/rl_test_tractoinferno.sh ACKTR_ISMRM2015TrainExp1 2023-02-05-11_23_31 -./scripts/rl_test_tractoinferno.sh PPO_ISMRM2015TrainExp1 2023-02-21-10_20_31 -./scripts/rl_test_tractoinferno.sh DDPG_ISMRM2015TrainExp1 2023-02-06-08_18_44 -./scripts/rl_test_tractoinferno.sh TD3_ISMRM2015TrainExp1 2023-02-05-17_02_16 -./scripts/rl_test_tractoinferno.sh SAC_ISMRM2015TrainExp1 2023-02-06-08_19_31 -./scripts/rl_test_tractoinferno.sh SAC_Auto_ISMRM2015TrainExp1 2023-02-07-11_16_14 diff --git a/scripts/track_exp2.sh b/scripts/track_exp2.sh deleted file mode 100755 index 27a23c5..0000000 --- a/scripts/track_exp2.sh +++ /dev/null @@ -1,32 +0,0 @@ -# FiberCup -> FiberCup/Flipped -./scripts/rl_test_fibercup_exp2.sh VPG_FiberCupTrainExp2 
2023-02-14-11_26_41 -./scripts/rl_test_fibercup_exp2.sh A2C_FiberCupTrainExp2 2023-02-14-11_26_41 -./scripts/rl_test_fibercup_exp2.sh ACKTR_FiberCupTrainExp2 2023-02-14-11_26_41 -./scripts/rl_test_fibercup_exp2.sh TRPO_FiberCupTrainExp2 2023-02-14-14_53_43 -./scripts/rl_test_fibercup_exp2.sh PPO_FiberCupTrainExp2 2023-03-01-16_02_18 -./scripts/rl_test_fibercup_exp2.sh DDPG_FiberCupTrainExp2 2023-02-14-22_05_49 -./scripts/rl_test_fibercup_exp2.sh TD3_FiberCupTrainExp2 2023-02-15-01_54_07 -./scripts/rl_test_fibercup_exp2.sh SAC_FiberCupTrainExp2 2023-02-15-23_27_33 -./scripts/rl_test_fibercup_exp2.sh SAC_Auto_FiberCupTrainExp2 2023-02-16-04_22_12 - -# ISMRM2015 -> ISMRM2015 -./scripts/rl_test_ismrm2015_exp2.sh VPG_ISMRM2015TrainExp2 2023-02-14-11_19_01 -./scripts/rl_test_ismrm2015_exp2.sh A2C_ISMRM2015TrainExp2 2023-02-14-11_19_01 -./scripts/rl_test_ismrm2015_exp2.sh ACKTR_ISMRM2015TrainExp2 2023-02-21-09_58_36 -./scripts/rl_test_ismrm2015_exp2.sh TRPO_ISMRM2015TrainExp2 2023-02-14-13_11_18 -./scripts/rl_test_ismrm2015_exp2.sh PPO_ISMRM2015TrainExp2 2023-03-06-12_22_46 -./scripts/rl_test_ismrm2015_exp2.sh DDPG_ISMRM2015TrainExp2 2023-02-15-04_15_36 -./scripts/rl_test_ismrm2015_exp2.sh TD3_ISMRM2015TrainExp2 2023-02-15-12_59_38 -./scripts/rl_test_ismrm2015_exp2.sh SAC_ISMRM2015TrainExp2 2023-02-16-10_04_08 -./scripts/rl_test_ismrm2015_exp2.sh SAC_Auto_ISMRM2015TrainExp2 2023-02-21-17_27_47 - -# ISMRM2015 -> HCP -./scripts/rl_test_hcp_exp2.sh VPG_ISMRM2015TrainExp2 2023-02-14-11_19_01 -./scripts/rl_test_hcp_exp2.sh A2C_ISMRM2015TrainExp2 2023-02-14-11_19_01 -./scripts/rl_test_hcp_exp2.sh ACKTR_ISMRM2015TrainExp2 2023-02-21-09_58_36 -./scripts/rl_test_hcp_exp2.sh TRPO_ISMRM2015TrainExp2 2023-02-14-13_11_18 -./scripts/rl_test_hcp_exp2.sh PPO_ISMRM2015TrainExp2 2023-03-06-12_22_46 -./scripts/rl_test_hcp_exp2.sh DDPG_ISMRM2015TrainExp2 2023-02-15-04_15_36 -./scripts/rl_test_hcp_exp2.sh TD3_ISMRM2015TrainExp2 2023-02-15-12_59_38 -./scripts/rl_test_hcp_exp2.sh 
SAC_ISMRM2015TrainExp2 2023-02-16-10_04_08 -./scripts/rl_test_hcp_exp2.sh SAC_Auto_ISMRM2015TrainExp2 2023-02-21-17_27_47 diff --git a/scripts/track_exp3.sh b/scripts/track_exp3.sh deleted file mode 100755 index e0bb80a..0000000 --- a/scripts/track_exp3.sh +++ /dev/null @@ -1,33 +0,0 @@ -# FiberCup -> FiberCup/Flipped -./scripts/rl_test_fibercup_exp3.sh VPG_FiberCupTrainExp3 2023-03-07-11_27_13 -./scripts/rl_test_fibercup_exp3.sh A2C_FiberCupTrainExp3 2023-03-07-11_27_13 -./scripts/rl_test_fibercup_exp3.sh ACKTR_FiberCupTrainExp3 2023-03-07-17_36_14 -./scripts/rl_test_fibercup_exp3.sh TRPO_FiberCupTrainExp3 2023-03-07-17_46_47 -./scripts/rl_test_fibercup_exp3.sh PPO_FiberCupTrainExp3 2023-03-11-08_25_41 -./scripts/rl_test_fibercup_exp3.sh DDPG_FiberCupTrainExp3 2023-03-08-05_36_48 -./scripts/rl_test_fibercup_exp3.sh TD3_FiberCupTrainExp3 2023-03-08-22_49_09 -./scripts/rl_test_fibercup_exp3.sh SAC_FiberCupTrainExp3 2023-03-09-02_27_37 -./scripts/rl_test_fibercup_exp3.sh SAC_Auto_FiberCupTrainExp3 2023-03-11-08_28_51 - -# ISMRM2015TrainExp3 > ISMRM2015TrainExp3 -./scripts/rl_test_ismrm2015_exp3.sh VPG_ISMRM2015TrainExp3 2023-03-08-09_02_50 -./scripts/rl_test_ismrm2015_exp3.sh A2C_ISMRM2015TrainExp3 2023-03-08-09_02_50 -./scripts/rl_test_ismrm2015_exp3.sh TRPO_ISMRM2015TrainExp3 2023-03-08-18_30_36 -./scripts/rl_test_ismrm2015_exp3.sh ACKTR_ISMRM2015TrainExp3 2023-03-12-09_56_26 -./scripts/rl_test_ismrm2015_exp3.sh PPO_ISMRM2015TrainExp3 2023-03-08-23_46_07 -./scripts/rl_test_ismrm2015_exp3.sh DDPG_ISMRM2015TrainExp3 2023-03-09-23_40_20 -./scripts/rl_test_ismrm2015_exp3.sh TD3_ISMRM2015TrainExp3 2023-03-10-20_35_42 -./scripts/rl_test_ismrm2015_exp3.sh SAC_ISMRM2015TrainExp3 2023-03-10-21_19_29 -./scripts/rl_test_ismrm2015_exp3.sh SAC_Auto_ISMRM2015TrainExp3 2023-03-12-10_07_05 - -# ISMRM2015TrainExp3 > HCP -./scripts/rl_test_hcp_exp3.sh VPG_ISMRM2015TrainExp3 2023-03-08-09_02_50 -./scripts/rl_test_hcp_exp3.sh A2C_ISMRM2015TrainExp3 2023-03-08-09_02_50 
-./scripts/rl_test_hcp_exp3.sh TRPO_ISMRM2015TrainExp3 2023-03-08-18_30_36 -./scripts/rl_test_hcp_exp3.sh ACKTR_ISMRM2015TrainExp3 2023-03-12-09_56_26 -./scripts/rl_test_hcp_exp3.sh PPO_ISMRM2015TrainExp3 2023-03-08-23_46_07 -./scripts/rl_test_hcp_exp3.sh DDPG_ISMRM2015TrainExp3 2023-03-09-23_40_20 -./scripts/rl_test_hcp_exp3.sh TD3_ISMRM2015TrainExp3 2023-03-10-20_35_42 -./scripts/rl_test_hcp_exp3.sh SAC_ISMRM2015TrainExp3 2023-03-10-21_19_29 -./scripts/rl_test_hcp_exp3.sh SAC_Auto_ISMRM2015TrainExp3 2023-03-12-10_07_05 - diff --git a/scripts/track_exp4.sh b/scripts/track_exp4.sh deleted file mode 100755 index 3b3efe5..0000000 --- a/scripts/track_exp4.sh +++ /dev/null @@ -1,11 +0,0 @@ -# FiberCup -> FiberCup/Flipped -./scripts/rl_test_fibercup.sh SAC_Auto_FiberCup0dirsTrainExp4 2023-03-20-09_30_25 -./scripts/rl_test_fibercup.sh SAC_Auto_FiberCup2dirsTrainExp4 2023-03-20-14_11_25 -./scripts/rl_test_fibercup_nowm.sh SAC_Auto_FiberCupNoWMTrainExp4 2023-03-22-07_38_30 -./scripts/rl_test_fibercup_raw.sh SAC_Auto_FiberCupRawTrainExp4 2023-03-22-07_39_28 - -# ISMRM2015 -> ISMRM2015 -./scripts/rl_test_ismrm2015.sh SAC_Auto_ISMRM20150dirsTrainExp4 2023-03-19-21_13_07 -./scripts/rl_test_ismrm2015.sh SAC_Auto_ISMRM20152dirsTrainExp4 2023-03-19-21_13_09 -./scripts/rl_test_ismrm2015_nowm.sh SAC_Auto_ISMRM2015NoWMTrainExp4 2023-03-19-21_07_44 -./scripts/rl_test_ismrm2015_raw.sh SAC_Auto_ISMRM2015RawTrainExp4 2023-03-19-21_07_54 diff --git a/scripts/track_exp5.sh b/scripts/track_exp5.sh deleted file mode 100644 index ac98c71..0000000 --- a/scripts/track_exp5.sh +++ /dev/null @@ -1,21 +0,0 @@ -# FiberCup -./scripts/rl_test_fibercup.sh SAC_Auto_FiberCupTrainTarget1Exp5 35975788 -./scripts/rl_test_fibercup.sh SAC_Auto_FiberCupTrainTarget10Exp5 35975793 -./scripts/rl_test_fibercup.sh SAC_Auto_FiberCupTrainTarget100Exp5 35987143 - -./scripts/rl_test_fibercup.sh SAC_Auto_FiberCupTrainLength0.01Exp5 35975610 -./scripts/rl_test_fibercup.sh SAC_Auto_FiberCupTrainLength0.1Exp5 35975615 
-./scripts/rl_test_fibercup.sh SAC_Auto_FiberCupTrainLength0.5Exp5 35975620 -./scripts/rl_test_fibercup.sh SAC_Auto_FiberCupTrainLength1Exp5 35975625 -./scripts/rl_test_fibercup.sh SAC_Auto_FiberCupTrainLength5Exp5 35975650 - -# ISMRM2015 -./scripts/rl_test_fibercup.sh SAC_Auto_ISMRM2015TrainTarget1Exp5 35937624 -./scripts/rl_test_fibercup.sh SAC_Auto_ISMRM2015TrainTarget10Exp5 35935713 -./scripts/rl_test_fibercup.sh SAC_Auto_ISMRM2015TrainTarget100Exp5 35938064 - -./scripts/rl_test_ismrm2015.sh SAC_Auto_ISMRM2015TrainLength0.01Exp5 63681654 -./scripts/rl_test_ismrm2015.sh SAC_Auto_ISMRM2015TrainLength0.1Exp5 63725307 -./scripts/rl_test_ismrm2015.sh SAC_Auto_ISMRM2015TrainLength0.5Exp5 63726565 -./scripts/rl_test_ismrm2015.sh SAC_Auto_ISMRM2015TrainLength1Exp5 35975625 -./scripts/rl_test_ismrm2015.sh SAC_Auto_ISMRM2015TrainLength5Exp5 35975650 diff --git a/scripts/track_pft.sh b/scripts/track_pft.sh deleted file mode 100755 index b86fcb4..0000000 --- a/scripts/track_pft.sh +++ /dev/null @@ -1,469 +0,0 @@ - -# This should point to your dataset folder -DATASET_FOLDER=${TRACK_TO_LEARN_DATA} - -# Should be relatively stable -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments - -step_size=0.75 # Step size (in mm) - -npv=33 -min_length=20 -max_length=200 - -STEP=0.75 -MIN=20 -MAX=200 -seeds=(1111 2222 3333 4444 5555) - -# SUBJECT_ID=fibercup -# EXPERIMENT=PFT_FiberCupExp1 -# npv=33 -# -# ID=$(date +"%F-%H_%M_%S") -# -# for seed in "${seeds[@]}"; -# do -# -# OUT=tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk -# -# BASE=${DATASET_FOLDER}/datasets/${SUBJECT_ID} -# SCORING_DATA=${BASE}/scoring_data -# DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$seed" -# mkdir -p $DEST_FOLDER -# -# scil_compute_pft.py \ -# $BASE/fodfs/${SUBJECT_ID}_fodf.nii.gz \ -# $BASE/masks/${SUBJECT_ID}_wm.nii.gz \ -# $BASE/maps/map_include.nii.gz \ -# $BASE/maps/map_exclude.nii.gz \ -# $DEST_FOLDER/$OUT \ -# --npv $npv --min_length $MIN \ -# --max_length $MAX --step $STEP \ -# --seed 
$seed -f -v -# -# validation_folder=$DEST_FOLDER/scoring_"${SUBJECT_ID}"_${npv} -# -# mkdir -p $validation_folder -# -# mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ -# -# python scripts/score_tractogram.py \ -# $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ -# "$SCORING_DATA" \ -# $validation_folder \ -# --compute_ic_ib \ -# --save_full_vc \ -# --save_full_ic \ -# --save_full_nc \ -# --save_ib \ -# --save_vb -f -v -# -# done -# -# EXPERIMENT=PFT_FiberCupExp2 -# npv=300 -# -# ID=$(date +"%F-%H_%M_%S") -# -# for seed in "${seeds[@]}"; -# do -# -# OUT=tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk -# -# BASE=${DATASET_FOLDER}/datasets/${SUBJECT_ID} -# SCORING_DATA=${BASE}/scoring_data -# DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$seed" -# mkdir -p $DEST_FOLDER -# -# scil_compute_pft.py \ -# $BASE/fodfs/${SUBJECT_ID}_fodf.nii.gz \ -# $BASE/maps/interface.nii.gz \ -# $BASE/maps/map_include.nii.gz \ -# $BASE/maps/map_exclude.nii.gz \ -# $DEST_FOLDER/$OUT \ -# --npv $npv --min_length $MIN \ -# --max_length $MAX --step $STEP \ -# --seed $seed -f -v -# -# validation_folder=$DEST_FOLDER/scoring_"${SUBJECT_ID}"_${npv} -# -# mkdir -p $validation_folder -# -# mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ -# -# python scripts/score_tractogram.py \ -# $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ -# "$SCORING_DATA" \ -# $validation_folder \ -# --compute_ic_ib \ -# --save_full_vc \ -# --save_full_ic \ -# --save_full_nc \ -# --save_ib \ -# --save_vb -f -v -# -# done -# -# SUBJECT_ID=fibercup_flipped -# EXPERIMENT=PFT_FiberCupExp1 -# BASE=${TRACK_TO_LEARN_DATA}/datasets/$SUBJECT_ID -# npv=33 -# -# ID=$(date +"%F-%H_%M_%S") -# -# for seed in "${seeds[@]}"; -# do -# -# OUT=tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk -# -# BASE=${DATASET_FOLDER}/datasets/${SUBJECT_ID} -# SCORING_DATA=${BASE}/scoring_data -# 
DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$seed" -# mkdir -p $DEST_FOLDER -# -# scil_compute_pft.py \ -# $BASE/fodfs/${SUBJECT_ID}_fodf.nii.gz \ -# $BASE/masks/${SUBJECT_ID}_wm.nii.gz \ -# $BASE/maps/map_include.nii.gz \ -# $BASE/maps/map_exclude.nii.gz \ -# $DEST_FOLDER/$OUT \ -# --npv $npv --min_length $MIN \ -# --max_length $MAX --step $STEP \ -# --seed $seed -f -v -# -# validation_folder=$DEST_FOLDER/scoring_"${SUBJECT_ID}"_${npv} -# -# mkdir -p $validation_folder -# -# mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ -# -# python scripts/score_tractogram.py \ -# $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ -# "$SCORING_DATA" \ -# $validation_folder \ -# --compute_ic_ib \ -# --save_full_vc \ -# --save_full_ic \ -# --save_full_nc \ -# --save_ib \ -# --save_vb -f -v -# -# done -# -# SUBJECT_ID=fibercup_flipped -# EXPERIMENT=PFT_FiberCupExp2 -# npv=300 -# -# ID=$(date +"%F-%H_%M_%S") -# -# for seed in "${seeds[@]}"; -# do -# -# OUT=tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk -# -# BASE=${DATASET_FOLDER}/datasets/${SUBJECT_ID} -# SCORING_DATA=${BASE}/scoring_data -# DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$seed" -# mkdir -p $DEST_FOLDER -# -# scil_compute_pft.py \ -# $BASE/fodfs/${SUBJECT_ID}_fodf.nii.gz \ -# $BASE/maps/interface.nii.gz \ -# $BASE/maps/map_include.nii.gz \ -# $BASE/maps/map_exclude.nii.gz \ -# $DEST_FOLDER/$OUT \ -# --npv $npv --min_length $MIN \ -# --max_length $MAX --step $STEP \ -# --seed $seed -f -v -# -# validation_folder=$DEST_FOLDER/scoring_"${SUBJECT_ID}"_${npv} -# -# mkdir -p $validation_folder -# -# mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ -# -# python scripts/score_tractogram.py \ -# $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ -# "$SCORING_DATA" \ -# $validation_folder \ -# --compute_ic_ib \ -# --save_full_vc \ -# --save_full_ic \ -# 
--save_full_nc \ -# --save_ib \ -# --save_vb -f -v -# -# done -# -# SUBJECT_ID=ismrm2015 -# EXPERIMENT=PFT_ISMRM2015Exp1 -# BASE=${TRACK_TO_LEARN_DATA}/datasets/$SUBJECT_ID -# npv=7 -# -# ID=$(date +"%F-%H_%M_%S") -# -# for seed in "${seeds[@]}"; -# do -# -# OUT=tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk -# -# BASE=${DATASET_FOLDER}/datasets/${SUBJECT_ID} -# SCORING_DATA=${BASE}/scoring_data -# DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$seed" -# mkdir -p $DEST_FOLDER -# -# scil_compute_pft.py \ -# $BASE/fodfs/${SUBJECT_ID}_fodf.nii.gz \ -# $BASE/masks/${SUBJECT_ID}_wm.nii.gz \ -# $BASE/maps/map_include.nii.gz \ -# $BASE/maps/map_exclude.nii.gz \ -# $DEST_FOLDER/$OUT \ -# --npv $npv --min_length $MIN \ -# --max_length $MAX --step $STEP \ -# --seed $seed -f -v -# -# validation_folder=$DEST_FOLDER/scoring_"${SUBJECT_ID}"_${npv} -# -# mkdir -p $validation_folder -# -# mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ -# -# python scripts/score_tractogram.py \ -# $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ -# "$SCORING_DATA" \ -# $validation_folder \ -# --compute_ic_ib \ -# --save_full_vc \ -# --save_full_ic \ -# --save_full_nc \ -# --save_ib \ -# --save_vb -f -v -# -# done -# -# SUBJECT_ID=ismrm2015 -# EXPERIMENT=PFT_ISMRM2015Exp2 -# npv=60 -# -# ID=$(date +"%F-%H_%M_%S") -# -# for seed in "${seeds[@]}"; -# do -# -# OUT=tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk -# -# BASE=${DATASET_FOLDER}/datasets/${SUBJECT_ID} -# SCORING_DATA=${BASE}/scoring_data -# DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$seed" -# mkdir -p $DEST_FOLDER -# -# scil_compute_pft.py \ -# $BASE/fodfs/${SUBJECT_ID}_fodf.nii.gz \ -# $BASE/maps/interface.nii.gz \ -# $BASE/maps/map_include.nii.gz \ -# $BASE/maps/map_exclude.nii.gz \ -# $DEST_FOLDER/$OUT \ -# --npv $npv --min_length $MIN \ -# --max_length $MAX --step $STEP \ -# --seed $seed -f -v -# -# 
validation_folder=$DEST_FOLDER/scoring_"${SUBJECT_ID}"_${npv} -# -# mkdir -p $validation_folder -# -# mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ -# -# python scripts/score_tractogram.py \ -# $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ -# "$SCORING_DATA" \ -# $validation_folder \ -# --compute_ic_ib \ -# --save_full_vc \ -# --save_full_ic \ -# --save_full_nc \ -# --save_ib \ -# --save_vb -f -v -# -# done -# -# STEP=0.375 -# SUBJECT_ID=hcp_100206 -# EXPERIMENT=PFT_ISMRM2015Exp1 -# BASE=${TRACK_TO_LEARN_DATA}/datasets/$SUBJECT_ID -# npv=2 -# -# ID=2023-02-24-17_45_03 -# -# for seed in "${seeds[@]}"; -# do -# -# OUT=tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk -# -# BASE=${DATASET_FOLDER}/datasets/${SUBJECT_ID} -# SCORING_DATA=${BASE}/scoring_data -# DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$seed" -# mkdir -p $DEST_FOLDER -# -# scil_compute_pft.py \ -# $BASE/fodfs/${SUBJECT_ID}_fodf.nii.gz \ -# $BASE/masks/${SUBJECT_ID}_wm.nii.gz \ -# $BASE/maps/${SUBJECT_ID}_map_include.nii.gz \ -# $BASE/maps/${SUBJECT_ID}_map_exclude.nii.gz \ -# $DEST_FOLDER/$OUT \ -# --npv $npv --min_length $MIN \ -# --max_length $MAX --step $STEP \ -# --seed $seed -f -v -# -# validation_folder=$DEST_FOLDER/scoring_"${SUBJECT_ID}"_${npv} -# -# mkdir -p $validation_folder -# -# mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ -# -# scil_recognize_multi_bundles.py \ -# $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ -# $SCORING_DATA/config/config_ind.json \ -# $SCORING_DATA/atlas/*/ \ -# $SCORING_DATA/output0GenericAffine.mat \ -# --out $validation_folder/voting_results \ -# -f --log_level DEBUG --multi_parameters 27 \ -# --minimal_vote 0.4 --tractogram_clustering 8 10 12 \ -# --processes 4 --seeds 0 -# -# done -# -# SUBJECT_ID=hcp_100206 -# EXPERIMENT=PFT_ISMRM2015Exp2 -# npv=10 -# -# ID=2023-02-24-21_08_25 -# -# for seed in 
"${seeds[@]}"; -# do -# -# OUT=tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk -# -# BASE=${DATASET_FOLDER}/datasets/${SUBJECT_ID} -# SCORING_DATA=${BASE}/scoring_data -# DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$seed" -# mkdir -p $DEST_FOLDER -# -# scil_compute_pft.py \ -# $BASE/fodfs/${SUBJECT_ID}_fodf.nii.gz \ -# $BASE/maps/${SUBJECT_ID}_interface.nii.gz \ -# $BASE/maps/${SUBJECT_ID}_map_include.nii.gz \ -# $BASE/maps/${SUBJECT_ID}_map_exclude.nii.gz \ -# $DEST_FOLDER/$OUT \ -# --npv $npv --min_length $MIN \ -# --max_length $MAX --step $STEP \ -# --seed $seed -f -v -# -# validation_folder=$DEST_FOLDER/scoring_"${SUBJECT_ID}"_${npv} -# -# mkdir -p $validation_folder -# -# mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ -# -# scil_recognize_multi_bundles.py \ -# $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ -# $SCORING_DATA/config/config_ind.json \ -# $SCORING_DATA/atlas/*/ \ -# $SCORING_DATA/output0GenericAffine.mat \ -# --out $validation_folder/voting_results \ -# -f --log_level DEBUG --multi_parameters 27 \ -# --minimal_vote 0.4 --tractogram_clustering 8 10 12 \ -# --processes 4 --seeds 0 -# -# done - -STEP=0.375 -SUBJECT_ID=sub-1006 -EXPERIMENT=PFT_ISMRM2015Exp1 -BASE=${TRACK_TO_LEARN_DATA}/datasets/tractoinferno/$SUBJECT_ID -npv=10 - -ID=2023-02-24-17_45_03 - -for seed in "${seeds[@]}"; -do - - OUT=tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk - - BASE=${DATASET_FOLDER}/datasets/tractoinferno/${SUBJECT_ID} - SCORING_DATA=${BASE}/scoring_data - DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$seed" - mkdir -p $DEST_FOLDER - - scil_compute_pft.py \ - $BASE/fodf/${SUBJECT_ID}__fodf.nii.gz \ - $BASE/mask/${SUBJECT_ID}__mask_wm.nii.gz \ - $BASE/maps/${SUBJECT_ID}__map_include.nii.gz \ - $BASE/maps/${SUBJECT_ID}__map_exclude.nii.gz \ - $DEST_FOLDER/$OUT \ - --npv $npv --min_length $MIN \ - --max_length $MAX --step $STEP \ - --seed $seed -f -v - - 
validation_folder=$DEST_FOLDER/scoring_"${SUBJECT_ID}"_${npv} - - mkdir -p $validation_folder - - mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ - - # scil_recognize_multi_bundles.py \ - # $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ - # $SCORING_DATA/config/config_ind.json \ - # $SCORING_DATA/atlas/*/ \ - # $SCORING_DATA/output0GenericAffine.mat \ - # --out $validation_folder/voting_results \ - # -f --log_level DEBUG --multi_parameters 27 \ - # --minimal_vote 0.4 --tractogram_clustering 8 10 12 \ - # --processes 4 --seeds 0 - -done - -SUBJECT_ID=sub-1006 -EXPERIMENT=PFT_ISMRM2015Exp2 -npv=20 - -ID=2023-02-24-21_08_25 - -for seed in "${seeds[@]}"; -do - - OUT=tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk - - BASE=${DATASET_FOLDER}/datasets/${SUBJECT_ID} - SCORING_DATA=${BASE}/scoring_data - DEST_FOLDER="$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$seed" - mkdir -p $DEST_FOLDER - - scil_compute_pft.py \ - $BASE/fodf/${SUBJECT_ID}__fodf.nii.gz \ - $BASE/maps/${SUBJECT_ID}__interface.nii.gz \ - $BASE/maps/${SUBJECT_ID}__map_include.nii.gz \ - $BASE/maps/${SUBJECT_ID}__map_exclude.nii.gz \ - $DEST_FOLDER/$OUT \ - --npv $npv --min_length $MIN \ - --max_length $MAX --step $STEP \ - --seed $seed -f -v - - validation_folder=$DEST_FOLDER/scoring_"${SUBJECT_ID}"_${npv} - - mkdir -p $validation_folder - - mv $DEST_FOLDER/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk $validation_folder/ - - # scil_recognize_multi_bundles.py \ - # $validation_folder/tractogram_"${EXPERIMENT}"_"${ID}"_"${SUBJECT_ID}".trk \ - # $SCORING_DATA/config/config_ind.json \ - # $SCORING_DATA/atlas/*/ \ - # $SCORING_DATA/output0GenericAffine.mat \ - # --out $validation_folder/voting_results \ - # -f --log_level DEBUG --multi_parameters 27 \ - # --minimal_vote 0.4 --tractogram_clustering 8 10 12 \ - # --processes 4 --seeds 0 - -done - diff --git a/scripts/track_tractoinferno_exp1.sh 
b/scripts/track_tractoinferno_exp1.sh deleted file mode 100755 index d70bbb4..0000000 --- a/scripts/track_tractoinferno_exp1.sh +++ /dev/null @@ -1,11 +0,0 @@ -#!/bin/bash - -./scripts/rl_test_tractoinferno.sh VPG_ISMRM2015TrainExp1 2023-02-04-15_12_33 -./scripts/rl_test_tractoinferno.sh A2C_ISMRM2015TrainExp1 2023-02-04-17_52_22 -./scripts/rl_test_tractoinferno.sh ACKTR_ISMRM2015TrainExp1 2023-02-05-11_23_31 -./scripts/rl_test_tractoinferno.sh TRPO_ISMRM2015TrainExp1 2023-02-14-11_46_30 -./scripts/rl_test_tractoinferno.sh PPO_ISMRM2015TrainExp1 2023-02-21-10_20_31 -./scripts/rl_test_tractoinferno.sh DDPG_ISMRM2015TrainExp1 2023-02-06-08_18_44 -./scripts/rl_test_tractoinferno.sh TD3_ISMRM2015TrainExp1 2023-02-05-17_02_16 -./scripts/rl_test_tractoinferno.sh SAC_ISMRM2015TrainExp1 2023-02-06-08_19_31 -./scripts/rl_test_tractoinferno.sh SAC_Auto_ISMRM2015TrainExp1 2023-02-07-11_16_14 diff --git a/scripts/track_tractoinferno_exp2.sh b/scripts/track_tractoinferno_exp2.sh deleted file mode 100755 index 6c270fa..0000000 --- a/scripts/track_tractoinferno_exp2.sh +++ /dev/null @@ -1,11 +0,0 @@ -#!/bin/bash - -./scripts/rl_test_tractoinferno_exp2.sh VPG_ISMRM2015TrainExp2 2023-02-14-11_19_01 -./scripts/rl_test_tractoinferno_exp2.sh A2C_ISMRM2015TrainExp2 2023-02-14-11_19_01 -./scripts/rl_test_tractoinferno_exp2.sh ACKTR_ISMRM2015TrainExp2 2023-02-21-09_58_36 -./scripts/rl_test_tractoinferno_exp2.sh TRPO_ISMRM2015TrainExp2 2023-02-14-13_11_18 -./scripts/rl_test_tractoinferno_exp2.sh PPO_ISMRM2015TrainExp2 2023-03-06-12_22_46 -./scripts/rl_test_tractoinferno_exp2.sh DDPG_ISMRM2015TrainExp2 2023-02-15-04_15_36 -./scripts/rl_test_tractoinferno_exp2.sh TD3_ISMRM2015TrainExp2 2023-02-15-12_59_38 -./scripts/rl_test_tractoinferno_exp2.sh SAC_ISMRM2015TrainExp2 2023-02-16-10_04_08 -./scripts/rl_test_tractoinferno_exp2.sh SAC_Auto_ISMRM2015TrainExp2 2023-02-21-17_27_47 diff --git a/scripts/train_exp2_fibercup.sh b/scripts/train_exp2_fibercup.sh deleted file mode 100755 index 
c57f917..0000000 --- a/scripts/train_exp2_fibercup.sh +++ /dev/null @@ -1,9 +0,0 @@ -./scripts/vpg_train_exp2_fibercup.sh -./scripts/a2c_train_exp2_fibercup.sh -./scripts/acktr_train_exp2_fibercup.sh -./scripts/trpo_train_exp2_fibercup.sh -./scripts/ppo_train_exp2_fibercup.sh -./scripts/ddpg_train_exp2_fibercup.sh -./scripts/td3_train_exp2_fibercup.sh -./scripts/sac_train_exp2_fibercup.sh -./scripts/sac_auto_train_exp2_fibercup.sh diff --git a/scripts/train_exp2_ismrm2015.sh b/scripts/train_exp2_ismrm2015.sh deleted file mode 100755 index 1a482bb..0000000 --- a/scripts/train_exp2_ismrm2015.sh +++ /dev/null @@ -1,9 +0,0 @@ -./scripts/vpg_train_exp2_ismrm2015.sh -./scripts/a2c_train_exp2_ismrm2015.sh -./scripts/acktr_train_exp2_ismrm2015.sh -./scripts/trpo_train_exp2_ismrm2015.sh -./scripts/ppo_train_exp2_ismrm2015.sh -./scripts/ddpg_train_exp2_ismrm2015.sh -./scripts/td3_train_exp2_ismrm2015.sh -./scripts/sac_train_exp2_ismrm2015.sh -./scripts/sac_auto_train_exp2_ismrm2015.sh diff --git a/scripts/train_exp3_fibercup.sh b/scripts/train_exp3_fibercup.sh deleted file mode 100755 index 943d6df..0000000 --- a/scripts/train_exp3_fibercup.sh +++ /dev/null @@ -1,9 +0,0 @@ -./scripts/vpg_train_exp3_fibercup.sh -./scripts/a2c_train_exp3_fibercup.sh -./scripts/acktr_train_exp3_fibercup.sh -./scripts/trpo_train_exp3_fibercup.sh -./scripts/ppo_train_exp3_fibercup.sh -./scripts/ddpg_train_exp3_fibercup.sh -./scripts/td3_train_exp3_fibercup.sh -./scripts/sac_train_exp3_fibercup.sh -./scripts/sac_auto_train_exp3_fibercup.sh diff --git a/scripts/train_exp3_ismrm2015.sh b/scripts/train_exp3_ismrm2015.sh deleted file mode 100755 index a6c8b67..0000000 --- a/scripts/train_exp3_ismrm2015.sh +++ /dev/null @@ -1,9 +0,0 @@ -# ./scripts/vpg_train_exp3_ismrm2015.sh -# ./scripts/a2c_train_exp3_ismrm2015.sh -./scripts/acktr_train_exp3_ismrm2015.sh -# ./scripts/trpo_train_exp3_ismrm2015.sh -# ./scripts/ppo_train_exp3_ismrm2015.sh -# ./scripts/ddpg_train_exp3_ismrm2015.sh -# 
./scripts/td3_train_exp3_ismrm2015.sh -# ./scripts/sac_train_exp3_ismrm2015.sh -# ./scripts/sac_auto_train_exp3_ismrm2015.sh diff --git a/scripts/train_exp5_ismrm2015.sh b/scripts/train_exp5_ismrm2015.sh deleted file mode 100644 index 92a4467..0000000 --- a/scripts/train_exp5_ismrm2015.sh +++ /dev/null @@ -1,11 +0,0 @@ -#!/usr/bin/bash - -./scripts/sac_auto_train_exp5_target_1_ismrm2015.sh -./scripts/sac_auto_train_exp5_target_10_ismrm2015.sh -./scripts/sac_auto_train_exp5_target_100_ismrm2015.sh - -./scripts/sac_auto_train_exp5_length_0.01_ismrm2015.sh -./scripts/sac_auto_train_exp5_length_0.1_ismrm2015.sh -./scripts/sac_auto_train_exp5_length_0.5_ismrm2015.sh -./scripts/sac_auto_train_exp5_length_1_ismrm2015.sh -./scripts/sac_auto_train_exp5_length_5_ismrm2015.sh diff --git a/scripts/train_exp5_target_fibercup.sh b/scripts/train_exp5_target_fibercup.sh deleted file mode 100644 index e4718b7..0000000 --- a/scripts/train_exp5_target_fibercup.sh +++ /dev/null @@ -1,5 +0,0 @@ -#!/usr/bin/bash - -./scripts/sac_auto_train_exp5_target_1_fibercup.sh -./scripts/sac_auto_train_exp5_target_10_fibercup.sh -./scripts/sac_auto_train_exp5_target_100_fibercup.sh diff --git a/scripts/trpo_train_exp1_fibercup.sh b/scripts/trpo_train_exp1_fibercup.sh deleted file mode 100755 index 1bb23b5..0000000 --- a/scripts/trpo_train_exp1_fibercup.sh +++ /dev/null @@ -1,84 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.001 # Learning rate -gamma=0.75 # Gamma for reward discounting -action_std=0.0 -delta=0.001 -lmbda=0.95 -K_epochs=5 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=TRPO_FiberCupTrainExp1 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/trpo_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --delta=${delta} \ - --lmbda=${lmbda} \ - --K_epochs=${K_epochs} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/trpo_train_exp1_ismrm2015.sh 
b/scripts/trpo_train_exp1_ismrm2015.sh deleted file mode 100755 index 12bf2f2..0000000 --- a/scripts/trpo_train_exp1_ismrm2015.sh +++ /dev/null @@ -1,84 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0005 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.0 -delta=0.01 -lmbda=0.95 -K_epochs=5 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=2 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=TRPO_ISMRM2015TrainExp1 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/trpo_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --delta=${delta} \ - --lmbda=${lmbda} \ - --K_epochs=${K_epochs} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/trpo_train_exp2_fibercup.sh b/scripts/trpo_train_exp2_fibercup.sh deleted file mode 100755 index 3412c44..0000000 --- a/scripts/trpo_train_exp2_fibercup.sh +++ /dev/null @@ -1,85 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.001 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.0 -delta=0.001 -lmbda=0.95 -K_epochs=5 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=100 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=TRPO_FiberCupTrainExp2 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/trpo_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --delta=${delta} \ - --lmbda=${lmbda} \ - --K_epochs=${K_epochs} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --interface_seeding \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git 
a/scripts/trpo_train_exp2_ismrm2015.sh b/scripts/trpo_train_exp2_ismrm2015.sh deleted file mode 100755 index 35a6677..0000000 --- a/scripts/trpo_train_exp2_ismrm2015.sh +++ /dev/null @@ -1,85 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0005 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.0 -delta=0.01 -lmbda=0.95 -K_epochs=5 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=20 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=TRPO_ISMRM2015TrainExp2 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/trpo_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --delta=${delta} \ - --lmbda=${lmbda} \ - --K_epochs=${K_epochs} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --interface_seeding \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/trpo_train_exp3_fibercup.sh b/scripts/trpo_train_exp3_fibercup.sh deleted file mode 100755 index f6f2a87..0000000 --- a/scripts/trpo_train_exp3_fibercup.sh +++ /dev/null @@ -1,85 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0005 # Learning rate -gamma=0.75 # Gamma for reward discounting -action_std=0.0 -delta=0.001 -lmbda=0.95 -K_epochs=5 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=TRPO_FiberCupTrainExp3 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/trpo_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --delta=${delta} \ - --lmbda=${lmbda} \ - --K_epochs=${K_epochs} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --no_retrack \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git 
a/scripts/trpo_train_exp3_ismrm2015.sh b/scripts/trpo_train_exp3_ismrm2015.sh deleted file mode 100755 index 303e7f3..0000000 --- a/scripts/trpo_train_exp3_ismrm2015.sh +++ /dev/null @@ -1,85 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0001 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.0 -delta=0.01 -lmbda=0.95 -K_epochs=5 -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=2 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=TRPO_ISMRM2015TrainExp3 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/trpo_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --delta=${delta} \ - --lmbda=${lmbda} \ - --K_epochs=${K_epochs} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --no_retrack \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/vpg_train_exp1_fibercup.sh b/scripts/vpg_train_exp1_fibercup.sh deleted file mode 100755 index ec3b605..0000000 --- a/scripts/vpg_train_exp1_fibercup.sh +++ /dev/null @@ -1,79 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=5.0e-4 # Learning rate -gamma=0.75 # Gamma for reward discounting -action_std=0.0 - -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=10 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=VPG_FiberCupTrainExp1 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/vpg_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/vpg_train_exp1_ismrm2015.sh b/scripts/vpg_train_exp1_ismrm2015.sh deleted file mode 100755 index 4112ffc..0000000 --- 
a/scripts/vpg_train_exp1_ismrm2015.sh +++ /dev/null @@ -1,79 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0001 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.0 - -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic - -# Env parameters -npv=2 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=VPG_ISMRM2015TrainExp1 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/vpg_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/vpg_train_exp2_fibercup.sh b/scripts/vpg_train_exp2_fibercup.sh deleted file mode 100755 index 1100b6d..0000000 --- a/scripts/vpg_train_exp2_fibercup.sh +++ /dev/null @@ -1,80 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=fibercup -SUBJECT_ID=fibercup -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." 
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0005 # Learning rate -gamma=0.75 # Gamma for reward discounting -action_std=0.0 - -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 0 for deterministic - -# Env parameters -npv=100 # Seed per voxel -theta=30 # Maximum angle for streamline curvature - -EXPERIMENT=VPG_FiberCupTrainExp2 - -ID=$(date +"%F-%H_%M_%S") - -seeds=(1111 2222 3333 4444 5555) - -for rng_seed in "${seeds[@]}" -do - - DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed" - - python TrackToLearn/trainers/vpg_train.py \ - $DEST_FOLDER \ - "$EXPERIMENT" \ - "$ID" \ - "${dataset_file}" \ - "${SUBJECT_ID}" \ - "${validation_dataset_file}" \ - "${VALIDATION_SUBJECT_ID}" \ - "${reference_file}" \ - "${SCORING_DATA}" \ - --max_ep=${max_ep} \ - --log_interval=${log_interval} \ - --lr=${lr} \ - --gamma=${gamma} \ - --action_std=${action_std} \ - --entropy_loss_coeff=${entropy_loss_coeff} \ - --rng_seed=${rng_seed} \ - --npv=${npv} \ - --theta=${theta} \ - --interface_seeding \ - --use_gpu \ - --use_comet \ - --run_tractometer - - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID" - mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/ - cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/ - -done diff --git a/scripts/vpg_train_exp2_ismrm2015.sh b/scripts/vpg_train_exp2_ismrm2015.sh deleted file mode 100755 index a4af275..0000000 
--- a/scripts/vpg_train_exp2_ismrm2015.sh +++ /dev/null @@ -1,80 +0,0 @@ -#!/bin/bash - -set -e # exit if any command fails - -DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/ -WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/ - -VALIDATION_SUBJECT_ID=ismrm2015 -SUBJECT_ID=ismrm2015 -EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments -WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments -SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data - -mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID} - -echo "Transfering data to working folder..." -cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ -cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/ - -dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5 -validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5 -reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz - -# RL params -max_ep=1000 # Chosen empirically -log_interval=50 # Log at n episodes -lr=0.0005 # Learning rate -gamma=0.5 # Gamma for reward discounting -action_std=0.0 - -entropy_loss_coeff=0.001 - -# Model params -prob=0.0 # Noise to add to make a prob output. 
0 for deterministic
-
-# Env parameters
-npv=20 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-
-EXPERIMENT=VPG_ISMRM2015TrainExp2
-
-ID=$(date +"%F-%H_%M_%S")
-
-seeds=(1111 2222 3333 4444 5555)
-
-for rng_seed in "${seeds[@]}"
-do
-
-  DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-  python TrackToLearn/trainers/vpg_train.py \
-    $DEST_FOLDER \
-    "$EXPERIMENT" \
-    "$ID" \
-    "${dataset_file}" \
-    "${SUBJECT_ID}" \
-    "${validation_dataset_file}" \
-    "${VALIDATION_SUBJECT_ID}" \
-    "${reference_file}" \
-    "${SCORING_DATA}" \
-    --max_ep=${max_ep} \
-    --log_interval=${log_interval} \
-    --lr=${lr} \
-    --gamma=${gamma} \
-    --action_std=${action_std} \
-    --entropy_loss_coeff=${entropy_loss_coeff} \
-    --rng_seed=${rng_seed} \
-    --npv=${npv} \
-    --theta=${theta} \
-    --interface_seeding \
-    --use_comet \
-    --use_gpu \
-    --run_tractometer
-
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/
-  cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/
-
-done
diff --git a/scripts/vpg_train_exp3_fibercup.sh b/scripts/vpg_train_exp3_fibercup.sh
deleted file mode 100755
index 5143fce..0000000
--- a/scripts/vpg_train_exp3_fibercup.sh
+++ /dev/null
@@ -1,80 +0,0 @@
-#!/bin/bash
-
-set -e # exit if any command fails
-
-DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/
-WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/
-
-VALIDATION_SUBJECT_ID=fibercup
-SUBJECT_ID=fibercup
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}
-
-echo "Transfering data to working folder..."
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n episodes
-lr=0.001 # Learning rate
-gamma=0.9 # Gamma for reward discounting
-action_std=0.0
-
-entropy_loss_coeff=0.001
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=10 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-
-EXPERIMENT=VPG_FiberCupTrainExp3
-
-ID=$(date +"%F-%H_%M_%S")
-
-seeds=(1111 2222 3333 4444 5555)
-
-for rng_seed in "${seeds[@]}"
-do
-
-  DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-  python TrackToLearn/trainers/vpg_train.py \
-    $DEST_FOLDER \
-    "$EXPERIMENT" \
-    "$ID" \
-    "${dataset_file}" \
-    "${SUBJECT_ID}" \
-    "${validation_dataset_file}" \
-    "${VALIDATION_SUBJECT_ID}" \
-    "${reference_file}" \
-    "${SCORING_DATA}" \
-    --max_ep=${max_ep} \
-    --log_interval=${log_interval} \
-    --lr=${lr} \
-    --gamma=${gamma} \
-    --action_std=${action_std} \
-    --entropy_loss_coeff=${entropy_loss_coeff} \
-    --rng_seed=${rng_seed} \
-    --npv=${npv} \
-    --theta=${theta} \
-    --no_retrack \
-    --use_gpu \
-    --use_comet \
-    --run_tractometer
-
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/
-  cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/
-
-done
diff --git a/scripts/vpg_train_exp3_ismrm2015.sh b/scripts/vpg_train_exp3_ismrm2015.sh
deleted file mode 100755
index b039e5e..0000000
--- a/scripts/vpg_train_exp3_ismrm2015.sh
+++ /dev/null
@@ -1,80 +0,0 @@
-#!/bin/bash
-
-set -e # exit if any command fails
-
-DATASET_FOLDER=${TRACK_TO_LEARN_DATA}/
-WORK_DATASET_FOLDER=${LOCAL_TRACK_TO_LEARN_DATA}/
-
-VALIDATION_SUBJECT_ID=ismrm2015
-SUBJECT_ID=ismrm2015
-EXPERIMENTS_FOLDER=${DATASET_FOLDER}/experiments
-WORK_EXPERIMENTS_FOLDER=${WORK_DATASET_FOLDER}/experiments
-SCORING_DATA=${DATASET_FOLDER}/datasets/${VALIDATION_SUBJECT_ID}/scoring_data
-
-mkdir -p $WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}
-
-echo "Transfering data to working folder..."
-cp -rnv "${DATASET_FOLDER}"/datasets/${VALIDATION_SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-cp -rnv "${DATASET_FOLDER}"/datasets/${SUBJECT_ID} "${WORK_DATASET_FOLDER}"/datasets/
-
-dataset_file=$WORK_DATASET_FOLDER/datasets/${SUBJECT_ID}/${SUBJECT_ID}.hdf5
-validation_dataset_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/${VALIDATION_SUBJECT_ID}.hdf5
-reference_file=$WORK_DATASET_FOLDER/datasets/${VALIDATION_SUBJECT_ID}/masks/${VALIDATION_SUBJECT_ID}_wm.nii.gz
-
-# RL params
-max_ep=1000 # Chosen empirically
-log_interval=50 # Log at n episodes
-lr=0.0005 # Learning rate
-gamma=0.85 # Gamma for reward discounting
-action_std=0.0
-
-entropy_loss_coeff=0.001
-
-# Model params
-prob=0.0 # Noise to add to make a prob output. 0 for deterministic
-
-# Env parameters
-npv=2 # Seed per voxel
-theta=30 # Maximum angle for streamline curvature
-
-EXPERIMENT=VPG_ISMRM2015TrainExp3
-
-ID=$(date +"%F-%H_%M_%S")
-
-seeds=(1111 2222 3333 4444 5555)
-
-for rng_seed in "${seeds[@]}"
-do
-
-  DEST_FOLDER="$WORK_EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/"$rng_seed"
-
-  python TrackToLearn/trainers/vpg_train.py \
-    $DEST_FOLDER \
-    "$EXPERIMENT" \
-    "$ID" \
-    "${dataset_file}" \
-    "${SUBJECT_ID}" \
-    "${validation_dataset_file}" \
-    "${VALIDATION_SUBJECT_ID}" \
-    "${reference_file}" \
-    "${SCORING_DATA}" \
-    --max_ep=${max_ep} \
-    --log_interval=${log_interval} \
-    --lr=${lr} \
-    --gamma=${gamma} \
-    --action_std=${action_std} \
-    --entropy_loss_coeff=${entropy_loss_coeff} \
-    --rng_seed=${rng_seed} \
-    --npv=${npv} \
-    --theta=${theta} \
-    --no_retrack \
-    --use_gpu \
-    --use_comet \
-    --run_tractometer
-
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"
-  mkdir -p $EXPERIMENTS_FOLDER/"$EXPERIMENT"/"$ID"/
-  cp -f -r $DEST_FOLDER "$EXPERIMENTS_FOLDER"/"$EXPERIMENT"/"$ID"/
-
-done
diff --git a/setup.py b/setup.py
index 699300c..7dcc9b1 100644
--- a/setup.py
+++ b/setup.py
@@ -1,18 +1,15 @@
+import os
+from setuptools import setup
-from setuptools import setup, find_packages
-
-# To use a consistent encoding
-from os import path
-
-here = path.abspath(path.dirname(__file__))
-
-# # Get the long description from the relevant file
-# with open(path.join(here, 'README.md'), encoding='utf-8') as f:
-#     long_description = f.read()
-
-external_dependencies = []
+here = os.path.abspath(os.path.dirname(__file__))
+with open('requirements.txt') as f:
+    required_dependencies = f.read().splitlines()
+    external_dependencies = []
+    torch_added = False
+    for dependency in required_dependencies:
+        external_dependencies.append(dependency)
 
 setup(
     name='Track-to-Learn',
@@ -20,13 +17,13 @@
     # Versions should comply with PEP440. For a discussion on single-sourcing
     # the version across setup.py and the project code, see
     # https://packaging.python.org/en/latest/single_source_version.html
-    version='0.1',
+    version='1.0',
 
     description='Deep reinforcement learning for tractography',
     long_description="",
 
     # The project's main homepage.
-    url='https://github.com/scil-vital/TractoRL',
+    url='https://github.com/scil-vital/TrackToLearn',
 
     # Author details
     author='Antoine Théberge',
@@ -55,7 +52,7 @@
     # You can just specify the packages manually here if your project is
     # simple. Or you can use find_packages().
-    packages=find_packages(),
+    packages=['TrackToLearn'],
 
@@ -68,7 +65,6 @@
     # for example:
     # $ pip install -e .[dev,test]
     extras_require={
-        'dev': ["Cython", "numpy", "nibabel", "hdf5"],
     },
 
@@ -76,20 +72,15 @@
     # have to be included in MANIFEST.in as well.
     package_data={
     },
-
+    setup_requires=['packaging', 'numpy'],
     # Although 'package_data' is the preferred approach, in some case you may
     # need to place data files outside of your packages. See
     # http://docs.python.org/3.4/distutils/setupscript.html#installing-additional-files
     # In this case, 'data_file' will be installed into '/my_data'
     data_files=[
-        'example_models/SAC_Auto_ISMRM2015_WM/',
-        'example_models/SAC_Auto_ISMRM2015_interface/',
-        'example_models/SAC_Auto_ISMRM2015_WM/hyperparameters.json',
-        'example_models/SAC_Auto_ISMRM2015_interface/hyperparameters.json',
-        'example_models/SAC_Auto_ISMRM2015_WM/last_model_state_actor.pth',
-        'example_models/SAC_Auto_ISMRM2015_WM/last_model_state_critic.pth',
-        'example_models/SAC_Auto_ISMRM2015_interface/last_model_state_actor.pth',
-        'example_models/SAC_Auto_ISMRM2015_interface/last_model_state_critic.pth',
+        'models/last_model_state_critic.pth',
+        'models/last_model_state_actor.pth',
+        'models/hyperparameters.json',
     ],
 
@@ -98,7 +89,7 @@
     entry_points={
         'console_scripts': [
             "ttl_track.py=TrackToLearn.runners.ttl_track:main",
-            "ttl_validation.py=TrackToLearn.runners.ttl_validation:main"]
+            "ttl_track_from_hdf5.py=TrackToLearn.runners.ttl_track_from_hdf5:main"]  # noqa E501
     },
 
     include_package_data=True,
 )
diff --git a/tests/placeholder_test.py b/tests/placeholder_test.py
deleted file mode 100644
index 77b4e96..0000000
--- a/tests/placeholder_test.py
+++ /dev/null
@@ -1,7 +0,0 @@
-""" TODO: Add tests in other files
-This is just here so that the CI won't fail because there's
-no tests
-"""
-
-def test_placeholder():
-    pass
diff --git a/tests/test_runners.py b/tests/test_runners.py
new file mode 100644
index 0000000..89cdc21
--- /dev/null
+++ b/tests/test_runners.py
@@ -0,0 +1,22 @@
+def test_ttl_track(script_runner):
+    # Call 'ttl_track.py' from the command line and assert that it
+    # runs without errors
+
+    ret = script_runner.run('ttl_track.py', '--help')
+    assert ret.success
+
+def test_ttl_track_from_hdf5(script_runner):
+    # Call 'ttl_track_from_hdf5.py' from the command line and assert that it
+    # runs without errors
+
+    ret = script_runner.run('ttl_track_from_hdf5.py', '--help')
+    assert ret.success
+
+
+def test_sac_auto_train(script_runner):
+    # Call 'sac_auto_train.py' from the command line and assert that it
+    # runs without errors
+
+    ret = script_runner.run('TrackToLearn/trainers/sac_auto_train.py',
+                            '--help')
+    assert ret.success