diff --git a/.github/workflows/md_link_check.yaml b/.github/workflows/md_link_check.yaml
new file mode 100644
index 00000000..ac06556b
--- /dev/null
+++ b/.github/workflows/md_link_check.yaml
@@ -0,0 +1,18 @@
+name: Check markdown links
+
+on:
+  push:
+    branches: [main]
+  pull_request:
+  workflow_dispatch:
+
+jobs:
+  markdown-link-check:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@main
+      - uses: gaurav-nelson/github-action-markdown-link-check@v1
+        with:
+          use-quiet-mode: 'yes'
+          use-verbose-mode: 'yes'
+          config-file: 'md_link_check_config.json'
\ No newline at end of file
diff --git a/docs/nipoppy/configs.md b/docs/nipoppy/configs.md
index aefe933d..b60df08e 100644
--- a/docs/nipoppy/configs.md
+++ b/docs/nipoppy/configs.md
@@ -9,7 +9,7 @@ Nipoppy requires two global files for specifying local data/container paths and
 ### Global configs: `global_configs.json`
 - This is a dataset-specific file and needs to be modified based on local configs and paths
 - This file is used as an input to all workflow runscripts to read, process and track available data
-    - Copy, rename, and populate [sample_global_configs.json](https://github.com/neurodatascience/nipoppy/blob/main/sample_global_configs.json)
+    - Copy, rename, and populate [sample_global_configs.json](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/sample_global_configs.json)
 - This file contains:
     - Name of the Nipoppy dataset (`DATASET_NAME`, e.g., `PPMI`)
     - Path to the Nipoppy dataset (`DATASET_ROOT`)
diff --git a/docs/nipoppy/workflow/bids_conv.md b/docs/nipoppy/workflow/bids_conv.md
index 03280565..781c6587 100644
--- a/docs/nipoppy/workflow/bids_conv.md
+++ b/docs/nipoppy/workflow/bids_conv.md
@@ -17,7 +17,7 @@ Convert DICOMs to BIDS using [Heudiconv](https://heudiconv.readthedocs.io/en/lat
 ### Procedure
 1. Ensure you have the appropriate HeuDiConv container listed in your `global_configs.json`
-2. Use [run_bids_conv.py](https://github.com/neurodatascience/nipoppy/blob/main/workflow/bids_conv/run_bids_conv.py) to run HeuDiConv `stage_1` and `stage_2`.
+2. Use [run_bids_conv.py](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/bids_conv/run_bids_conv.py) to run HeuDiConv `stage_1` and `stage_2`.
     - Run `stage_1` to generate a list of available protocols from the DICOM header. These protocols are listed in `/bids/.heudiconv//info/dicominfo_ses-.tsv`

 > Sample cmd:
@@ -32,7 +32,7 @@ python run_bids_conv.py \

 If participants have multiple sessions (or visits), these need to be converted separately and combined post-hoc to avoid Heudiconv errors.

-3. Copy+Rename [sample_heuristic.py](https://github.com/neurodatascience/nipoppy/blob/main/workflow/bids_conv/sample_heuristic.py) to `heuristic.py` in the code repo itself. Then edit `./heuristic.py` to create a name-mapping (i.e. dictionary) for BIDS organization based on the list of available protocols.
+3. Copy+Rename [sample_heuristic.py](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/bids_conv/sample_heuristic.py) to `heuristic.py` in the code repo itself. Then edit `./heuristic.py` to create a name-mapping (i.e. dictionary) for BIDS organization based on the list of available protocols.

 !!! note
diff --git a/docs/nipoppy/workflow/dicom_org.md b/docs/nipoppy/workflow/dicom_org.md
index 80b86dcb..f2fa0d48 100644
--- a/docs/nipoppy/workflow/dicom_org.md
+++ b/docs/nipoppy/workflow/dicom_org.md
@@ -15,7 +15,7 @@ This is a dataset specific process and needs to be customized based on local sca
 ### Procedure

-1. Run [`workflow/dicom_org/check_dicom_status.py`](https://github.com/neurodatascience/nipoppy/blob/main/workflow/dicom_org/check_dicom_status.py) to update `doughnut.csv` based on the manifest. It will add new rows for any subject-session pair not already in the file.
+1. Run [`workflow/dicom_org/check_dicom_status.py`](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/dicom_org/check_dicom_status.py) to update `doughnut.csv` based on the manifest. It will add new rows for any subject-session pair not already in the file.
     - To create the `doughnut.csv` for the first time, use the `--empty` argument. If processing has been done without updating `doughnut.csv`, use `--regenerate` to update it based on new files in the dataset.

 !!! note
@@ -45,7 +45,7 @@ This is a dataset specific process and needs to be customized based on local sca
     It is **okay** for the participant directory to have messy internal subdir tree with DICOMs from multiple modalities. (See [data org schematic](data_org.md) for details). The run script will search and validate all available DICOM files automatically.

-4. Run [`run_dicom_org.py`](https://github.com/neurodatascience/nipoppy/blob/main/workflow/dicom_org/run_dicom_org.py) to:
+4. Run [`run_dicom_org.py`](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/dicom_org/run_dicom_org.py) to:
     - Search: Find all the DICOMs inside the participant directory.
     - Validate: Excludes certain individual dicom files that are invalid or contain scanner-derived data not compatible with BIDS conversion.
     - Symlink (default) or copy: Creates symlinks from `raw_dicoms/` to the `/dicom`, where all participant specific dicoms are in a flat list. The symlinks are relative so that they are preserved in containers.
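Reviewer note on the dicom_org hunk above: the "relative symlink" behavior it documents can be sketched in a few lines. This is only an illustration of the idea — the paths and participant ID are hypothetical, and this is not nipoppy's actual implementation — but it shows why a relative link target survives a container bind-mount while an absolute one would not:

```python
import os
import tempfile

# Hypothetical layout: link <root>/dicom/PD0001/scan.dcm back to the
# file under raw_dicoms/ using a *relative* target, so the link still
# resolves if <root> is mounted at a different path inside a container.
root = tempfile.mkdtemp()
src = os.path.join(root, "raw_dicoms", "PD0001", "scan.dcm")
dst = os.path.join(root, "dicom", "PD0001", "scan.dcm")
os.makedirs(os.path.dirname(src))
os.makedirs(os.path.dirname(dst))
open(src, "w").close()

# Compute the target relative to the directory containing the link
rel_target = os.path.relpath(src, start=os.path.dirname(dst))
os.symlink(rel_target, dst)

print(os.readlink(dst))     # ../../raw_dicoms/PD0001/scan.dcm
print(os.path.isfile(dst))  # True: the link resolves
```

Because `rel_target` contains no absolute prefix, the link's meaning depends only on the directory tree around it, which is exactly what is preserved when the dataset root is bind-mounted elsewhere.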
diff --git a/docs/nipoppy/workflow/proc_pipe/fmriprep.md b/docs/nipoppy/workflow/proc_pipe/fmriprep.md
index 811633e5..68b2697b 100644
--- a/docs/nipoppy/workflow/proc_pipe/fmriprep.md
+++ b/docs/nipoppy/workflow/proc_pipe/fmriprep.md
@@ -16,9 +16,9 @@ Run [fMRIPrep](https://fmriprep.org/en/stable/) pipeline on BIDS formatted datas
 ### Procedure

 - Ensure you have the appropriate fMRIPrep container listed in your `global_configs.json`
-- Use [run_fmriprep.py](https://github.com/neurodatascience/nipoppy/blob/main/workflow/proc_pipe/fmriprep/run_fmriprep.py) script to run fmriprep pipeline.
+- Use [run_fmriprep.py](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/proc_pipe/fmriprep/run_fmriprep.py) script to run fmriprep pipeline.
     - You can run "anatomical only" workflow by adding `--anat_only` flag
-- (Optional) Copy+Rename [sample_bids_filter.json](https://github.com/neurodatascience/nipoppy/blob/main/workflow/proc_pipe/fmriprep/sample_bids_filter.json) to `bids_filter.json` in the code repo itself. Then edit `bids_filter.json` to filter certain modalities / acquisitions. This is common when you have multiple T1w acquisitions (e.g. Neuromelanin, SPIR etc.) for a given modality.
+- (Optional) Copy+Rename [sample_bids_filter.json](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/proc_pipe/fmriprep/sample_bids_filter.json) to `bids_filter.json` in the code repo itself. Then edit `bids_filter.json` to filter certain modalities / acquisitions. This is common when you have multiple T1w acquisitions (e.g. Neuromelanin, SPIR etc.) for a given modality.

 !!! note
diff --git a/docs/nipoppy/workflow/proc_pipe/mriqc.md b/docs/nipoppy/workflow/proc_pipe/mriqc.md
index 4b308681..1ed6c1fa 100644
--- a/docs/nipoppy/workflow/proc_pipe/mriqc.md
+++ b/docs/nipoppy/workflow/proc_pipe/mriqc.md
@@ -8,7 +8,7 @@ MRIQC processes the participants and produces image quality metrics from T1w, T2

 ### [MRIQC](https://mriqc.readthedocs.io/en/latest/)

-- Use [run_mriqc.py](https://github.com/neurodatascience/nipoppy/tree/main/workflow/proc_pipe/mriqc) to run MRIQC pipeline directly or wrap the script in an SGE/Slurm script to run on cluster
+- Use [run_mriqc.py](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/proc_pipe/mriqc/run_mriqc.py) to run MRIQC pipeline directly or wrap the script in an SGE/Slurm script to run on cluster

 ```bash
 python run_mriqc.py --global_config CONFIG.JSON --subject_id 001 --output_dir OUTPUT_DIR_PATH
@@ -20,7 +20,7 @@ python run_mriqc.py --global_config CONFIG.JSON --subject_id 001 --output_dir OU
 - Mandatory: Pass in the absolute path to the output directory to `output_dir`

 !!! note
-    An example config is located [here](https://github.com/neurodatascience/nipoppy/blob/main/sample_global_configs.json)
+    An example config is located [here](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/sample_global_configs.json)

 > Sample cmd:
 ```bash
 python run_mriqc.py \
@@ -35,7 +35,7 @@ python run_mriqc.py \

 A run for a participant is considered successful when the participant's log file reads `Participant level finished successfully`

 ### Evaluate MRIQC Results
-- Use [mriqc_tracker.py](https://github.com/neurodatascience/nipoppy/blob/main/trackers/mriqc_tracker.py) to determine how many subjects successfully passed through the MRIQC pipeline
+- Use [mriqc_tracker.py](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/trackers/mriqc_tracker.py) to determine how many subjects successfully passed through the MRIQC pipeline
     - Mandatory: Pass in the subject directory as an argument
     - After a successful run of the script, a dictionary called tracker_configs is returned contained whether the subject passed through the pipeline successfully
diff --git a/md_link_check_config.json b/md_link_check_config.json
new file mode 100644
index 00000000..b1eac8d6
--- /dev/null
+++ b/md_link_check_config.json
@@ -0,0 +1,11 @@
+{
+    "ignorePatterns": [
+        {"pattern": "http://neurobagel.org/vocab/*"},
+        {"pattern": "http://neurobagel.org/graph/"},
+        {"pattern": "https://www.cognitiveatlas.org/task/id/"},
+        {"pattern": "^../"},
+        {"pattern": "localhost*"},
+        {"pattern": "https://api.neurobagel.org/*"}
+    ],
+    "timeout": "60s"
+}
\ No newline at end of file
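Closing reviewer note on the new `md_link_check_config.json`: markdown-link-check matches each `ignorePatterns` entry against discovered links as a regular expression, so for example `http://neurobagel.org/vocab/*` skips everything under the vocab namespace while ordinary GitHub links are still checked. A rough sanity check of the patterns, using Python's `re` module as a stand-in for the JavaScript regex engine the action actually uses:

```python
import re

# Patterns copied from md_link_check_config.json. markdown-link-check
# treats each "pattern" as a regular expression and skips any link
# that matches; Python's re is used here purely for illustration.
ignore_patterns = [
    r"http://neurobagel.org/vocab/*",
    r"http://neurobagel.org/graph/",
    r"https://www.cognitiveatlas.org/task/id/",
    r"^../",
    r"localhost*",
    r"https://api.neurobagel.org/*",
]

def is_ignored(link: str) -> bool:
    """Return True if any ignore pattern matches somewhere in the link."""
    return any(re.search(p, link) for p in ignore_patterns)

print(is_ignored("http://neurobagel.org/vocab/Age"))              # True (skipped)
print(is_ignored("localhost:8888/query"))                         # True (skipped)
print(is_ignored("https://github.com/neurodatascience/nipoppy"))  # False (checked)
```

Note that in these patterns `*` is a regex quantifier, not a glob wildcard, and the unescaped dots match any character; the patterns are loose but still only skip the intended Neurobagel, Cognitive Atlas, relative, and localhost links.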