
[CI] Implemented md_link_check CI #91

Merged · 9 commits · Sep 22, 2023
18 changes: 18 additions & 0 deletions .github/workflows/md_link_check.yaml
@@ -0,0 +1,18 @@
name: Check markdown links

on:
  push:
    branches: [main]
  pull_request:
  workflow_dispatch:

jobs:
  markdown-link-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@main
      - uses: gaurav-nelson/github-action-markdown-link-check@v1
        with:
          use-quiet-mode: 'yes'
          use-verbose-mode: 'yes'
          config-file: 'md_link_check_config.json'
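Because the workflow declares a `workflow_dispatch` trigger, it can also be started by hand. A sketch using the GitHub CLI (assumes `gh` is installed and authenticated for this repo):

```bash
# Manually trigger the link check on the default branch
gh workflow run md_link_check.yaml

# Follow a run until it finishes
gh run watch
```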
2 changes: 1 addition & 1 deletion docs/nipoppy/configs.md
@@ -9,7 +9,7 @@ Nipoppy requires two global files for specifying local data/container paths and
### Global configs: `global_configs.json`
- This is a dataset-specific file and needs to be modified based on local configs and paths
- This file is used as an input to all workflow runscripts to read, process and track available data
-- Copy, rename, and populate [sample_global_configs.json](https://github.com/neurodatascience/nipoppy/blob/main/sample_global_configs.json)
+- Copy, rename, and populate [sample_global_configs.json](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/sample_global_configs.json)
- This file contains:
- Name of the Nipoppy dataset (`DATASET_NAME`, e.g., `PPMI`)
- Path to the Nipoppy dataset (`DATASET_ROOT`)
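For orientation, a minimal hypothetical `global_configs.json` with just the two keys named above might look like the stub below; real files carry more entries (container paths, etc.), so start from `sample_global_configs.json` rather than this sketch:

```bash
# Illustrative stub only: the real sample config defines additional keys
cat > global_configs.json <<'EOF'
{
  "DATASET_NAME": "PPMI",
  "DATASET_ROOT": "/data/PPMI"
}
EOF
```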
4 changes: 2 additions & 2 deletions docs/nipoppy/workflow/bids_conv.md
@@ -17,7 +17,7 @@ Convert DICOMs to BIDS using [Heudiconv](https://heudiconv.readthedocs.io/en/lat
### Procedure

1. Ensure you have the appropriate HeuDiConv container listed in your `global_configs.json`
-2. Use [run_bids_conv.py](https://github.com/neurodatascience/nipoppy/blob/main/workflow/bids_conv/run_bids_conv.py) to run HeuDiConv `stage_1` and `stage_2`.
+2. Use [run_bids_conv.py](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/bids_conv/run_bids_conv.py) to run HeuDiConv `stage_1` and `stage_2`.
- Run `stage_1` to generate a list of available protocols from the DICOM header. These protocols are listed in `<DATASET_ROOT>/bids/.heudiconv/<participant_id>/info/dicominfo_ses-<session_id>.tsv`

> Sample cmd:
@@ -32,7 +32,7 @@ python run_bids_conv.py \

If participants have multiple sessions (or visits), these need to be converted separately and combined post-hoc to avoid Heudiconv errors.

-3. Copy+Rename [sample_heuristic.py](https://github.com/neurodatascience/nipoppy/blob/main/workflow/bids_conv/sample_heuristic.py) to `heuristic.py` in the code repo itself. Then edit `./heuristic.py` to create a name-mapping (i.e. dictionary) for BIDS organization based on the list of available protocols.
+3. Copy+Rename [sample_heuristic.py](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/bids_conv/sample_heuristic.py) to `heuristic.py` in the code repo itself. Then edit `./heuristic.py` to create a name-mapping (i.e. dictionary) for BIDS organization based on the list of available protocols.

!!! note

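The copy+rename in step 3 might look like this; the path assumes the post-PR layout under `nipoppy/`, so adjust it to where you cloned the repo:

```bash
# Copy the sample heuristic, then edit the protocol-to-BIDS name mapping
cp nipoppy/workflow/bids_conv/sample_heuristic.py ./heuristic.py
"${EDITOR:-nano}" ./heuristic.py
```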
4 changes: 2 additions & 2 deletions docs/nipoppy/workflow/dicom_org.md
@@ -15,7 +15,7 @@ This is a dataset specific process and needs to be customized based on local sca

### Procedure

-1. Run [`workflow/dicom_org/check_dicom_status.py`](https://github.com/neurodatascience/nipoppy/blob/main/workflow/dicom_org/check_dicom_status.py) to update `doughnut.csv` based on the manifest. It will add new rows for any subject-session pair not already in the file.
+1. Run [`workflow/dicom_org/check_dicom_status.py`](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/dicom_org/check_dicom_status.py) to update `doughnut.csv` based on the manifest. It will add new rows for any subject-session pair not already in the file.
- To create the `doughnut.csv` for the first time, use the `--empty` argument. If processing has been done without updating `doughnut.csv`, use `--regenerate` to update it based on new files in the dataset.

!!! note
@@ -45,7 +45,7 @@ This is a dataset specific process and needs to be customized based on local sca
It is **okay** for the participant directory to have a messy internal subdirectory tree with DICOMs from multiple modalities (see the [data org schematic](data_org.md) for details). The run script will search and validate all available DICOM files automatically.


-4. Run [`run_dicom_org.py`](https://github.com/neurodatascience/nipoppy/blob/main/workflow/dicom_org/run_dicom_org.py) to:
+4. Run [`run_dicom_org.py`](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/dicom_org/run_dicom_org.py) to:
- Search: Find all the DICOMs inside the participant directory.
- Validate: Exclude individual DICOM files that are invalid or contain scanner-derived data not compatible with BIDS conversion.
- Symlink (default) or copy: Create symlinks from `raw_dicoms/` to `<DATASET_ROOT>/dicom`, where all participant-specific DICOMs are kept in a flat list. The symlinks are relative so that they are preserved in containers.
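A quick sketch of step 1 using the two documented flags; the script may also require the global config or dataset path, and those arguments are omitted here because they are not shown above:

```bash
# First-time setup: create an empty doughnut.csv from the manifest
python nipoppy/workflow/dicom_org/check_dicom_status.py --empty

# Re-sync doughnut.csv after processing that happened outside the workflow
python nipoppy/workflow/dicom_org/check_dicom_status.py --regenerate
```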
4 changes: 2 additions & 2 deletions docs/nipoppy/workflow/proc_pipe/fmriprep.md
@@ -16,9 +16,9 @@ Run [fMRIPrep](https://fmriprep.org/en/stable/) pipeline on BIDS formatted datas
### Procedure

- Ensure you have the appropriate fMRIPrep container listed in your `global_configs.json`
-- Use [run_fmriprep.py](https://github.com/neurodatascience/nipoppy/blob/main/workflow/proc_pipe/fmriprep/run_fmriprep.py) script to run fmriprep pipeline.
+- Use [run_fmriprep.py](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/proc_pipe/fmriprep/run_fmriprep.py) script to run fmriprep pipeline.
- You can run the "anatomical only" workflow by adding the `--anat_only` flag
-- (Optional) Copy+Rename [sample_bids_filter.json](https://github.com/neurodatascience/nipoppy/blob/main/workflow/proc_pipe/fmriprep/sample_bids_filter.json) to `bids_filter.json` in the code repo itself. Then edit `bids_filter.json` to filter certain modalities / acquisitions. This is common when you have multiple T1w acquisitions (e.g. Neuromelanin, SPIR etc.) for a given modality.
+- (Optional) Copy+Rename [sample_bids_filter.json](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/proc_pipe/fmriprep/sample_bids_filter.json) to `bids_filter.json` in the code repo itself. Then edit `bids_filter.json` to filter certain modalities / acquisitions. This is common when you have multiple T1w acquisitions (e.g. Neuromelanin, SPIR etc.) for a given modality.

!!! note

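A hedged sketch of the two steps above: the `--anat_only` flag is documented, while the `--global_config` argument mirrors the `run_mriqc.py` call later in this PR and is an assumption:

```bash
# Optional: copy the sample BIDS filter, then edit it to select acquisitions
cp nipoppy/workflow/proc_pipe/fmriprep/sample_bids_filter.json ./bids_filter.json

# Anatomical-only run (--anat_only is documented; the other flag is illustrative)
python run_fmriprep.py --global_config global_configs.json --anat_only
```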
6 changes: 3 additions & 3 deletions docs/nipoppy/workflow/proc_pipe/mriqc.md
@@ -8,7 +8,7 @@ MRIQC processes the participants and produces image quality metrics from T1w, T2


### [MRIQC](https://mriqc.readthedocs.io/en/latest/)
-- Use [run_mriqc.py](https://github.com/neurodatascience/nipoppy/tree/main/workflow/proc_pipe/mriqc) to run MRIQC pipeline directly or wrap the script in an SGE/Slurm script to run on cluster
+- Use [run_mriqc.py](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/workflow/proc_pipe/mriqc/run_mriqc.py) to run MRIQC pipeline directly or wrap the script in an SGE/Slurm script to run on cluster

```bash
python run_mriqc.py --global_config CONFIG.JSON --subject_id 001 --output_dir OUTPUT_DIR_PATH
```
@@ -20,7 +20,7 @@ python run_mriqc.py --global_config CONFIG.JSON --subject_id 001 --output_dir OU
- Mandatory: Pass in the absolute path to the output directory via `--output_dir`

!!! note
-    An example config is located [here](https://github.com/neurodatascience/nipoppy/blob/main/sample_global_configs.json)
+    An example config is located [here](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/sample_global_configs.json)

> Sample cmd:
@@ -35,7 +35,7 @@ python run_mriqc.py \
A run for a participant is considered successful when the participant's log file reads `Participant level finished successfully`
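To run on a cluster, as suggested in the MRIQC bullet above, the documented command can be wrapped in a scheduler script. A minimal Slurm sketch (the resource values are placeholders):

```bash
#!/bin/bash
#SBATCH --job-name=mriqc_sub-001
#SBATCH --time=12:00:00
#SBATCH --mem=16G
#SBATCH --cpus-per-task=4

# Wrap the documented run_mriqc.py call for a single subject
python run_mriqc.py \
    --global_config CONFIG.JSON \
    --subject_id 001 \
    --output_dir OUTPUT_DIR_PATH
```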

### Evaluate MRIQC Results
-- Use [mriqc_tracker.py](https://github.com/neurodatascience/nipoppy/blob/main/trackers/mriqc_tracker.py) to determine how many subjects successfully passed through the MRIQC pipeline
+- Use [mriqc_tracker.py](https://github.com/neurodatascience/nipoppy/blob/main/nipoppy/trackers/mriqc_tracker.py) to determine how many subjects successfully passed through the MRIQC pipeline
- Mandatory: Pass in the subject directory as an argument
- After a successful run of the script, a dictionary called `tracker_configs` is returned, indicating whether the subject passed through the pipeline successfully

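If `mriqc_tracker.py` can be invoked standalone, usage might look like the line below; passing the subject directory as an argument is documented above, but the exact call (and whether the module is meant to be imported rather than executed) is an assumption:

```bash
# Hypothetical: report MRIQC completion status for one subject's output directory
python nipoppy/trackers/mriqc_tracker.py "$DATASET_ROOT/derivatives/mriqc/sub-001"
```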
11 changes: 11 additions & 0 deletions md_link_check_config.json
@@ -0,0 +1,11 @@
{
  "ignorePatterns": [
    {"pattern": "http://neurobagel.org/vocab/*"},
    {"pattern": "http://neurobagel.org/graph/"},
    {"pattern": "https://www.cognitiveatlas.org/task/id/"},
    {"pattern": "^../"},
    {"pattern": "localhost*"},
    {"pattern": "https://api.neurobagel.org/*"}
  ],
  "timeout": "60s"
}
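The CI check can be reproduced locally by feeding this same config to the `markdown-link-check` CLI. A sketch assuming Node.js is available; the `-q`/`-v` flags mirror the quiet and verbose options set in the workflow:

```bash
# Check a single docs file against the repo's ignore patterns
npx markdown-link-check -q -v -c md_link_check_config.json docs/nipoppy/configs.md

# Or sweep every markdown file under docs/
find docs -name '*.md' -print0 \
  | xargs -0 -n 1 npx markdown-link-check -q -v -c md_link_check_config.json
```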