
Commit 4cc00f4

update README
1 parent 6759280 commit 4cc00f4

File tree

1 file changed: +9 / -8 lines

README.md

Lines changed: 9 additions & 8 deletions
@@ -8,11 +8,11 @@ To install, simply run `pip install .` in the top-level repository directory.
 ## Running the examples
 This repository contains several example usage scripts.
 
-There are two example scripts using the HARDI dataset, `run_dipy_cpu_hardi.py` and `run_dipy_gpu_hardi.py`, which run on CPU and GPU respectively.
+The script `run_gpu_streamlines.py` demonstrates how to run any diffusion MRI dataset on the GPU. It can also run on the CPU for reference if the argument `--device=cpu` is used. If no data is passed, it will download and use the HARDI dataset.
 
 To run the baseline CPU example on a random set of 1000 seeds, this is the command and example output:
 ```
-$ python run_dipy_cpu_hardi.py --chunk-size 100000 --output-prefix small --nseeds 1000
+$ python run_gpu_streamlines.py --device=cpu --output-prefix small --nseeds 1000
 parsing arguments
 Fitting Tensor
 Computing anisotropy measures (FA,MD,RGB)
@@ -30,7 +30,7 @@ Streamline generation total time: 6.9404990673065186 sec
 
 To run the same case on a single GPU, this is the command and example output:
 ```
-$ python run_dipy_gpu_hardi.py --chunk-size 100000 --output-prefix small --nseeds 1000 --ngpus 1
+$ python run_gpu_streamlines.py --output-prefix small --nseeds 1000 --ngpus 1
 parsing arguments
 Fitting Tensor
 Computing anisotropy measures (FA,MD,RGB)
@@ -48,11 +48,13 @@ Streamline generation total time: 0.3834989070892334 sec
 Destroy GPUTracker...
 ```
 
+Note that if you experience memory errors, you can adjust the `--chunk-size` flag.
+
 To run on more seeds, we suggest enabling the `--use-fast-write` flag in the GPU script to not get bottlenecked by writing files. Here is a comparison running on 500K seeds on 1 GPU with and without this flag:
 
 Without `--use-fast-write`:
 ```
-$ python run_dipy_gpu_hardi.py --chunk-size 100000 --output-prefix small --nseeds 500000 --ngpus 1
+$ python run_gpu_streamlines.py --output-prefix small --nseeds 500000 --ngpus 1
 parsing arguments
 Fitting Tensor
 Computing anisotropy measures (FA,MD,RGB)
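
The note added in this hunk only names the `--chunk-size` flag. A minimal sketch of lowering it looks like the command below; the value 50000 is purely illustrative (the commands removed in this commit used 100000), and the other flags are the same ones shown elsewhere in this README:
```
# Illustrative only: lower --chunk-size to fit your GPU memory.
$ python run_gpu_streamlines.py --chunk-size 50000 --output-prefix small --nseeds 500000 --ngpus 1
```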
@@ -80,7 +82,7 @@ Destroy GPUTracker...
 
 With `--use-fast-write`:
 ```
-$ python run_dipy_gpu_hardi.py --chunk-size 100000 --output-prefix small --nseeds 500000 --ngpus 1 --use-fast-write
+$ python run_gpu_streamlines.py --output-prefix small --nseeds 500000 --ngpus 1 --use-fast-write
 parsing arguments
 Fitting Tensor
 Computing anisotropy measures (FA,MD,RGB)
@@ -120,11 +122,10 @@ $ docker pull docker.pkg.github.com/dipy/gpustreamlines/gpustreamlines:latest
 4. Run the code, mounting the current directory into the container for easy result retrieval:
 ```
 $ docker run --gpus=all -v ${PWD}:/opt/exec/output:rw -it docker.pkg.github.com/dipy/gpustreamlines/gpustreamlines:latest \
-python run_dipy_gpu_hardi.py --chunk-size 100000 --ngpus 1 --output-prefix output/hardi_gpu_full --use-fast-write
+python run_gpu_streamlines.py --ngpus 1 --output-prefix output/hardi_gpu_full --use-fast-write
 ```
 5. The code produces a number of independent track files (one per processed "chunk"), but we have provided a merge script to combine them into a single trk file. To merge files, run:
 ```
 $ docker run --gpus=all -v ${PWD}:/opt/exec/output:rw -it docker.pkg.github.com/dipy/gpustreamlines/gpustreamlines:latest \
 ./merge_trk.sh -o output/hardi_tracks.trk output/hardi_gpu_full*
-```
-
+```
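
Taken together, the container workflow after this commit is the two commands below, copied from the hunk above with only the line wrapping adjusted:
```
# Step 4: generate streamlines inside the container (output/ is the mounted host directory).
$ docker run --gpus=all -v ${PWD}:/opt/exec/output:rw -it \
    docker.pkg.github.com/dipy/gpustreamlines/gpustreamlines:latest \
    python run_gpu_streamlines.py --ngpus 1 --output-prefix output/hardi_gpu_full --use-fast-write

# Step 5: merge the per-chunk track files into a single trk file.
$ docker run --gpus=all -v ${PWD}:/opt/exec/output:rw -it \
    docker.pkg.github.com/dipy/gpustreamlines/gpustreamlines:latest \
    ./merge_trk.sh -o output/hardi_tracks.trk output/hardi_gpu_full*
```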
