 pip install .  # add -e for an editable installation
 ```
@@ -76,7 +76,7 @@ Then,
 ```
 git clone https://github.com/pytorch/benchmark
 cd benchmark
-python install.py
+python3 install.py
 ```
 
 ### Notes
@@ -111,11 +111,11 @@ There are multiple ways for running the model benchmarks.
 In each model repo, the assumption is that the user already has the torch family of packages installed (torch, torchvision, torchaudio, ...); the install step then installs the rest of the dependencies for the model.
 
 ### Using `test.py`
-`python test.py` will execute the APIs for each model, as a sanity check. For benchmarking, use `test_bench.py`. It is based on unittest, and supports filtering via CLI.
+`python3 test.py` will execute the APIs for each model, as a sanity check. For benchmarking, use `test_bench.py`. It is based on unittest, and supports filtering via CLI.
 
 For instance, to run the BERT model on CPU for the train execution mode:
 ```
-python test.py -k "test_BERT_pytorch_train_cpu"
+python3 test.py -k "test_BERT_pytorch_train_cpu"
 ```
 
 Test names follow this pattern:
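The `-k` filter above matches against generated test names, which combine the model name, execution mode, and device. A minimal sketch of how such names can be composed (a hypothetical helper, not TorchBench's actual generator):

```python
# Hypothetical sketch: compose a test name like "test_BERT_pytorch_train_cpu"
# from its parts. TorchBench's real name generation may differ.
def make_test_name(model: str, mode: str, device: str) -> str:
    return f"test_{model}_{mode}_{device}"

if __name__ == "__main__":
    print(make_test_name("BERT_pytorch", "train", "cpu"))
    # → test_BERT_pytorch_train_cpu
```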
@@ -147,7 +147,7 @@ The `userbenchmark` allows you to develop your customized benchmarks with TorchB
 Sometimes you may want to just run train or eval on a particular model, e.g. for debugging or profiling. Rather than relying on `__main__` implementations inside each model, `run.py` provides a lightweight CLI for this purpose, building on top of the standard BenchmarkModel API.
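The shape of such a lightweight CLI can be sketched with `argparse`; the flag names and choices below are illustrative assumptions, not necessarily the real `run.py` interface:

```python
import argparse

# Hypothetical sketch of a run.py-style CLI for running a single model's
# train or eval step; the actual TorchBench flags may differ.
def parse_args(argv):
    parser = argparse.ArgumentParser(
        description="Run train or eval for one benchmark model."
    )
    parser.add_argument("model", help="model name, e.g. BERT_pytorch")
    parser.add_argument("-d", "--device", choices=["cpu", "cuda"], default="cpu")
    parser.add_argument("-t", "--test", choices=["train", "eval"], default="eval")
    return parser.parse_args(argv)

if __name__ == "__main__":
    args = parse_args(["BERT_pytorch", "-d", "cpu", "-t", "train"])
    print(args.model, args.device, args.test)
```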