@@ -8,24 +8,36 @@ good fit for PyTorch/XLA as well.

## Bazel dependencies

- Tensorflow is a [bazel external dependency](https://bazel.build/external/overview) for PyTorch/XLA,
+ OpenXLA is a [bazel external dependency](https://bazel.build/external/overview) for PyTorch/XLA,
which can be seen in the `WORKSPACE` file:

`WORKSPACE`

``` python
http_archive(
-    name = "org_tensorflow",
-    strip_prefix = "tensorflow-f7759359f8420d3ca7b9fd19493f2a01bd47b4ef",
+    name = "xla",
+    patch_args = [
+        "-l",
+        "-p1",
+    ],
+    patch_tool = "patch",
+    patches = [
+        "//openxla_patches:gpu_nvml.diff",
+        "//openxla_patches:gpu_race_condition.diff",
+        "//openxla_patches:count_down.diff",
+    ],
+    strip_prefix = "xla-" + xla_hash,
    urls = [
-        "https://github.com/tensorflow/tensorflow/archive/f7759359f8420d3ca7b9fd19493f2a01bd47b4ef.tar.gz",
+        "https://github.com/openxla/xla/archive/" + xla_hash + ".tar.gz",
    ],
)
```

- TensorFlow pin can be updated by pointing this repository to a different
- revision. Patches may be added as needed. Bazel will resolve the
- dependency, prepare the code and patch it hermetically.
+ You can specify the OpenXLA revision to use in the `urls` field of the
+ `WORKSPACE` file. PyTorch/XLA always builds against a deterministic
+ OpenXLA commit (`xla_hash`), known as the "OpenXLA pin". Patches may be
+ added as needed. Bazel will resolve the dependency, prepare the code, and
+ patch it hermetically.

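For context, here is a minimal sketch of what such a pin can look like in a `WORKSPACE` file. It is an illustration only, assuming the pin is a plain commit-hash variable consumed by `http_archive`; the hash value and the exact layout are placeholders, not the real PyTorch/XLA pin.

```python
# Illustrative sketch of an "OpenXLA pin" in a Bazel WORKSPACE file.
# The commit hash below is a placeholder, not the real pin.
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

# Updating the pin means changing this hash; Bazel then re-fetches and
# re-patches the archive hermetically on the next build.
xla_hash = "0123456789abcdef0123456789abcdef01234567"

http_archive(
    name = "xla",
    strip_prefix = "xla-" + xla_hash,
    urls = ["https://github.com/openxla/xla/archive/" + xla_hash + ".tar.gz"],
)
```
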
For PyTorch, a different dependency mechanism is deployed because a
local [PyTorch](https://github.com/pytorch/pytorch) checkout is used,