You can run a simple sanity test to confirm that the correct version is installed and that the software stack can detect the hardware on your system.
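Such a sanity test can be sketched as follows. This is an illustrative sketch, not the official test script: it assumes the package imports as `intel_extension_for_pytorch`, that the Intel GPU backend is exposed as `torch.xpu`, and `sanity_check` is a hypothetical helper name.

```python
def sanity_check() -> str:
    """Report installed versions and visible XPU devices, or the import error."""
    # Assumption: the extension imports as `intel_extension_for_pytorch`
    # and the Intel GPU backend is exposed as `torch.xpu`.
    try:
        import torch
        import intel_extension_for_pytorch as ipex
    except ImportError as exc:
        return f"import failed: {exc}"
    lines = [f"torch {torch.__version__}", f"ipex {ipex.__version__}"]
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        for i in range(torch.xpu.device_count()):
            lines.append(f"[{i}]: {torch.xpu.get_device_properties(i)}")
    else:
        lines.append("no XPU device visible")
    return "\n".join(lines)

print(sanity_check())
```

If the import fails or no XPU device is listed, the installation or the driver stack needs attention before proceeding.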
Please refer to [Install oneAPI Base Toolkit Packages](https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html#base-kit).
The following components of Intel® oneAPI Base Toolkit need to be installed:
- Intel® oneAPI DPC++ Compiler (`DPCPP_ROOT` as its installation path)
- Intel® oneAPI Math Kernel Library (oneMKL) (`MKL_ROOT` as its installation path)
Default installation location *{ONEAPI_ROOT}* is `/opt/intel/oneapi` for the root account and `${HOME}/intel/oneapi` for other accounts. Generally, `DPCPP_ROOT` is `{ONEAPI_ROOT}/compiler/latest` and `MKL_ROOT` is `{ONEAPI_ROOT}/mkl/latest`.
**_NOTE:_** You need to activate the oneAPI environment when using Intel® Extension for PyTorch\* on Intel GPU.
```bash
source {ONEAPI_ROOT}/setvars.sh
```
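Whether the environment has been activated in the current shell can be checked programmatically; a minimal sketch, assuming `setvars.sh` exports `ONEAPI_ROOT` and puts the DPC++ compiler driver `icpx` on `PATH` (`oneapi_active` is a hypothetical helper name):

```python
import os
import shutil

def oneapi_active() -> bool:
    # Assumption: setvars.sh exports ONEAPI_ROOT and prepends the compiler's
    # bin directory to PATH; icpx is the DPC++ compiler driver.
    return bool(os.environ.get("ONEAPI_ROOT")) and shutil.which("icpx") is not None

print(oneapi_active())
```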
**_NOTE:_** You need to activate ONLY the DPC++ compiler and oneMKL environments when compiling Intel® Extension for PyTorch\* from source on Intel GPU.
```bash
source {DPCPP_ROOT}/env/vars.sh
source {MKL_ROOT}/env/vars.sh
```
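A quick check that the two component environments are active can look like the following sketch. It assumes the compiler's `env/vars.sh` exports `CMPLR_ROOT` and oneMKL's `env/vars.sh` exports `MKLROOT`, which matches typical oneAPI layouts but should be verified against your installed version; `component_envs_active` is a hypothetical helper name.

```python
import os

def component_envs_active() -> dict:
    # Assumption: the DPC++ compiler's env/vars.sh exports CMPLR_ROOT and
    # oneMKL's env/vars.sh exports MKLROOT.
    return {name: bool(os.environ.get(name)) for name in ("CMPLR_ROOT", "MKLROOT")}

print(component_envs_active())
```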
## PyTorch-Intel® Extension for PyTorch\* Version Mapping
Intel® Extension for PyTorch\* has to work with a corresponding version of PyTorch. Here are the PyTorch versions that we support and the mapping relationship:
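A pairing can be checked mechanically. The sketch below assumes that matching releases share the same major.minor version prefix (e.g. PyTorch 1.13.x pairs with 1.13.y+xpu); that convention is an assumption for illustration, and `versions_compatible` is a hypothetical helper, not part of either package.

```python
def versions_compatible(torch_version: str, ipex_version: str) -> bool:
    # Assumption: matching releases share the major.minor prefix,
    # e.g. torch 1.13.0 pairs with ipex 1.13.10+xpu.
    def major_minor(version: str) -> list:
        return version.split("+")[0].split(".")[:2]
    return major_minor(torch_version) == major_minor(ipex_version)

print(versions_compatible("1.13.0", "1.13.10+xpu"))  # True
print(versions_compatible("1.12.1", "1.13.10+xpu"))  # False
```

Always consult the mapping table for the authoritative pairing; the prefix rule is only a heuristic.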
---
### Repositories for prebuilt wheel files
Prebuilt wheel files for generic Python\* and Intel® Distribution for Python\* are released in separate repositories.
**Note:** Wheel files for Intel® Distribution for Python\* only support Python 3.9. The support starts from 1.13.10+xpu.
**Note:** Please install NumPy 1.22.3 under Intel® Distribution for Python\*.
**Note:** Installation of TorchVision is optional.
**Note:** Since the DPC++ compiler doesn't support the old [C++ ABI](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html) (`_GLIBCXX_USE_CXX11_ABI=0`), ecosystem packages, including PyTorch and TorchVision, need to be compiled with the new C++ ABI (`_GLIBCXX_USE_CXX11_ABI=1`).
**Note:** If you need TorchAudio, please follow the [instructions](https://github.com/pytorch/audio/tree/v0.13.0#from-source) to compile it from source. According to the torchaudio-pytorch dependency table, torchaudio 0.13.0 is recommended.
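Regarding the C++ ABI note above, an installed PyTorch build reports its ABI via `torch.compiled_with_cxx11_abi()`; a sketch that degrades gracefully when torch is absent (`check_cxx11_abi` is a hypothetical helper name):

```python
from typing import Optional

def check_cxx11_abi() -> Optional[bool]:
    # True means the build uses _GLIBCXX_USE_CXX11_ABI=1 (required here);
    # None means torch is not installed in this environment.
    try:
        import torch
    except ImportError:
        return None
    return torch.compiled_with_cxx11_abi()

print(check_cxx11_abi())
```

A result of `False` indicates a PyTorch build that cannot link against the extension.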
Please refer to [AOT documentation](./AOT.md) for how to configure `USE_AOT_DEVLIST`. Without configuring AOT, the start-up time for processes using Intel® Extension for PyTorch\* will be long, so this step is important.
### Install Intel® Extension for PyTorch\*:
```bash
$ cd intel-extension-for-pytorch
$ git submodule sync
$ git submodule update --init --recursive
$ pip install -r requirements.txt
$ source {DPCPP_ROOT}/env/vars.sh
$ source {MKL_ROOT}/env/vars.sh
$ export USE_AOT_DEVLIST="..."  # Set values accordingly
```
DPC++ does not support `_GLIBCXX_USE_CXX11_ABI=0`, so Intel® Extension for PyTorch\* is always compiled with `_GLIBCXX_USE_CXX11_ABI=1`. This undefined-symbol issue appears when PyTorch\* is compiled with `_GLIBCXX_USE_CXX11_ABI=0`. Pass `export GLIBCXX_USE_CXX11_ABI=1` and compile PyTorch\* with a compiler that supports `_GLIBCXX_USE_CXX11_ABI=1`. We recommend using prebuilt wheels from the [download server](https://developer.intel.com/ipex-whl-stable-xpu) to avoid this issue.
- Can't find oneMKL library when building Intel® Extension for PyTorch\* without oneMKL