
Commit 7bccc3f

jingxu10 and tye1 authored

docs bug fix (#2175)

* docs bug fix

Co-authored-by: Ye Ting <[email protected]>

1 parent 2798b43 commit 7bccc3f

3 files changed: +28 −32 lines

docs/tutorials/getting_started.md (+1 −1)

````diff
@@ -5,7 +5,7 @@
 Prebuilt wheel files are released for multiple Python versions. You can install them simply with the following pip command.
 
 ```bash
-python -m pip install torch==1.13.0a0 torchvision==0.14.0 intel_extension_for_pytorch==1.13.10+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
+python -m pip install torch==1.13.0a0 torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.10+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
 ```
 
 You can run a simple sanity test to double confirm if the correct version is installed, and if the software stack can get correct hardware information onboard your system.
````
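The sanity test the tutorial refers to can be sketched as below. This is a hypothetical helper, not the tutorial's own script: it assumes the wheels above may or may not be installed, so the imports are guarded, and the `torch.xpu` device queries are only attempted when that namespace exists (the extension populates it on XPU builds).

```python
# Hypothetical sanity-check sketch; assumes an optional torch/ipex install.
def sanity_check():
    """Return version info if the stack is importable, else the import error."""
    try:
        import torch
        import intel_extension_for_pytorch as ipex

        info = {"torch": torch.__version__, "ipex": ipex.__version__}
        # torch.xpu is populated by the extension on XPU builds; guard it.
        if hasattr(torch, "xpu"):
            info["devices"] = [torch.xpu.get_device_name(i)
                               for i in range(torch.xpu.device_count())]
        return info
    except ImportError as exc:
        return {"error": str(exc)}

print(sanity_check())
```

If the correct versions are installed, the printed dict should show the `1.13.0a0` / `1.13.10+xpu` pair from the command above, plus any XPU devices the stack can see.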

docs/tutorials/installation.md (+26 −30)
````diff
@@ -40,17 +40,24 @@ Verified Hardware Platforms:
 Please refer to [Install oneAPI Base Toolkit Packages](https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html#base-kit).
 
 Need to install components of Intel® oneAPI Base Toolkit:
-- Intel® oneAPI DPC++ Compiler
-- Intel® oneAPI Math Kernel Library (oneMKL)
+- Intel® oneAPI DPC++ Compiler (`DPCPP_ROOT` as its installation path)
+- Intel® oneAPI Math Kernel Library (oneMKL) (`MKL_ROOT` as its installation path)
 
-Default installation location *{ONEAPI_ROOT}* is `/opt/intel/oneapi` for root account, `${HOME}/intel/oneapi` for other accounts.
+Default installation location *{ONEAPI_ROOT}* is `/opt/intel/oneapi` for root account, `${HOME}/intel/oneapi` for other accounts. Generally, `DPCPP_ROOT` is `{ONEAPI_ROOT}/compiler/latest`, `MKL_ROOT` is `{ONEAPI_ROOT}/mkl/latest`.
 
 **_NOTE:_** You need to activate oneAPI environment when using Intel® Extension for PyTorch\* on Intel GPU.
 
 ```bash
 source {ONEAPI_ROOT}/setvars.sh
 ```
 
+**_NOTE:_** You need to activate ONLY DPC++ compiler and oneMKL environment when compiling Intel® Extension for PyTorch\* from source on Intel GPU.
+
+```bash
+source {DPCPP_ROOT}/env/vars.sh
+source {MKL_ROOT}/env/vars.sh
+```
+
 ## PyTorch-Intel® Extension for PyTorch\* Version Mapping
 
 Intel® Extension for PyTorch\* has to work with a corresponding version of PyTorch. Here are the PyTorch versions that we support and the mapping relationship:
````
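The default component paths the hunk above introduces can be derived mechanically from `{ONEAPI_ROOT}`. The following sketch assumes the standard oneAPI directory layout described in the diff (`compiler/latest`, `mkl/latest`); adjust it for a non-default installation.

```python
# Sketch: derive DPCPP_ROOT and MKL_ROOT from ONEAPI_ROOT, per the note above.
# The /opt/intel/oneapi fallback is the documented root-account default.
import os

oneapi_root = os.environ.get("ONEAPI_ROOT", "/opt/intel/oneapi")
dpcpp_root = os.path.join(oneapi_root, "compiler", "latest")
mkl_root = os.path.join(oneapi_root, "mkl", "latest")

# When compiling from source, only these two components need activating:
for script in (os.path.join(dpcpp_root, "env", "vars.sh"),
               os.path.join(mkl_root, "env", "vars.sh")):
    print("source", script)
```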
````diff
@@ -69,41 +76,27 @@ Prebuilt wheel files availability matrix for Python versions:
 | 1.13.10+xpu | | ✔️ | ✔️ | ✔️ | ✔️ |
 | 1.10.200+gpu | ✔️ | ✔️ | ✔️ | ✔️ | |
 
-**Note:** Wheel files for Intel® Distribution for Python\* only supports Python 3.9.
-
-**Note:** Wheel files supporting Intel® Distribution for Python\* starts from 1.13.
+---
 
-### Repositories for prebuilt wheel files
+Prebuilt wheel files for generic Python\* and Intel® Distribution for Python\* are released in separate repositories.
 
-Prebuilt wheel files for generic Python\* and Intel® Distribution for Python\* are released in separate repositories. Replace the place holder `<REPO_URL>` in installation commands with a real URL below.
-
-```
-# Generic Python
-REPO_URL: https://developer.intel.com/ipex-whl-stable-xpu
+```bash
+# General Python*
+python -m pip install torch==1.13.0a0 torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.10+xpu -f https://developer.intel.com/ipex-whl-stable-xpu
 
 # Intel® Distribution for Python*
-REPO_URL: https://developer.intel.com/ipex-whl-stable-xpu-idp
+python -m pip install torch==1.13.0a0 torchvision==0.14.1a0 intel_extension_for_pytorch==1.13.10+xpu -f https://developer.intel.com/ipex-whl-stable-xpu-idp
 ```
 
-### Install PyTorch and TorchVision
-
-```bash
-python -m pip install torch==1.13.0a0 torchvision==0.14.1a0 -f <REPO_URL>
-```
-
-**Note:** Installation of TorchVision is optional.
+**Note:** Wheel files for Intel® Distribution for Python\* only supports Python 3.9. The support starts from 1.13.10+xpu.
 
 **Note:** Please install Numpy 1.22.3 under Intel® Distribution for Python\*.
 
-### Install torchaudio (Optional)
-
-Intel® Extension for PyTorch\* doesn't depend on torchaudio. If you need TorchAudio, please follow the [instructions](https://github.com/pytorch/audio/tree/v0.13.0#from-source) to compile it from source. According to torchaudio-pytorch dependency table, torchaudio 0.13.0 is recommended.
+**Note:** Installation of TorchVision is optional.
 
-### Install Intel® Extension for PyTorch\*
+**Note:** Since DPC++ compiler doesn't support old [C++ ABI](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html) (`_GLIBCXX_USE_CXX11_ABI=0`), ecosystem packages, including PyTorch and TorchVision, need to be compiled with the new C++ ABI (`_GLIBCXX_USE_CXX11_ABI=1`).
 
-```bash
-python -m pip install intel_extension_for_pytorch==1.13.10+xpu -f <REPO_URL>
-```
+**Note:** If you need TorchAudio, please follow the [instructions](https://github.com/pytorch/audio/tree/v0.13.0#from-source) to compile it from source. According to torchaudio-pytorch dependency table, torchaudio 0.13.0 is recommended.
 
 ## Install via compiling from source
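The two wheel repositories (generic Python\* vs. Intel® Distribution for Python\*) can be wrapped in a small chooser. This is a hypothetical helper, not part of the documented install flow: the URLs come from the diff, while the `sys.version` detection heuristic is an assumption.

```python
# Hypothetical helper: pick the `pip install -f` repository URL.
# URLs are from the installation docs; the detection heuristic is a guess.
import sys

GENERIC_REPO = "https://developer.intel.com/ipex-whl-stable-xpu"
IDP_REPO = "https://developer.intel.com/ipex-whl-stable-xpu-idp"

def wheel_repo(intel_python: bool) -> str:
    """Return the URL to pass to `pip install -f` for the 1.13.10+xpu wheels."""
    return IDP_REPO if intel_python else GENERIC_REPO

# Intel® Distribution for Python* typically identifies itself in sys.version.
print(wheel_repo("Intel" in sys.version))
```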

````diff
@@ -126,15 +119,16 @@ $ cd pytorch
 $ git apply ${intel_extension_for_pytorch_directory}/torch_patches/*.patch
 $ git submodule sync
 $ git submodule update --init --recursive
+$ conda install numpy ninja cmake
 $ pip install -r requirements.txt
-$ source {ONEAPI_ROOT}/setvars.sh
+$ export GLIBCXX_USE_CXX11_ABI=1
 $ python setup.py bdist_wheel
 $ pip install dist/*.whl
 ```
 
 ### Configure the AOT (Optional)
 
-Please refer to [AOT documentation](./AOT.md) for how to configure `USE_AOT_DEVLIST`. Without configuring AOT, the start-up time for processes using Intel® Extension for PyTorch\* will be high, so this step is important.
+Please refer to [AOT documentation](./AOT.md) for how to configure `USE_AOT_DEVLIST`. Without configuring AOT, the start-up time for processes using Intel® Extension for PyTorch\* will be long, so this step is important.
 
 ### Install Intel® Extension for PyTorch\*:
 
````
````diff
@@ -143,7 +137,9 @@ $ cd intel-extension-for-pytorch
 $ git submodule sync
 $ git submodule update --init --recursive
 $ pip install -r requirements.txt
-$ source {ONEAPI_ROOT}/setvars.sh # If you have sourced the oneAPI environment when compiling PyTorch, please skip this step.
+$ source {DPCPP_ROOT}/env/vars.sh
+$ source {MKL_ROOT}/env/vars.sh
+$ export USE_AOT_DEVLIST="..." # Set values accordingly
 $ python setup.py bdist_wheel
 $ pip install dist/*.whl
 ```
````
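The version-mapping constraint stated earlier (Intel® Extension for PyTorch\* must pair with a matching PyTorch build) can be captured as a lookup. This sketch is hypothetical and partial: it covers only the `1.13.10+xpu` pairing that this commit's install commands confirm; the full table lives in installation.md.

```python
# Sketch of the PyTorch / IPEX version mapping; partial, covering only the
# release documented on this page (1.13.10+xpu -> torch 1.13.0a0).
VERSION_MAP = {
    "1.13.10+xpu": "1.13.0a0",  # paired with torchvision 0.14.1a0 in the docs
}

def required_torch(ipex_version: str) -> str:
    """Return the PyTorch version the given IPEX release is built against."""
    try:
        return VERSION_MAP[ipex_version]
    except KeyError:
        raise ValueError(f"unknown IPEX release: {ipex_version}") from None

print(required_torch("1.13.10+xpu"))
```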

docs/tutorials/performance_tuning/known_issues.md (+1 −1)

````diff
@@ -13,7 +13,7 @@ Known Issues
 ImportError: undefined symbol: _ZNK5torch8autograd4Node4nameB5cxx11Ev
 ```
 
-DPC++ does not support `_GLIBCXX_USE_CXX11_ABI=0`, Intel® Extension for PyTorch\* is always compiled with `_GLIBCXX_USE_CXX11_ABI=1`. This symbol undefined issue appears when PyTorch\* is compiled with `_GLIBCXX_USE_CXX11_ABI=0`. Update PyTorch\* CMAKE file to set `_GLIBCXX_USE_CXX11_ABI=1` and compile PyTorch\* with particular compiler which supports `_GLIBCXX_USE_CXX11_ABI=1`. We recommend using prebuilt wheels in [download server](https://developer.intel.com/ipex-whl-stable-xpu) to avoid this issue.
+DPC++ does not support `_GLIBCXX_USE_CXX11_ABI=0`, Intel® Extension for PyTorch\* is always compiled with `_GLIBCXX_USE_CXX11_ABI=1`. This symbol undefined issue appears when PyTorch\* is compiled with `_GLIBCXX_USE_CXX11_ABI=0`. Pass `export GLIBCXX_USE_CXX11_ABI=1` and compile PyTorch\* with particular compiler which supports `_GLIBCXX_USE_CXX11_ABI=1`. We recommend using prebuilt wheels in [download server](https://developer.intel.com/ipex-whl-stable-xpu) to avoid this issue.
 
 - Can't find oneMKL library when build Intel® Extension for PyTorch\* without oneMKL
 
````
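Before compiling the extension against an existing PyTorch, the ABI mismatch described in this known issue can be checked up front. The sketch below uses `torch._C._GLIBCXX_USE_CXX11_ABI`, a real (if private) PyTorch attribute reporting the ABI the build used; the import is guarded since torch may not be installed.

```python
# Sketch: check which C++ ABI an installed PyTorch was built with.
# A False result predicts the undefined-symbol ImportError described above.
try:
    import torch

    abi = bool(torch._C._GLIBCXX_USE_CXX11_ABI)
    print("built with _GLIBCXX_USE_CXX11_ABI=1:", abi)
except ImportError:
    abi = None
    print("torch not installed; cannot check ABI")
```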
