Add "build from docker" section (#602)
* add build from docker section

* update

* install python package

* update

* update

* update
lvhan028 authored Oct 25, 2023
1 parent 96f1b8e commit 7283781
Showing 3 changed files with 124 additions and 20 deletions.
2 changes: 1 addition & 1 deletion builder/manywheel/entrypoint_build.sh
@@ -11,7 +11,7 @@ source /opt/conda/bin/activate
conda activate $PYTHON_VERSION

cd lmdeploy
mkdir -p build && cd build && rm -rf *
bash ../generate.sh
make -j$(nproc) && make install
if [ $? != 0 ]; then
74 changes: 63 additions & 11 deletions docs/en/build.md
@@ -1,27 +1,79 @@
# Build from source

LMDeploy provides prebuilt packages that can be easily installed with `pip install lmdeploy`.

If you want to build lmdeploy from source, clone the lmdeploy repository from GitHub and follow the instructions in the next sections:

```shell
git clone --depth=1 https://github.com/InternLM/lmdeploy
```

## Build in Docker (recommended)

We strongly recommend building lmdeploy with the provided docker image to avoid complicated environment setup.

The docker image is `openmmlab/lmdeploy-builder:cuda11.8`. Make sure that docker is installed before using this image.
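As a quick sanity check before running the build script, you can verify that the docker CLI is available (the image tag mentioned above is the one named in these docs; the snippet only reports, it does not install anything):

```shell
# Check that the docker CLI exists; pulling openmmlab/lmdeploy-builder:cuda11.8
# will fail early otherwise.
if command -v docker >/dev/null 2>&1; then
  DOCKER_STATUS=present
  docker --version
else
  DOCKER_STATUS=missing
  echo "docker not found; install it before building"
fi
echo "docker: ${DOCKER_STATUS}"
```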

In the root directory of the lmdeploy source code, please run the following command:

```shell
cd lmdeploy # the home folder of lmdeploy source code
bash builder/manywheel/build_all_wheel.sh
```

All the lmdeploy wheel files for py3.8 - py3.11 will be generated in the `builder/manywheel/cuda11.8_dist` directory, for example:

```text
builder/manywheel/cuda11.8_dist/
├── lmdeploy-0.0.12-cp310-cp310-manylinux2014_x86_64.whl
├── lmdeploy-0.0.12-cp311-cp311-manylinux2014_x86_64.whl
├── lmdeploy-0.0.12-cp38-cp38-manylinux2014_x86_64.whl
└── lmdeploy-0.0.12-cp39-cp39-manylinux2014_x86_64.whl
```

If the wheel file for a specific Python version is required, such as py3.8, please execute:

```shell
bash builder/manywheel/build_wheel.sh py38 manylinux2014_x86_64 cuda11.8 cuda11.8_dist
```

And the wheel file will be found in the `builder/manywheel/cuda11.8_dist` directory.

You can use `pip install` to install the wheel file that matches the Python version on your host machine.
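For example, here is a small sketch that works out the host interpreter's `cpXY` tag so the matching wheel can be picked (the `cuda11.8_dist` path is the default output directory from above; the exact wheel name depends on your build):

```shell
# Derive the cpXY tag of the current interpreter, e.g. cp310 for Python 3.10.
PY_TAG="cp$(python3 -c 'import sys; print("%d%d" % sys.version_info[:2])')"
echo "host python tag: ${PY_TAG}"
# The matching wheel can then be installed with, for example:
#   pip install builder/manywheel/cuda11.8_dist/lmdeploy-*-${PY_TAG}-*.whl
```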

## Build on the local machine (optional)

First, please make sure the gcc version is no less than 9, which can be confirmed by `gcc --version`.

Then, follow the steps below to set up the compilation environment:

- install the dependent packages:
```shell
pip install -r requirements.txt
apt-get install rapidjson-dev
```
- install [nccl](https://docs.nvidia.com/deeplearning/nccl/install-guide/index.html), and set environment variables:
```shell
export NCCL_ROOT_DIR=/path/to/nccl/build
export NCCL_LIBRARIES=/path/to/nccl/build/lib
```
- install openmpi from source:
```shell
wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.5.tar.gz
tar xf openmpi-4.1.5.tar.gz
cd openmpi-4.1.5
./configure
make -j$(nproc) && make install
```
- build and install lmdeploy libraries:
```shell
cd lmdeploy # the home folder of lmdeploy
mkdir build && cd build
sh ../generate.sh
make -j$(nproc) && make install
```
- install lmdeploy python package:
```shell
cd ..
pip install -e .
```
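After the steps above, a minimal check that the editable install is importable can look like this (a sketch that only reports the result; it assumes nothing beyond a working `python3`):

```shell
# Try importing the freshly installed package and report the outcome.
IMPORT_RESULT=$(python3 -c 'import lmdeploy' >/dev/null 2>&1 && echo OK || echo MISSING)
echo "lmdeploy import: ${IMPORT_RESULT}"
```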
68 changes: 60 additions & 8 deletions docs/zh_cn/build.md
@@ -1,27 +1,79 @@
# Build and install

LMDeploy provides prebuilt packages that can be conveniently installed and used via `pip install lmdeploy`.

If you need to build from source, first download the lmdeploy source code:

```shell
git clone --depth=1 https://github.com/InternLM/lmdeploy
```

Then build and install it by following the sections below.

## Build and install inside docker (strongly recommended)

LMDeploy provides the builder image `openmmlab/lmdeploy-builder:cuda11.8`. Before using it, please make sure docker is installed.

In the root directory of the lmdeploy source code, run the following commands:

```shell
cd lmdeploy # the root directory of the lmdeploy source code
bash builder/manywheel/build_all_wheel.sh
```

All the lmdeploy wheel files for py3.8 - py3.11 will be generated in the `builder/manywheel/cuda11.8_dist` directory, for example:

```text
builder/manywheel/cuda11.8_dist/
├── lmdeploy-0.0.12-cp310-cp310-manylinux2014_x86_64.whl
├── lmdeploy-0.0.12-cp311-cp311-manylinux2014_x86_64.whl
├── lmdeploy-0.0.12-cp38-cp38-manylinux2014_x86_64.whl
└── lmdeploy-0.0.12-cp39-cp39-manylinux2014_x86_64.whl
```

If you need a wheel file for a specific Python version, such as py3.8, run:

```shell
bash builder/manywheel/build_wheel.sh py38 manylinux2014_x86_64 cuda11.8 cuda11.8_dist
```

The wheel files are placed in the `builder/manywheel/cuda11.8_dist` directory.

On the host machine, install the wheel file that matches the host's Python version via `pip install`, which completes the whole build and installation of lmdeploy.
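As a sketch, assuming the default `builder/manywheel/cuda11.8_dist` output directory from the docs, you can check whether a wheel matching the host Python exists before installing:

```shell
# Look for a wheel whose cpXY tag matches the current interpreter.
DIST=builder/manywheel/cuda11.8_dist
TAG="cp$(python3 -c 'import sys; print("%d%d" % sys.version_info[:2])')"
if ls "${DIST}"/lmdeploy-*-"${TAG}"-*.whl >/dev/null 2>&1; then
  WHEEL_FOUND=yes
else
  WHEEL_FOUND=no
fi
echo "wheel for ${TAG} in ${DIST}: ${WHEEL_FOUND}"
```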

## Build and install on the local machine (optional)

First, please make sure that the gcc version on the machine is no less than 9, which can be confirmed by `gcc --version`.

Then, set up the build environment as follows:

- install the build and runtime dependencies:
```shell
pip install -r requirements.txt
apt-get install rapidjson-dev
```
- install [nccl](https://docs.nvidia.com/deeplearning/nccl/install-guide/index.html) and set the environment variables:
```shell
export NCCL_ROOT_DIR=/path/to/nccl/build
export NCCL_LIBRARIES=/path/to/nccl/build/lib
```
- build and install openmpi from source:
```shell
wget https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-4.1.5.tar.gz
tar xf openmpi-4.1.5.tar.gz
cd openmpi-4.1.5
./configure
make -j$(nproc) && make install
```
- build and install lmdeploy:
```shell
cd lmdeploy # the root directory of the lmdeploy source code
mkdir build && cd build
sh ../generate.sh
make -j$(nproc) && make install
```
- install the lmdeploy python package:
```shell
cd ..
pip install -e .
```
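The gcc requirement stated above can be checked with a small sketch (it only reports; `gcc -dumpversion` prints the major version on recent gcc releases and a full `x.y.z` on older ones, which the `cut` handles):

```shell
# Compare the gcc major version against the documented minimum (9).
GCC_MAJOR=$(gcc -dumpversion 2>/dev/null | cut -d. -f1)
if [ -z "${GCC_MAJOR}" ]; then
  echo "gcc not found"
elif [ "${GCC_MAJOR}" -ge 9 ]; then
  echo "gcc ${GCC_MAJOR}: OK"
else
  echo "gcc ${GCC_MAJOR}: too old, need >= 9"
fi
```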
