
Commit

[Serving] add fastdeployserver dockerfile for cuda11.2 (PaddlePaddle#1169)

* add fastdeployserver dockerfile for cuda11.2

* add docs

* update

---------

Co-authored-by: heliqi <[email protected]>
rainyfly and heliqi authored Jan 30, 2023
1 parent 84d41dc commit 8c651f9
Showing 4 changed files with 164 additions and 0 deletions.
83 changes: 83 additions & 0 deletions serving/Dockerfile_CUDA_11_2
@@ -0,0 +1,83 @@
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


FROM nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04

ARG http_proxy
ARG https_proxy

#Install the build dependencies
RUN apt-get update && DEBIAN_FRONTEND=noninteractive TZ=Asia/Shanghai apt-get install -y --no-install-recommends curl wget vim git patchelf python3-dev python3-pip \
python3-setuptools build-essential libgl1-mesa-glx libglib2.0-dev ca-certificates libb64-dev datacenter-gpu-manager \
libssl-dev zlib1g-dev rapidjson-dev libboost-dev libre2-dev librdmacm-dev libnuma-dev libarchive-dev unzip && \
apt-get clean && rm -rf /var/lib/apt/lists/*

RUN ln -s /usr/bin/python3 /usr/bin/python;
RUN pip3 install --upgrade pip

# install cmake
WORKDIR /home
RUN wget -q https://github.com/Kitware/CMake/releases/download/v3.18.6/cmake-3.18.6-Linux-x86_64.tar.gz && tar -zxvf cmake-3.18.6-Linux-x86_64.tar.gz
ENV PATH=/home/cmake-3.18.6-Linux-x86_64/bin:$PATH


#install triton
ENV TAG=r21.10
RUN git clone https://github.com/triton-inference-server/server.git -b $TAG && \
cd server && \
mkdir -p build/tritonserver/install && \
python3 build.py \
--build-dir `pwd`/build \
--no-container-build \
--backend=ensemble \
--enable-gpu \
--endpoint=grpc \
--endpoint=http \
--enable-stats \
--enable-tracing \
--enable-logging \
--enable-stats \
--enable-metrics \
--enable-gpu-metrics \
--cmake-dir `pwd`/build \
--repo-tag=common:$TAG \
--repo-tag=core:$TAG \
--repo-tag=backend:$TAG \
--repo-tag=thirdparty:$TAG \
--backend=python:$TAG

COPY python/dist/*.whl /opt/fastdeploy/
RUN python3 -m pip install /opt/fastdeploy/*.whl \
&& rm -rf /opt/fastdeploy/*.whl


# compile triton-inference-server/server, copy tritonserver and the python backend into the image
# triton server
RUN mkdir -p /opt/tritonserver && cp -r /home/server/build/tritonserver/install/* /opt/tritonserver
# python backend
RUN mkdir -p /opt/tritonserver/backends/python && cp -r /home/server/build/python/install/backends/python /opt/tritonserver/backends/

# copy compiled fastdeploy backend into image
COPY serving/build/libtriton_fastdeploy.so /opt/tritonserver/backends/fastdeploy/

# rename tritonserver to fastdeployserver
RUN mv /opt/tritonserver/bin/tritonserver /opt/tritonserver/bin/fastdeployserver

# copy compiled fastdeploy_install into image
COPY build/fastdeploy_install/* /opt/fastdeploy/

# Set environment variable
ENV LD_LIBRARY_PATH="/opt/fastdeploy/lib:/opt/fastdeploy/third_libs/install/onnxruntime/lib:/opt/fastdeploy/third_libs/install/paddle2onnx/lib:/opt/fastdeploy/third_libs/install/paddle_inference/paddle/lib:/opt/fastdeploy/third_libs/install/openvino/runtime/lib/:/opt/fastdeploy/third_libs/install/tensorrt/lib/:/opt/fastdeploy/third_libs/install/opencv/lib64/:$LD_LIBRARY_PATH"
ENV PATH="/opt/tritonserver/bin:$PATH"
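
Since the Dockerfile declares `http_proxy` and `https_proxy` as build arguments, a build behind a proxy can pass them with `--build-arg`. A minimal sketch, run from the FastDeploy repository root; the image tag matches the docs below and the proxy URL is a placeholder:

```shell
# Illustrative only: the proxy address is a placeholder, and the --build-arg
# flags can be dropped entirely when no proxy is needed.
docker build \
  --build-arg http_proxy=http://proxy.example.com:8080 \
  --build-arg https_proxy=http://proxy.example.com:8080 \
  -t paddlepaddle/fastdeploy:1.0.0-gpu-cuda11.2-trt8.4-20.04 \
  -f serving/Dockerfile_CUDA_11_2 .
```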
11 changes: 11 additions & 0 deletions serving/docs/EN/compile-en.md
@@ -18,6 +18,17 @@ cd ../
docker build -t paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10 -f serving/Dockerfile .
```

For example, to create a GPU image based on FastDeploy v1.0.0 in an Ubuntu 20.04, CUDA 11.2 environment:
```
# Enter the serving directory and run the script to compile FastDeploy and the serving backend
cd serving
bash scripts/build_fd_cuda_11_2.sh
# Return to the FastDeploy home directory and build the image
cd ../
docker build -t paddlepaddle/fastdeploy:1.0.0-gpu-cuda11.2-trt8.4-20.04 -f serving/Dockerfile_CUDA_11_2 .
```
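
The binary inside the image is the renamed Triton server, so it is started the same way tritonserver is. A hedged usage sketch (the mounted model repository path, port mappings, and GPU flag are assumptions, not part of this change):

```shell
# Illustrative only: serve a local Triton-style model repository (./models is
# hypothetical) over Triton's default HTTP/gRPC/metrics ports.
docker run --gpus all --rm \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v $PWD/models:/models \
  paddlepaddle/fastdeploy:1.0.0-gpu-cuda11.2-trt8.4-20.04 \
  fastdeployserver --model-repository=/models
```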

## CPU Image

```shell
11 changes: 11 additions & 0 deletions serving/docs/zh_CN/compile.md
@@ -18,6 +18,17 @@ cd ../
docker build -t paddlepaddle/fastdeploy:x.y.z-gpu-cuda11.4-trt8.4-21.10 -f serving/Dockerfile .
```

For example, to build a GPU image based on FastDeploy v1.0.0 in an Ubuntu 20.04, CUDA 11.2 environment:
```
# Enter the serving directory and run the script to compile fastdeploy and the serving backend
cd serving
bash scripts/build_fd_cuda_11_2.sh
# Return to the FastDeploy home directory and build the image
cd ../
docker build -t paddlepaddle/fastdeploy:1.0.0-gpu-cuda11.2-trt8.4-20.04 -f serving/Dockerfile_CUDA_11_2 .
```

## CPU Image

```
59 changes: 59 additions & 0 deletions serving/scripts/build_fd_cuda_11_2.sh
@@ -0,0 +1,59 @@
#!/usr/bin/env bash
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

if [ ! -d "./cmake-3.18.6-Linux-x86_64/" ]; then
wget https://github.com/Kitware/CMake/releases/download/v3.18.6/cmake-3.18.6-Linux-x86_64.tar.gz
tar -zxvf cmake-3.18.6-Linux-x86_64.tar.gz
rm -rf cmake-3.18.6-Linux-x86_64.tar.gz
fi

if [ ! -d "./TensorRT-8.4.1.5/" ]; then
wget https://fastdeploy.bj.bcebos.com/third_libs/TensorRT-8.4.1.5.Linux.x86_64-gnu.cuda-11.6.cudnn8.4.tar.gz
tar -zxvf TensorRT-8.4.1.5.Linux.x86_64-gnu.cuda-11.6.cudnn8.4.tar.gz
rm -rf TensorRT-8.4.1.5.Linux.x86_64-gnu.cuda-11.6.cudnn8.4.tar.gz
fi

# build vision, runtime, and backend
docker run -it --rm --name build_fd_libs \
-v`pwd`/..:/workspace/fastdeploy \
-e "http_proxy=${http_proxy}" \
-e "https_proxy=${https_proxy}" \
nvidia/cuda:11.2.2-cudnn8-devel-ubuntu20.04 \
bash -c \
'cd /workspace/fastdeploy/python;
rm -rf .setuptools-cmake-build dist build fastdeploy/libs/third_libs;
apt-get update;
apt-get install -y --no-install-recommends patchelf python3-dev python3-pip rapidjson-dev git;
ln -s /usr/bin/python3 /usr/bin/python;
export PATH=/workspace/fastdeploy/serving/cmake-3.18.6-Linux-x86_64/bin:$PATH;
export WITH_GPU=ON;
export ENABLE_TRT_BACKEND=OFF;
export TRT_DIRECTORY=/workspace/fastdeploy/serving/TensorRT-8.4.1.5/;
export ENABLE_ORT_BACKEND=OFF;
export ENABLE_PADDLE_BACKEND=OFF;
export ENABLE_OPENVINO_BACKEND=OFF;
export ENABLE_VISION=ON;
export ENABLE_TEXT=ON;
python setup.py build;
python setup.py bdist_wheel;
cd /workspace/fastdeploy;
rm -rf build; mkdir -p build;cd build;
cmake .. -DENABLE_TRT_BACKEND=ON -DCMAKE_INSTALL_PREFIX=${PWD}/fastdeploy_install -DWITH_GPU=ON -DTRT_DIRECTORY=/workspace/fastdeploy/serving/TensorRT-8.4.1.5/ -DENABLE_PADDLE_BACKEND=ON -DENABLE_ORT_BACKEND=ON -DENABLE_OPENVINO_BACKEND=ON -DENABLE_VISION=ON -DBUILD_FASTDEPLOY_PYTHON=OFF -DENABLE_PADDLE2ONNX=ON -DENABLE_TEXT=OFF -DLIBRARY_NAME=fastdeploy_runtime;
make -j`nproc`;
make install;
cd /workspace/fastdeploy/serving;
rm -rf build; mkdir build; cd build;
cmake .. -DFASTDEPLOY_DIR=/workspace/fastdeploy/build/fastdeploy_install -DTRITON_COMMON_REPO_TAG=r21.10 -DTRITON_CORE_REPO_TAG=r21.10 -DTRITON_BACKEND_REPO_TAG=r21.10;
make -j`nproc`'
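
When it finishes, the script should leave behind the three artifacts that Dockerfile_CUDA_11_2 later copies into the image. A quick sanity check, assuming the script was run from the serving/ directory as the docs describe:

```shell
# Illustrative only: these paths correspond to the COPY instructions
# in serving/Dockerfile_CUDA_11_2.
ls ../python/dist/*.whl              # FastDeploy Python wheel
ls ../build/fastdeploy_install/      # compiled FastDeploy C++ SDK
ls build/libtriton_fastdeploy.so     # serving backend shared library
```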
