diff --git a/examples/text/ernie-3.0/cpp/README.md b/examples/text/ernie-3.0/cpp/README.md
index 65ca4100e0..f9d2b2f9b1 100755
--- a/examples/text/ernie-3.0/cpp/README.md
+++ b/examples/text/ernie-3.0/cpp/README.md
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
Before deployment, two steps require confirmation.
-- 1. Environment of software and hardware should meet the requirements. Please refer to[FastDeploy Environment Requirements](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Based on the develop environment, download the precompiled deployment library and samples code. Please refer to [FastDeploy Precompiled Library](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. The software and hardware environment should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
+- 2. Based on the development environment, download the precompiled deployment library and sample code. Please refer to [FastDeploy Precompiled Library](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
This directory provides deployment examples that seq_cls_inferve.py fast finish text classification tasks on CPU/GPU.
diff --git a/examples/text/ernie-3.0/python/README.md b/examples/text/ernie-3.0/python/README.md
index 9fb6414785..5fc7e212cb 100755
--- a/examples/text/ernie-3.0/python/README.md
+++ b/examples/text/ernie-3.0/python/README.md
@@ -4,8 +4,8 @@ English | [简体中文](README_CN.md)
Before deployment, two steps require confirmation.
-- 1. Environment of software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl package should be installed. Please refer to [FastDeploy Python Installation](../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. The software and hardware environment should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
+- 2. The FastDeploy Python whl package should be installed. Please refer to [FastDeploy Python Installation](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
This directory provides deployment examples that seq_cls_inferve.py fast finish text classification tasks on CPU/GPU.
diff --git a/examples/text/ernie-3.0/serving/README.md b/examples/text/ernie-3.0/serving/README.md
index 9fc94dc459..0aa9faa23b 100644
--- a/examples/text/ernie-3.0/serving/README.md
+++ b/examples/text/ernie-3.0/serving/README.md
@@ -4,7 +4,7 @@ English | [简体中文](README_CN.md)
Before serving deployment, you need to confirm
-- 1. Refer to [FastDeploy Serving Deployment](../../../../../serving/README_CN.md) for hardware and software environment requirements and image pull commands of serving images.
+- 1. Refer to [FastDeploy Serving Deployment](../../../../serving/README.md) for the hardware and software environment requirements and the image pull commands for the serving images.
## Prepare Models
@@ -174,4 +174,4 @@ entity: 华夏 label: LOC pos: [14, 15]
```
## Configuration Modification
-The current classification task (ernie_seqcls_model/config.pbtxt) is by default configured to run the OpenVINO engine on CPU; the sequence labelling task is by default configured to run the Paddle engine on GPU. If you want to run on CPU/GPU or other inference engines, you should modify the configuration. please refer to the [configuration document.](../../../../serving/docs/zh_CN/model_configuration.md)
+The current classification task (ernie_seqcls_model/config.pbtxt) is configured by default to run the OpenVINO engine on CPU; the sequence labelling task is configured by default to run the Paddle engine on GPU. If you want to run on CPU/GPU or other inference engines, you should modify the configuration. Please refer to the [configuration document](../../../../serving/docs/EN/model_configuration-en.md).
diff --git a/examples/text/ernie-3.0/serving/README_CN.md b/examples/text/ernie-3.0/serving/README_CN.md
index 8de633bfb1..c6de52c227 100644
--- a/examples/text/ernie-3.0/serving/README_CN.md
+++ b/examples/text/ernie-3.0/serving/README_CN.md
@@ -4,7 +4,7 @@
在服务化部署前,需确认
-- 1. 服务化镜像的软硬件环境要求和镜像拉取命令请参考[FastDeploy服务化部署](../../../../../serving/README_CN.md)
+- 1. 服务化镜像的软硬件环境要求和镜像拉取命令请参考[FastDeploy服务化部署](../../../../serving/README_CN.md)
## 准备模型
diff --git a/examples/text/uie/README.md b/examples/text/uie/README.md
index c8e5d8eb20..5ae8d46883 100644
--- a/examples/text/uie/README.md
+++ b/examples/text/uie/README.md
@@ -19,7 +19,7 @@ English | [简体中文](README_CN.md)
## Export Deployment Models
-Before deployment, you need to export the UIE model into the deployment model. Please refer to [Export Model](https://github.com/PaddlePaddle/PaddleNLP/tree/release/2.4/model_zoo/uie#47-%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2)
+Before deployment, you need to export the UIE model into the deployment model. Please refer to [Export Model](https://github.com/PaddlePaddle/PaddleNLP/tree/release/2.4/model_zoo/uie#47-%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2).
## Download Pre-trained Models
diff --git a/examples/text/uie/python/README.md b/examples/text/uie/python/README.md
index 54c2da2a36..c7d4715fa1 100644
--- a/examples/text/uie/python/README.md
+++ b/examples/text/uie/python/README.md
@@ -4,8 +4,8 @@ English | [简体中文](README_CN.md)
Before deployment, two steps need to be confirmed.
-- 1. The software and hardware environment meets the requirements. Please refer to [Environment requirements for FastDeploy](../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl pacakage needs installation. Please refer to [FastDeploy Python Installation](../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 1. The software and hardware environment meets the requirements. Please refer to [Environment requirements for FastDeploy](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
+- 2. The FastDeploy Python whl package needs to be installed. Please refer to [FastDeploy Python Installation](../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
This directory provides an example that `infer.py` quickly complete CPU deployment conducted by the UIE model with OpenVINO acceleration on CPU/GPU and CPU.
@@ -348,7 +348,7 @@ fd.text.uie.UIEModel(model_file,
schema_language=SchemaLanguage.ZH)
```
-UIEModel loading and initialization. Among them, `model_file`, `params_file` are Paddle inference documents exported by trained models. Please refer to [Model export](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md#%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2).`vocab_file`refers to the vocabulary file. The vocabulary of the UIE model UIE can be downloaded in [UIE configuration file](https://github.com/PaddlePaddle/PaddleNLP/blob/5401f01af85f1c73d8017c6b3476242fce1e6d52/model_zoo/uie/utils.py)
+UIEModel loading and initialization. Among them, `model_file` and `params_file` are the Paddle inference files exported from the trained model. Please refer to [Model export](https://github.com/PaddlePaddle/PaddleNLP/blob/develop/model_zoo/uie/README.md#%E6%A8%A1%E5%9E%8B%E9%83%A8%E7%BD%B2). `vocab_file` refers to the vocabulary file. The vocabulary of the UIE model can be downloaded from the [UIE configuration file](https://github.com/PaddlePaddle/PaddleNLP/blob/5401f01af85f1c73d8017c6b3476242fce1e6d52/model_zoo/uie/utils.py). A hypothetical instantiation is sketched below.
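+
+The following sketch only illustrates how these arguments fit together. The exact parameter order, the `schema` keyword, and the import path of `SchemaLanguage` are assumptions that may differ between FastDeploy versions, and the `uie-base/` directory is an assumed example; check the Parameter section below and the scripts in this directory for the authoritative usage.
+
+```python
+import fastdeploy as fd
+
+# Assumed paths of an exported UIE model and its vocabulary file.
+model_dir = "uie-base"
+model = fd.text.uie.UIEModel(
+    model_dir + "/inference.pdmodel",    # model_file: exported Paddle inference model
+    model_dir + "/inference.pdiparams",  # params_file: exported Paddle inference params
+    model_dir + "/vocab.txt",            # vocab_file: vocabulary of the UIE model
+    schema=["时间", "选手", "赛事名称"],     # extraction schema (assumed keyword name)
+    schema_language=fd.text.uie.SchemaLanguage.ZH)  # assumed import path of SchemaLanguage
+```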
**Parameter**
diff --git a/examples/text/uie/serving/README.md b/examples/text/uie/serving/README.md
index 2aa3fbbb84..fb13f54aad 100644
--- a/examples/text/uie/serving/README.md
+++ b/examples/text/uie/serving/README.md
@@ -4,7 +4,7 @@ English | [简体中文](README_CN.md)
Before serving deployment, you need to confirm:
-- 1. You can refer to [FastDeploy serving deployment](../../../../../serving/README_CN.md) for hardware and software environment requirements and image pull commands for serving images.
+- 1. You can refer to [FastDeploy serving deployment](../../../../serving/README.md) for hardware and software environment requirements and image pull commands for serving images.
## Prepare models
@@ -143,4 +143,4 @@ results:
## Configuration Modification
-The current configuration is by default to run the paddle engine on CPU. If you want to run on CPU/GPU or other inference engines, modifying the configuration is needed.Please refer to [Configuration Document](../../../../serving/docs/zh_CN/model_configuration.md).
+The current configuration runs the Paddle engine on CPU by default. If you want to run on CPU/GPU or other inference engines, you need to modify the configuration. Please refer to the [Configuration Document](../../../../serving/docs/EN/model_configuration-en.md).
diff --git a/examples/text/uie/serving/README_CN.md b/examples/text/uie/serving/README_CN.md
index cbf9a33730..01dc7dcc79 100644
--- a/examples/text/uie/serving/README_CN.md
+++ b/examples/text/uie/serving/README_CN.md
@@ -4,7 +4,7 @@
在服务化部署前,需确认
-- 1. 服务化镜像的软硬件环境要求和镜像拉取命令请参考[FastDeploy服务化部署](../../../../../serving/README_CN.md)
+- 1. 服务化镜像的软硬件环境要求和镜像拉取命令请参考[FastDeploy服务化部署](../../../../serving/README_CN.md)
## 准备模型
diff --git a/examples/vision/README.md b/examples/vision/README.md
index cd271e5587..986c86fa93 100755
--- a/examples/vision/README.md
+++ b/examples/vision/README.md
@@ -32,5 +32,5 @@ Targeted at the vision suite of PaddlePaddle and external popular models, FastDe
- Model Loading
- Calling the `predict`interface
-When deploying visual models, FastDeploy supports one-click switching of the backend inference engine. Please refer to [How to switch model inference engine](../../docs/cn/faq/how_to_change_backend.md).
+When deploying visual models, FastDeploy supports one-click switching of the backend inference engine. Please refer to [How to switch model inference engine](../../docs/en/faq/how_to_change_backend.md).
diff --git a/examples/vision/README_CN.md b/examples/vision/README_CN.md
index 71492d8f01..7306a5f4f8 100644
--- a/examples/vision/README_CN.md
+++ b/examples/vision/README_CN.md
@@ -1,4 +1,4 @@
-[English](README_EN.md) | 简体中文
+[English](README.md) | 简体中文
# 视觉模型部署
本目录下提供了各类视觉模型的部署,主要涵盖以下任务类型
diff --git a/examples/vision/classification/paddleclas/README.md b/examples/vision/classification/paddleclas/README.md
index db66eb7013..f6033b0378 100644
--- a/examples/vision/classification/paddleclas/README.md
+++ b/examples/vision/classification/paddleclas/README.md
@@ -21,7 +21,7 @@ Now FastDeploy supports the deployment of the following models
## Prepare PaddleClas Deployment Model
-For PaddleClas model export, refer to [Model Export](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/inference_deployment/export_model.md#2-%E5%88%86%E7%B1%BB%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA)
+For PaddleClas model export, refer to [Model Export](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/inference_deployment/export_model.md#2-%E5%88%86%E7%B1%BB%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA).
Attention:The model exported by PaddleClas contains two files, including `inference.pdmodel` and `inference.pdiparams`. However, it is necessary to prepare the generic [inference_cls.yaml](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/deploy/configs/inference_cls.yaml) file provided by PaddleClas to meet the requirements of deployment. FastDeploy will obtain from the yaml file the preprocessing information required during inference. FastDeploy will get the preprocessing information needed by the model from the yaml file. Developers can directly download this file. But they need to modify the configuration parameters in the yaml file based on personalized needs. Refer to the configuration information in the infer section of the PaddleClas model training [config.](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.4/ppcls/configs/ImageNet)
diff --git a/examples/vision/classification/paddleclas/a311d/README.md b/examples/vision/classification/paddleclas/a311d/README.md
index e4a57bcf6c..e43afb3c66 100755
--- a/examples/vision/classification/paddleclas/a311d/README.md
+++ b/examples/vision/classification/paddleclas/a311d/README.md
@@ -2,7 +2,7 @@ English | [简体中文](README_CN.md)
# Deploy PaddleClas Quantification Model on A311D
Now FastDeploy supports the deployment of PaddleClas quantification model to A311D based on Paddle Lite.
-For model quantification and download, refer to [model quantification](../quantize/README.md)
+For model quantification and download, refer to [model quantification](../quantize/README.md).
## Detailed Deployment Tutorials
diff --git a/examples/vision/classification/paddleclas/a311d/cpp/README.md b/examples/vision/classification/paddleclas/a311d/cpp/README.md
index c7f6afa1e4..9100ae94ae 100755
--- a/examples/vision/classification/paddleclas/a311d/cpp/README.md
+++ b/examples/vision/classification/paddleclas/a311d/cpp/README.md
@@ -1,26 +1,27 @@
-# PaddleClas A311D 开发板 C++ 部署示例
-本目录下提供的 `infer.cc`,可以帮助用户快速完成 PaddleClas 量化模型在 A311D 上的部署推理加速。
+English | [简体中文](README_CN.md)
+# PaddleClas A311D Development Board C++ Deployment Example
+`infer.cc` in this directory helps you quickly complete the accelerated inference deployment of the PaddleClas quantization model on A311D.
-## 部署准备
-### FastDeploy 交叉编译环境准备
-1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/a311d.md#交叉编译环境搭建)
+## Deployment Preparations
+### FastDeploy Cross-compile Environment Preparations
+1. For the software and hardware requirements and how to set up the cross-compile environment, please refer to [FastDeploy Cross-compile Environment Preparations](../../../../../../docs/en/build_and_install/a311d.md#Cross-compilation-environment-construction).
-### 量化模型准备
-1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
-2. 用户可以使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此 yaml 文件, 用户从 FP32 模型文件夹下复制此 yaml 文件到量化后的模型文件夹内即可.)
+### Quantization Model Preparations
+1. You can directly use the quantized model provided by FastDeploy for deployment.
+2. You can use the [one-click auto-compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model yourself and deploy the resulting quantized model. (Note: the quantized classification model still needs the inference_cls.yaml file from the FP32 model folder. The self-quantized model folder does not contain this yaml file; copy it from the FP32 model folder into the quantized model folder.)
-更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
+For more information, please refer to [Model Quantization](../../quantize/README.md).
-## 在 A311D 上部署量化后的 ResNet50_Vd 分类模型
-请按照以下步骤完成在 A311D 上部署 ResNet50_Vd 量化模型:
-1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/a311d.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
+## Deploying the Quantized ResNet50_Vd Classification Model on A311D
+Please follow these steps to complete the deployment of the ResNet50_Vd quantization model on A311D.
+1. Cross-compile the FastDeploy library as described in [Cross-compile FastDeploy](../../../../../../docs/en/build_and_install/a311d.md#FastDeploy-cross-compilation-library-compilation-based-on-Paddle-Lite).
-2. 将编译后的库拷贝到当前目录,可使用如下命令:
+2. Copy the compiled library to the current directory. You can run the following command:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
```
-3. 在当前路径下载部署所需的模型和示例图片:
+3. Download the model and example images required for deployment to the current path:
```bash
cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
mkdir models && mkdir images
@@ -31,26 +32,26 @@ wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/Ima
cp -r ILSVRC2012_val_00000010.jpeg images
```
-4. 编译部署示例,可使入如下命令:
+4. Compile the deployment example. You can run the following commands:
```bash
cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
make -j8
make install
-# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
+# After successful compilation, an install folder will be generated, containing the demo executable and the libraries required for deployment.
```
-5. 基于 adb 工具部署 ResNet50 分类模型到晶晨 A311D,可使用如下命令:
+5. Deploy the ResNet50 classification model to the Amlogic A311D with the adb tool. You can run the following commands:
```bash
-# 进入 install 目录
+# Go to the install directory.
cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/build/install/
-# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
+# Command format: bash run_with_adb.sh <demo to run> <model path> <image path> <DEVICE_ID>
bash run_with_adb.sh infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg $DEVICE_ID
```
-部署成功后运行结果如下:
+After successful deployment, the output is as follows:
-需要特别注意的是,在 A311D 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
+Please note that the model deployed on A311D must be a quantized model. For model quantization, refer to [Model Quantization](../../../../../../docs/en/quantize.md).
diff --git a/examples/vision/classification/paddleclas/a311d/cpp/README_CN.md b/examples/vision/classification/paddleclas/a311d/cpp/README_CN.md
index e69de29bb2..2b0969bc03 100644
--- a/examples/vision/classification/paddleclas/a311d/cpp/README_CN.md
+++ b/examples/vision/classification/paddleclas/a311d/cpp/README_CN.md
@@ -0,0 +1,57 @@
+[English](README.md) | 简体中文
+# PaddleClas A311D 开发板 C++ 部署示例
+本目录下提供的 `infer.cc`,可以帮助用户快速完成 PaddleClas 量化模型在 A311D 上的部署推理加速。
+
+## 部署准备
+### FastDeploy 交叉编译环境准备
+1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/a311d.md#交叉编译环境搭建)
+
+### 量化模型准备
+1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
+2. 用户可以使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此 yaml 文件, 用户从 FP32 模型文件夹下复制此 yaml 文件到量化后的模型文件夹内即可.)
+
+更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
+
+## 在 A311D 上部署量化后的 ResNet50_Vd 分类模型
+请按照以下步骤完成在 A311D 上部署 ResNet50_Vd 量化模型:
+1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/a311d.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
+
+2. 将编译后的库拷贝到当前目录,可使用如下命令:
+```bash
+cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
+```
+
+3. 在当前路径下载部署所需的模型和示例图片:
+```bash
+cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
+mkdir models && mkdir images
+wget https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar
+tar -xvf resnet50_vd_ptq.tar
+cp -r resnet50_vd_ptq models
+wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
+cp -r ILSVRC2012_val_00000010.jpeg images
+```
+
+4. 编译部署示例,可使入如下命令:
+```bash
+cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/
+mkdir build && cd build
+cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
+make -j8
+make install
+# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
+```
+
+5. 基于 adb 工具部署 ResNet50 分类模型到晶晨 A311D,可使用如下命令:
+```bash
+# 进入 install 目录
+cd FastDeploy/examples/vision/classification/paddleclas/a311d/cpp/build/install/
+# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
+bash run_with_adb.sh infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg $DEVICE_ID
+```
+
+部署成功后运行结果如下:
+
+
+
+需要特别注意的是,在 A311D 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
diff --git a/examples/vision/classification/paddleclas/android/README.md b/examples/vision/classification/paddleclas/android/README.md
index 68dbfc939b..29aa94a0d3 100644
--- a/examples/vision/classification/paddleclas/android/README.md
+++ b/examples/vision/classification/paddleclas/android/README.md
@@ -148,4 +148,4 @@ set(FastDeploy_DIR "${CMAKE_CURRENT_SOURCE_DIR}/../../../libs/fastdeploy-android
## More Reference Documents
For more FastDeploy Java API documentes and how to access FastDeploy C++ API via JNI, refer to:
- [Use FastDeploy Java SDK in Android](../../../../../java/android/)
-- [Use FastDeploy C++ SDK in Android](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
+- [Use FastDeploy C++ SDK in Android](../../../../../docs/en/faq/use_cpp_sdk_on_android.md)
diff --git a/examples/vision/classification/paddleclas/cpp/README.md b/examples/vision/classification/paddleclas/cpp/README.md
index d428d8a08d..8d85c24fcd 100755
--- a/examples/vision/classification/paddleclas/cpp/README.md
+++ b/examples/vision/classification/paddleclas/cpp/README.md
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
Before deployment, two steps require confirmation.
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
+- 2. Download the precompiled deployment library and sample code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
Taking ResNet50_vd inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
@@ -81,4 +81,4 @@ PaddleClas model loading and initialization, where model_file and params_file ar
- [Model Description](../../)
- [Python Deployment](../python)
- [Visual Model prediction results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/classification/paddleclas/python/README.md b/examples/vision/classification/paddleclas/python/README.md
index dd45dbe497..cb3f9d228f 100755
--- a/examples/vision/classification/paddleclas/python/README.md
+++ b/examples/vision/classification/paddleclas/python/README.md
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
Before deployment, two steps require confirmation.
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Install the FastDeploy Python whl package. Please refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
+- 2. Install the FastDeploy Python whl package. Please refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
This directory provides examples that `infer.py` fast finishes the deployment of ResNet50_vd on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
@@ -77,4 +77,4 @@ PaddleClas model loading and initialization, where model_file and params_file ar
- [PaddleClas Model Description](..)
- [PaddleClas C++ Deployment](../cpp)
- [Model prediction results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/classification/paddleclas/quantize/cpp/README.md b/examples/vision/classification/paddleclas/quantize/cpp/README.md
index 999cc7fde3..70efc2dfed 100755
--- a/examples/vision/classification/paddleclas/quantize/cpp/README.md
+++ b/examples/vision/classification/paddleclas/quantize/cpp/README.md
@@ -1,36 +1,37 @@
-# PaddleClas 量化模型 C++部署示例
-本目录下提供的`infer.cc`,可以帮助用户快速完成PaddleClas量化模型在CPU/GPU上的部署推理加速.
+English | [简体中文](README_CN.md)
+# PaddleClas Quantization Model C++ Deployment Example
+`infer.cc` in this directory helps you quickly complete the accelerated inference deployment of the PaddleClas quantization model on CPU/GPU.
-## 部署准备
-### FastDeploy环境准备
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+## Deployment Preparations
+### FastDeploy Environment Preparations
+- 1. For the software and hardware requirements, please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
+- 2. For the installation of FastDeploy Python whl package, please refer to [FastDeploy Python Installation](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
-### 量化模型准备
-- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
-- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
+### Quantized Model Preparations
+- 1. You can directly use the quantized model provided by FastDeploy for deployment.
+- 2. You can use the [one-click auto-compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model yourself and deploy the resulting quantized model. (Note: the quantized classification model still needs the inference_cls.yaml file from the FP32 model folder. The self-quantized model folder does not contain this yaml file; copy it from the FP32 model folder into the quantized model folder.)
-## 以量化后的ResNet50_Vd模型为例, 进行部署,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
-在本目录执行如下命令即可完成编译,以及量化模型部署.
+## Take the Quantized ResNet50_Vd Model as an Example for Deployment. FastDeploy version 0.7.0 or higher is required (x.x.x>=0.7.0)
+Run the following commands in this directory to compile and deploy the quantized model.
```bash
mkdir build
cd build
-# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+# Download pre-compiled FastDeploy libraries. You can choose the appropriate version from `pre-compiled FastDeploy libraries` mentioned above.
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
-#下载FastDeloy提供的ResNet50_Vd量化模型文件和测试图片
+# Download the ResNet50_Vd quantized model and test images provided by FastDeploy.
wget https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar
tar -xvf resnet50_vd_ptq.tar
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
-# 在CPU上使用ONNX Runtime推理量化模型
+# Run inference on the quantized model with ONNX Runtime on CPU.
./infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg 0
-# 在GPU上使用TensorRT推理量化模型
+# Run inference on the quantized model with TensorRT on GPU.
./infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg 1
-# 在GPU上使用Paddle-TensorRT推理量化模型
+# Run inference on the quantized model with Paddle-TensorRT on GPU.
./infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg 2
```
diff --git a/examples/vision/classification/paddleclas/quantize/cpp/README_CN.md b/examples/vision/classification/paddleclas/quantize/cpp/README_CN.md
new file mode 100644
index 0000000000..df57971317
--- /dev/null
+++ b/examples/vision/classification/paddleclas/quantize/cpp/README_CN.md
@@ -0,0 +1,37 @@
+[English](README.md) | 简体中文
+# PaddleClas 量化模型 C++部署示例
+本目录下提供的`infer.cc`,可以帮助用户快速完成PaddleClas量化模型在CPU/GPU上的部署推理加速.
+
+## 部署准备
+### FastDeploy环境准备
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+### 量化模型准备
+- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
+- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
+
+## 以量化后的ResNet50_Vd模型为例, 进行部署,支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+在本目录执行如下命令即可完成编译,以及量化模型部署.
+```bash
+mkdir build
+cd build
+# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
+tar xvf fastdeploy-linux-x64-x.x.x.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
+make -j
+
+#下载FastDeloy提供的ResNet50_Vd量化模型文件和测试图片
+wget https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar
+tar -xvf resnet50_vd_ptq.tar
+wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
+
+
+# 在CPU上使用ONNX Runtime推理量化模型
+./infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg 0
+# 在GPU上使用TensorRT推理量化模型
+./infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg 1
+# 在GPU上使用Paddle-TensorRT推理量化模型
+./infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg 2
+```
diff --git a/examples/vision/classification/paddleclas/quantize/python/README.md b/examples/vision/classification/paddleclas/quantize/python/README.md
index a7fc4f9d3a..d782aa05fc 100755
--- a/examples/vision/classification/paddleclas/quantize/python/README.md
+++ b/examples/vision/classification/paddleclas/quantize/python/README.md
@@ -1,31 +1,32 @@
-# PaddleClas 量化模型 Python部署示例
-本目录下提供的`infer.py`,可以帮助用户快速完成PaddleClas量化模型在CPU/GPU上的部署推理加速.
+English | [简体中文](README_CN.md)
+# PaddleClas Quantization Model Python Deployment Example
+`infer.py` in this directory helps you quickly complete the accelerated inference deployment of the PaddleClas quantization model on CPU/GPU.
-## 部署准备
-### FastDeploy环境准备
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+## Deployment Preparations
+### FastDeploy Environment Preparations
+- 1. For the software and hardware requirements, please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
+- 2. For the installation of FastDeploy Python whl package, please refer to [FastDeploy Python Installation](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
-### 量化模型准备
-- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
-- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
+### Quantized Model Preparations
+- 1. You can directly use the quantized model provided by FastDeploy for deployment.
+- 2. You can use the [one-click auto-compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model yourself and deploy the resulting quantized model. (Note: the quantized classification model still needs the inference_cls.yaml file from the FP32 model folder. The self-quantized model folder does not contain this yaml file; copy it from the FP32 model folder into the quantized model folder.)
-## 以量化后的ResNet50_Vd模型为例, 进行部署
+## Take the Quantized ResNet50_Vd Model as an Example for Deployment
```bash
-#下载部署示例代码
+# Download sample deployment code.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/classification/paddleclas/quantize/python
-#下载FastDeloy提供的ResNet50_Vd量化模型文件和测试图片
+# Download the ResNet50_Vd quantized model and test images provided by FastDeploy.
wget https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar
tar -xvf resnet50_vd_ptq.tar
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
-# 在CPU上使用ONNX Runtime推理量化模型
+# Run inference on the quantized model with ONNX Runtime on CPU.
python infer.py --model resnet50_vd_ptq --image ILSVRC2012_val_00000010.jpeg --device cpu --backend ort
-# 在GPU上使用TensorRT推理量化模型
+# Run inference on the quantized model with TensorRT on GPU.
python infer.py --model resnet50_vd_ptq --image ILSVRC2012_val_00000010.jpeg --device gpu --backend trt
-# 在GPU上使用Paddle-TensorRT推理量化模型
+# Run inference on the quantized model with Paddle-TensorRT on GPU.
python infer.py --model resnet50_vd_ptq --image ILSVRC2012_val_00000010.jpeg --device gpu --backend pptrt
```
diff --git a/examples/vision/classification/paddleclas/quantize/python/README_CN.md b/examples/vision/classification/paddleclas/quantize/python/README_CN.md
new file mode 100644
index 0000000000..e875130e69
--- /dev/null
+++ b/examples/vision/classification/paddleclas/quantize/python/README_CN.md
@@ -0,0 +1,32 @@
+[English](README.md) | 简体中文
+# PaddleClas 量化模型 Python部署示例
+本目录下提供的`infer.py`,可以帮助用户快速完成PaddleClas量化模型在CPU/GPU上的部署推理加速.
+
+## 部署准备
+### FastDeploy环境准备
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+### 量化模型准备
+- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
+- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
+
+
+## 以量化后的ResNet50_Vd模型为例, 进行部署
+```bash
+#下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd examples/vision/classification/paddleclas/quantize/python
+
+#下载FastDeloy提供的ResNet50_Vd量化模型文件和测试图片
+wget https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar
+tar -xvf resnet50_vd_ptq.tar
+wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
+
+# 在CPU上使用ONNX Runtime推理量化模型
+python infer.py --model resnet50_vd_ptq --image ILSVRC2012_val_00000010.jpeg --device cpu --backend ort
+# 在GPU上使用TensorRT推理量化模型
+python infer.py --model resnet50_vd_ptq --image ILSVRC2012_val_00000010.jpeg --device gpu --backend trt
+# 在GPU上使用Paddle-TensorRT推理量化模型
+python infer.py --model resnet50_vd_ptq --image ILSVRC2012_val_00000010.jpeg --device gpu --backend pptrt
+```
diff --git a/examples/vision/classification/paddleclas/rknpu2/cpp/README.md b/examples/vision/classification/paddleclas/rknpu2/cpp/README.md
index c21d1d77b3..42ad1aecd0 100644
--- a/examples/vision/classification/paddleclas/rknpu2/cpp/README.md
+++ b/examples/vision/classification/paddleclas/rknpu2/cpp/README.md
@@ -1,28 +1,29 @@
-# PaddleClas C++部署示例
+English | [简体中文](README_CN.md)
+# PaddleClas C++ Deployment Example
-本目录下用于展示 ResNet50_vd 模型在RKNPU2上的部署,以下的部署过程以 ResNet50_vd 为例子。
+This directory demonstrates the deployment of the ResNet50_vd model on RKNPU2. The following deployment process takes ResNet50_vd as an example.
-在部署前,需确认以下两个步骤:
+Before deployment, the following two steps need to be confirmed:
-1. 软硬件环境满足要求
-2. 根据开发环境,下载预编译部署库或者从头编译FastDeploy仓库
+1. The hardware and software environment meets the requirements.
+2. Download the pre-compiled deployment library or compile the FastDeploy repository from source according to the development environment.
-以上步骤请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)实现
+For the above steps, please refer to [How to Build RKNPU2 Deployment Environment](../../../../../../docs/en/build_and_install/rknpu2.md).
-## 生成基本目录文件
+## Generate Basic Directory Files
-该例程由以下几个部分组成
+This example consists of the following parts:
```text
.
├── CMakeLists.txt
-├── build # 编译文件夹
-├── images # 存放图片的文件夹
+├── build # Compile Folder
+├── images # Folder for images
├── infer.cc
-├── ppclas_model_dir # 存放模型文件的文件夹
-└── thirdpartys # 存放sdk的文件夹
+├── ppclas_model_dir # Folder for models
+└── thirdpartys # Folder for sdk
```
-首先需要先生成目录结构
+First, create the directory structure:
```bash
mkdir build
mkdir images
@@ -30,23 +31,22 @@ mkdir ppclas_model_dir
mkdir thirdpartys
```
-## 编译
+## Compile
-### 编译并拷贝SDK到thirdpartys文件夹
+### Compile and Copy the SDK to the thirdpartys Folder
-请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)仓库编译SDK,编译完成后,将在build目录下生成
-fastdeploy-0.0.3目录,请移动它至thirdpartys目录下.
+Please refer to [How to Build RKNPU2 Deployment Environment](../../../../../../docs/en/build_and_install/rknpu2.md) to compile the SDK. After compilation, the fastdeploy-0.0.3 directory will be generated in the build directory; please move it into the thirdpartys directory.
-### 拷贝模型文件,以及配置文件至model文件夹
-在Paddle动态图模型 -> Paddle静态图模型 -> ONNX模型的过程中,将生成ONNX文件以及对应的yaml配置文件,请将配置文件存放到model文件夹内。
-转换为RKNN后的模型文件也需要拷贝至model,转换方案: ([ResNet50_vd RKNN模型](../README.md))。
+### Copy the Model and Configuration Files to the model Folder
+In the process of Paddle dynamic graph model -> Paddle static graph model -> ONNX model, an ONNX file and the corresponding yaml configuration file will be generated. Please place the configuration file in the model folder.
+The model file converted to RKNN also needs to be copied to the model folder. For the conversion, refer to [ResNet50_vd RKNN model](../README.md).
-### 准备测试图片至image文件夹
+### Prepare Test Images in the images Folder
```bash
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
```
-### 编译example
+### Compile the Example
```bash
cd build
@@ -55,24 +55,23 @@ make -j8
make install
```
-## 运行例程
+## Run the Example
```bash
cd ./build/install
./rknpu_test ./ppclas_model_dir ./images/ILSVRC2012_val_00000010.jpeg
```
-## 运行结果展示
+## Results
ClassifyResult(
label_ids: 153,
scores: 0.684570,
)
-## 注意事项
-RKNPU上对模型的输入要求是使用NHWC格式,且图片归一化操作会在转RKNN模型时,内嵌到模型中,因此我们在使用FastDeploy部署时,
-DisablePermute(C++)或`disable_permute(Python),在预处理阶段禁用数据格式的转换。
+## Notes
+The model input on RKNPU must be in NHWC format, and image normalization is embedded into the model when it is converted to RKNN. Therefore, when deploying with FastDeploy, call DisablePermute (C++) or disable_permute (Python) to disable the data format conversion in the preprocessing stage.
-## 其它文档
-- [ResNet50_vd Python 部署](../python)
-- [模型预测结果说明](../../../../../../docs/api/vision_results/)
-- [转换ResNet50_vd RKNN模型文档](../README.md)
+## Other Documents
+- [ResNet50_vd Python Deployment](../python)
+- [Prediction results](../../../../../../docs/api/vision_results/)
+- [Converting ResNet50_vd RKNN model](../README.md)
diff --git a/examples/vision/classification/paddleclas/rknpu2/cpp/README_CN.md b/examples/vision/classification/paddleclas/rknpu2/cpp/README_CN.md
new file mode 100644
index 0000000000..7a9c829993
--- /dev/null
+++ b/examples/vision/classification/paddleclas/rknpu2/cpp/README_CN.md
@@ -0,0 +1,77 @@
+[English](README.md) | 简体中文
+# PaddleClas C++部署示例
+
+本目录下用于展示 ResNet50_vd 模型在RKNPU2上的部署,以下的部署过程以 ResNet50_vd 为例子。
+
+在部署前,需确认以下两个步骤:
+
+1. 软硬件环境满足要求
+2. 根据开发环境,下载预编译部署库或者从头编译FastDeploy仓库
+
+以上步骤请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)实现
+
+## 生成基本目录文件
+
+该例程由以下几个部分组成
+```text
+.
+├── CMakeLists.txt
+├── build # 编译文件夹
+├── images # 存放图片的文件夹
+├── infer.cc
+├── ppclas_model_dir # 存放模型文件的文件夹
+└── thirdpartys # 存放sdk的文件夹
+```
+
+首先需要先生成目录结构
+```bash
+mkdir build
+mkdir images
+mkdir ppclas_model_dir
+mkdir thirdpartys
+```
+
+## 编译
+
+### 编译并拷贝SDK到thirdpartys文件夹
+
+请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)仓库编译SDK,编译完成后,将在build目录下生成fastdeploy-0.0.3目录,请移动它至thirdpartys目录下.
+
+### 拷贝模型文件,以及配置文件至model文件夹
+在Paddle动态图模型 -> Paddle静态图模型 -> ONNX模型的过程中,将生成ONNX文件以及对应的yaml配置文件,请将配置文件存放到model文件夹内。
+转换为RKNN后的模型文件也需要拷贝至model,转换方案: ([ResNet50_vd RKNN模型](../README.md))。
+
+### 准备测试图片至image文件夹
+```bash
+wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
+```
+
+### 编译example
+
+```bash
+cd build
+cmake ..
+make -j8
+make install
+```
+
+## 运行例程
+
+```bash
+cd ./build/install
+./rknpu_test ./ppclas_model_dir ./images/ILSVRC2012_val_00000010.jpeg
+```
+
+## 运行结果展示
+ClassifyResult(
+label_ids: 153,
+scores: 0.684570,
+)
+
+## 注意事项
+RKNPU上对模型的输入要求是使用NHWC格式,且图片归一化操作会在转RKNN模型时,内嵌到模型中,因此我们在使用FastDeploy部署时,需要先调用DisablePermute(C++)或`disable_permute(Python),在预处理阶段禁用数据格式的转换。
+
+## 其它文档
+- [ResNet50_vd Python 部署](../python)
+- [模型预测结果说明](../../../../../../docs/api/vision_results/)
+- [转换ResNet50_vd RKNN模型文档](../README.md)
diff --git a/examples/vision/classification/paddleclas/rknpu2/python/README.md b/examples/vision/classification/paddleclas/rknpu2/python/README.md
index f1f0994d85..63bfd9c098 100644
--- a/examples/vision/classification/paddleclas/rknpu2/python/README.md
+++ b/examples/vision/classification/paddleclas/rknpu2/python/README.md
@@ -1,23 +1,24 @@
-# PaddleClas Python部署示例
+English | [简体中文](README_CN.md)
+# PaddleClas Python Deployment Example
-在部署前,需确认以下两个步骤
+Before deployment, the following step needs to be confirmed:
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/rknpu2.md)
+- 1. The hardware and software environment meets the requirements. Please refer to [Environment Requirements for FastDeploy](../../../../../../docs/en/build_and_install/rknpu2.md).
-本目录下提供`infer.py`快速完成 ResNet50_vd 在RKNPU上部署的示例。执行如下脚本即可完成
+This directory provides `infer.py` as a quick example of deploying ResNet50_vd on RKNPU. Run the following script to complete the deployment.
```bash
-# 下载部署示例代码
+# Download the deployment example code.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/classification/paddleclas/rknpu2/python
-# 下载图片
+# Download images.
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
-# 推理
+# Inference.
python3 infer.py --model_file ./ResNet50_vd_infer/ResNet50_vd_infer_rk3588.rknn --config_file ResNet50_vd_infer/inference_cls.yaml --image ILSVRC2012_val_00000010.jpeg
-# 运行完成后返回结果如下所示
+# Results
ClassifyResult(
label_ids: 153,
scores: 0.684570,
@@ -25,11 +26,10 @@ scores: 0.684570,
```
-## 注意事项
-RKNPU上对模型的输入要求是使用NHWC格式,且图片归一化操作会在转RKNN模型时,内嵌到模型中,因此我们在使用FastDeploy部署时,
-DisablePermute(C++)或`disable_permute(Python),在预处理阶段禁用数据格式的转换。
+## Notes
+The model input on RKNPU must be in NHWC format, and image normalization is embedded into the model when it is converted to RKNN. Therefore, when deploying with FastDeploy, call DisablePermute (C++) or disable_permute (Python) to disable the data format conversion in the preprocessing stage, as in the sketch below.
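+
+A minimal Python sketch of this idea is shown below. Names such as `use_rknpu2`, `ModelFormat.rknn`, and the `preprocessor` attribute are assumptions that may differ between FastDeploy versions; `infer.py` in this directory shows the actual usage.
+
+```python
+import cv2
+import fastdeploy as fd
+
+# Assumed runtime and model setup for RKNPU2; see infer.py for the real example.
+option = fd.RuntimeOption()
+option.use_rknpu2()
+model = fd.vision.classification.PaddleClasModel(
+    "./ResNet50_vd_infer/ResNet50_vd_infer_rk3588.rknn",  # RKNN model file
+    "",                                                   # no separate params file for RKNN (assumption)
+    "./ResNet50_vd_infer/inference_cls.yaml",
+    runtime_option=option,
+    model_format=fd.ModelFormat.rknn)                     # assumed enum member name
+
+# RKNPU expects NHWC input and normalization is baked into the RKNN model,
+# so disable the HWC->CHW permute in FastDeploy's preprocessing.
+model.preprocessor.disable_permute()
+
+result = model.predict(cv2.imread("ILSVRC2012_val_00000010.jpeg"))
+print(result)
+```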
-## 其它文档
-- [ResNet50_vd C++部署](../cpp)
-- [模型预测结果说明](../../../../../../docs/api/vision_results/)
-- [转换ResNet50_vd RKNN模型文档](../README.md)
+## Other Documents
+- [ResNet50_vd C++ Deployment](../cpp)
+- [Prediction Results](../../../../../../docs/api/vision_results/)
+- [Converting ResNet50_vd RKNN model](../README.md)
diff --git a/examples/vision/classification/paddleclas/rknpu2/python/README_CN.md b/examples/vision/classification/paddleclas/rknpu2/python/README_CN.md
new file mode 100644
index 0000000000..1118f8cc65
--- /dev/null
+++ b/examples/vision/classification/paddleclas/rknpu2/python/README_CN.md
@@ -0,0 +1,35 @@
+[English](README.md) | 简体中文
+# PaddleClas Python部署示例
+
+在部署前,需确认以下两个步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/rknpu2.md)
+
+本目录下提供`infer.py`快速完成 ResNet50_vd 在RKNPU上部署的示例。执行如下脚本即可完成
+
+```bash
+# 下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/classification/paddleclas/rknpu2/python
+
+# 下载图片
+wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
+
+# 推理
+python3 infer.py --model_file ./ResNet50_vd_infer/ResNet50_vd_infer_rk3588.rknn --config_file ResNet50_vd_infer/inference_cls.yaml --image ILSVRC2012_val_00000010.jpeg
+
+# 运行完成后返回结果如下所示
+ClassifyResult(
+label_ids: 153,
+scores: 0.684570,
+)
+```
+
+
+## 注意事项
+RKNPU上对模型的输入要求是使用NHWC格式,且图片归一化操作会在转RKNN模型时,内嵌到模型中,因此我们在使用FastDeploy部署时,需要先调用DisablePermute(C++)或`disable_permute(Python),在预处理阶段禁用数据格式的转换。
+
+## 其它文档
+- [ResNet50_vd C++部署](../cpp)
+- [模型预测结果说明](../../../../../../docs/api/vision_results/)
+- [转换ResNet50_vd RKNN模型文档](../README.md)
diff --git a/examples/vision/classification/paddleclas/rv1126/README.md b/examples/vision/classification/paddleclas/rv1126/README.md
index 38ed27825c..4728a52f98 100755
--- a/examples/vision/classification/paddleclas/rv1126/README.md
+++ b/examples/vision/classification/paddleclas/rv1126/README.md
@@ -2,7 +2,7 @@ English | [简体中文](README_CN.md)
# PaddleClas Quantification Model Deployment on RV1126
FastDeploy currently supports the deployment of PaddleClas quantification models to RV1126 based on Paddle Lite.
-For model quantization and download of quantized models, refer to [Model Quantization](../quantize/README.md)
+For model quantization and download of quantized models, refer to [Model Quantization](../quantize/README.md).
## Detailed Deployment Tutorials
diff --git a/examples/vision/classification/paddleclas/rv1126/cpp/README.md b/examples/vision/classification/paddleclas/rv1126/cpp/README.md
index b621ff7200..9187d99a71 100755
--- a/examples/vision/classification/paddleclas/rv1126/cpp/README.md
+++ b/examples/vision/classification/paddleclas/rv1126/cpp/README.md
@@ -1,26 +1,27 @@
-# PaddleClas RV1126 开发板 C++ 部署示例
-本目录下提供的 `infer.cc`,可以帮助用户快速完成 PaddleClas 量化模型在 RV1126 上的部署推理加速。
+English | [简体中文](README_CN.md)
+# PaddleClas RV1126 Development Board C++ Deployment Example
+`infer.cc` in this directory helps you quickly complete the accelerated inference deployment of the PaddleClas quantization model on RV1126.
-## 部署准备
-### FastDeploy 交叉编译环境准备
-1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/rv1126.md#交叉编译环境搭建)
+## Deployment Preparations
+### FastDeploy Cross-compile Environment Preparations
+1. For the software and hardware requirements and how to set up the cross-compile environment, please refer to [Preparations for FastDeploy Cross-compile Environment](../../../../../../docs/en/build_and_install/rv1126.md#Cross-compilation-environment-construction).
-### 量化模型准备
-1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
-2. 用户可以使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此 yaml 文件, 用户从 FP32 模型文件夹下复制此 yaml 文件到量化后的模型文件夹内即可.)
+### Quantization Model Preparations
+1. You can directly use the quantized model provided by FastDeploy for deployment.
+2. You can use the [one-click auto-compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model yourself and deploy the resulting quantized model. (Note: the quantized classification model still needs the inference_cls.yaml file from the FP32 model folder. The self-quantized model folder does not contain this yaml file; copy it from the FP32 model folder into the quantized model folder.)
-更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
+For more information, please refer to [Model Quantization](../../quantize/README.md).
-## 在 RV1126 上部署量化后的 ResNet50_Vd 分类模型
-请按照以下步骤完成在 RV1126 上部署 ResNet50_Vd 量化模型:
-1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/rv1126.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
+## Deploying the Quantized ResNet50_Vd Classification Model on RV1126
+Please follow these steps to complete the deployment of the ResNet50_Vd quantization model on RV1126.
+1. Cross-compile the FastDeploy library as described in [Cross-compile FastDeploy](../../../../../../docs/en/build_and_install/rv1126.md#FastDeploy-cross-compilation-library-compilation-based-on-Paddle-Lite).
-2. 将编译后的库拷贝到当前目录,可使用如下命令:
+2. Copy the compiled library to the current directory. You can run the following command:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/classification/paddleclas/rv1126/cpp/
```
-3. 在当前路径下载部署所需的模型和示例图片:
+3. Download the model and example images required for deployment to the current path:
```bash
cd FastDeploy/examples/vision/classification/paddleclas/rv1126/cpp/
mkdir models && mkdir images
@@ -31,26 +32,26 @@ wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/Ima
cp -r ILSVRC2012_val_00000010.jpeg images
```
-4. 编译部署示例,可使入如下命令:
+4. Compile the deployment example. You can run the following commands:
```bash
cd FastDeploy/examples/vision/classification/paddleclas/rv1126/cpp/
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=armhf ..
make -j8
make install
-# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
+# After successful compilation, an install folder will be generated, containing the demo executable and the libraries required for deployment.
```
-5. 基于 adb 工具部署 ResNet50 分类模型到 Rockchip RV1126,可使用如下命令:
+5. Deploy the ResNet50 classification model to the Rockchip RV1126 with the adb tool. You can run the following commands:
```bash
-# 进入 install 目录
+# Go to the install directory.
cd FastDeploy/examples/vision/classification/paddleclas/rv1126/cpp/build/install/
-# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
+# Command format: bash run_with_adb.sh <demo to run> <model path> <image path> <DEVICE_ID>
bash run_with_adb.sh infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg $DEVICE_ID
```
-部署成功后运行结果如下:
+After successful deployment, the output is as follows:
-需要特别注意的是,在 RV1126 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
+Please note that the model deployed on RV1126 must be a quantized model. For model quantization, refer to [Model Quantization](../../../../../../docs/en/quantize.md).
diff --git a/examples/vision/classification/paddleclas/rv1126/cpp/README_CN.md b/examples/vision/classification/paddleclas/rv1126/cpp/README_CN.md
new file mode 100644
index 0000000000..7777fc2f4b
--- /dev/null
+++ b/examples/vision/classification/paddleclas/rv1126/cpp/README_CN.md
@@ -0,0 +1,57 @@
+[English](README.md) | 简体中文
+# PaddleClas RV1126 开发板 C++ 部署示例
+本目录下提供的 `infer.cc`,可以帮助用户快速完成 PaddleClas 量化模型在 RV1126 上的部署推理加速。
+
+## 部署准备
+### FastDeploy 交叉编译环境准备
+1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/rv1126.md#交叉编译环境搭建)
+
+### 量化模型准备
+1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
+2. 用户可以使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的inference_cls.yaml文件, 自行量化的模型文件夹内不包含此 yaml 文件, 用户从 FP32 模型文件夹下复制此 yaml 文件到量化后的模型文件夹内即可.)
+
+更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
+
+## 在 RV1126 上部署量化后的 ResNet50_Vd 分类模型
+请按照以下步骤完成在 RV1126 上部署 ResNet50_Vd 量化模型:
+1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/rv1126.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
+
+2. 将编译后的库拷贝到当前目录,可使用如下命令:
+```bash
+cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/classification/paddleclas/rv1126/cpp/
+```
+
+3. 在当前路径下载部署所需的模型和示例图片:
+```bash
+cd FastDeploy/examples/vision/classification/paddleclas/rv1126/cpp/
+mkdir models && mkdir images
+wget https://bj.bcebos.com/paddlehub/fastdeploy/resnet50_vd_ptq.tar
+tar -xvf resnet50_vd_ptq.tar
+cp -r resnet50_vd_ptq models
+wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg
+cp -r ILSVRC2012_val_00000010.jpeg images
+```
+
+4. 编译部署示例,可使入如下命令:
+```bash
+cd FastDeploy/examples/vision/classification/paddleclas/rv1126/cpp/
+mkdir build && cd build
+cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=armhf ..
+make -j8
+make install
+# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
+```
+
+5. 基于 adb 工具部署 ResNet50 分类模型到 Rockchip RV1126,可使用如下命令:
+```bash
+# 进入 install 目录
+cd FastDeploy/examples/vision/classification/paddleclas/rv1126/cpp/build/install/
+# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
+bash run_with_adb.sh infer_demo resnet50_vd_ptq ILSVRC2012_val_00000010.jpeg $DEVICE_ID
+```
+
+部署成功后运行结果如下:
+
+
+
+需要特别注意的是,在 RV1126 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
diff --git a/examples/vision/classification/paddleclas/serving/README.md b/examples/vision/classification/paddleclas/serving/README.md
index 97a7899cdb..be75cd001e 100644
--- a/examples/vision/classification/paddleclas/serving/README.md
+++ b/examples/vision/classification/paddleclas/serving/README.md
@@ -3,7 +3,7 @@ English | [简体中文](README_CN.md)
Before the service deployment, please confirm
-- 1. Refer to [FastDeploy Service Deployment](../../../../../serving/README_CN.md) for software and hardware environment requirements and image pull commands
+- 1. Refer to [FastDeploy Service Deployment](../../../../../serving/README.md) for software and hardware environment requirements and image pull commands.
## Start the Service
@@ -39,7 +39,7 @@ CUDA_VISIBLE_DEVICES=0 fastdeployserver --model-repository=/serving/models --bac
```
>> **Attention**:
->> To pull images from other hardware, refer to [Service Deployment Master Document](../../../../../serving/README_CN.md)
+>> To pull images for other hardware, refer to the [Service Deployment Master Document](../../../../../serving/README.md).
>> If "Address already in use" appears when running fastdeployserver to start the service, use `--grpc-port` to specify the port number and change the request port number in the client demo.
@@ -76,4 +76,4 @@ output_name: CLAS_RESULT
## Configuration Change
-The current default configuration runs the TensorRT engine on GPU. If you want to run it on CPU or other inference engines, please modify the configuration in `models/runtime/config.pbtxt`. Refer to [Configuration Document](../../../../../serving/docs/zh_CN/model_configuration.md) for more information.
+The current default configuration runs the TensorRT engine on GPU. If you want to run it on CPU or other inference engines, please modify the configuration in `models/runtime/config.pbtxt`. Refer to [Configuration Document](../../../../../serving/docs/EN/model_configuration-en.md) for more information.
diff --git a/examples/vision/classification/paddleclas/sophgo/python/README.md b/examples/vision/classification/paddleclas/sophgo/python/README.md
index cc0c6f5704..ba64406c28 100644
--- a/examples/vision/classification/paddleclas/sophgo/python/README.md
+++ b/examples/vision/classification/paddleclas/sophgo/python/README.md
@@ -3,7 +3,7 @@ English | [简体中文](README_CN.md)
Before deployment, the following step need to be confirmed:
-- 1. Hardware and software environment meets the requirements. Please refer to [FastDeploy Environment Requirement](../../../../../../docs/en/build_and_install/sophgo.md)
+- 1. Hardware and software environment meets the requirements. Please refer to [FastDeploy Environment Requirement](../../../../../../docs/en/build_and_install/sophgo.md).
`infer.py` in this directory provides a quick example of deployment of the ResNet50_vd model on SOPHGO TPU. Please run the following script:
diff --git a/examples/vision/classification/resnet/README.md b/examples/vision/classification/resnet/README.md
index fae19264f5..c4ce306f44 100644
--- a/examples/vision/classification/resnet/README.md
+++ b/examples/vision/classification/resnet/README.md
@@ -1,10 +1,10 @@
English | [简体中文](README_CN.md)
# ResNet Ready-to-deploy Model
-- ResNet Deployment is based on the code of [Torchvision](https://github.com/pytorch/vision/tree/v0.12.0) and [Pre-trained Models on ImageNet2012](https://github.com/pytorch/vision/tree/v0.12.0)。
+- ResNet Deployment is based on the code of [Torchvision](https://github.com/pytorch/vision/tree/v0.12.0) and [Pre-trained Models on ImageNet2012](https://github.com/pytorch/vision/tree/v0.12.0).
- - (1)Deployment is conducted after [Export ONNX Model](#导出ONNX模型) by the *.pt provided by [Official Repository](https://github.com/pytorch/vision/tree/v0.12.0);
- - (2)The ResNet Model trained by personal data should [Export ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B). Please refer to [Detailed Deployment Tutorials](#详细部署文档) for deployment.
+ - (1)Deployment is conducted after the *.pt model provided by the [Official Repository](https://github.com/pytorch/vision/tree/v0.12.0) is converted by following [Export the ONNX Model](#export-the-onnx-model);
+ - (2)ResNet models trained on your own data should also be converted by following [Export the ONNX Model](#export-the-onnx-model); a minimal export sketch is shown below. Please refer to [Detailed Deployment Documents](#detailed-deployment-documents) for deployment.
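As a quick illustration of the export step referenced above, here is a minimal Python sketch (assuming `torch` and `torchvision` are installed) that converts a Torchvision ResNet50 checkpoint to ONNX; the section below describes the export flow in detail.

```python
import torch
import torchvision

# Load the Torchvision ResNet50 pre-trained on ImageNet2012.
model = torchvision.models.resnet50(pretrained=True)
model.eval()

# Export to ONNX with a fixed 1x3x224x224 input; adjust the shape if needed.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy_input,
    "resnet50.onnx",
    opset_version=11,
    input_names=["input"],
    output_names=["output"],
)
```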
## Export the ONNX Model
diff --git a/examples/vision/classification/resnet/cpp/README.md b/examples/vision/classification/resnet/cpp/README.md
index b828d059c8..3caf4727b6 100644
--- a/examples/vision/classification/resnet/cpp/README.md
+++ b/examples/vision/classification/resnet/cpp/README.md
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
Before deployment, two steps require confirmation.
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
Taking ResNet50 inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
@@ -33,7 +33,7 @@ wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/Ima
```
The above command works for Linux or MacOS. Refer to:
-- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md) for SDK use-pattern in Windows
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md) for how to use the SDK on Windows
## ResNet C++ Interface
@@ -74,4 +74,4 @@ fastdeploy::vision::classification::ResNet(
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model prediction results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/classification/resnet/python/README.md b/examples/vision/classification/resnet/python/README.md
index f210d22c6b..7659fcd0da 100644
--- a/examples/vision/classification/resnet/python/README.md
+++ b/examples/vision/classification/resnet/python/README.md
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
This directory provides examples that `infer.py` fast finishes the deployment of ResNet50_vd on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
@@ -70,4 +70,4 @@ fd.vision.classification.ResNet(model_file, params_file, runtime_option=None, mo
- [ResNet Model Description](..)
- [ResNet C++ Deployment](../cpp)
- [Model prediction results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/classification/yolov5cls/README.md b/examples/vision/classification/yolov5cls/README.md
index 87ae9ddb98..7a75f88c29 100644
--- a/examples/vision/classification/yolov5cls/README.md
+++ b/examples/vision/classification/yolov5cls/README.md
@@ -2,8 +2,8 @@ English | [简体中文](README_CN.md)
# YOLOv5Cls Ready-to-deploy Model
-- YOLOv5Cls v6.2 model deployment is based on [YOLOv5](https://github.com/ultralytics/yolov5/tree/v6.2) and [Pre-trained Models on ImageNet](https://github.com/ultralytics/yolov5/releases/tag/v6.2)
- - (1)The *-cls.pt model provided by [Official Repository](https://github.com/ultralytics/yolov5/releases/tag/v6.2) can export the ONNX file using `export.py` in [YOLOv5](https://github.com/ultralytics/yolov5), then deployment can be conducted;
+- YOLOv5Cls v6.2 model deployment is based on [YOLOv5](https://github.com/ultralytics/yolov5/tree/v6.2) and [Pre-trained Models on ImageNet](https://github.com/ultralytics/yolov5/releases/tag/v6.2).
+ - (1)The *-cls.pt model provided by [Official Repository](https://github.com/ultralytics/yolov5/releases/tag/v6.2) can export the ONNX file using `export.py` in [YOLOv5](https://github.com/ultralytics/yolov5), then deployment can be conducted;
- (2)The YOLOv5Cls v6.2 Model trained by personal data should export the ONNX file using `export.py` in [YOLOv5](https://github.com/ultralytics/yolov5).
diff --git a/examples/vision/classification/yolov5cls/cpp/README.md b/examples/vision/classification/yolov5cls/cpp/README.md
index 8fb38ec359..9681e75fc5 100755
--- a/examples/vision/classification/yolov5cls/cpp/README.md
+++ b/examples/vision/classification/yolov5cls/cpp/README.md
@@ -5,8 +5,8 @@ This directory provides examples that ` infer.cc` fast finishes the deployment o
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
Taking CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
@@ -41,7 +41,7 @@ scores: 0.196327,
```
The above command works for Linux or MacOS. Refer to:
-- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md) for SDK use-pattern in Windows
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md) for how to use the SDK on Windows.
## YOLOv5Cls C++ Interface
@@ -87,4 +87,4 @@ YOLOv5Cls model loading and initialization, among which model_file is the export
- [YOLOv5Cls Model Description](..)
- [YOLOv5Cls Python Deployment](../python)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/classification/yolov5cls/python/README.md b/examples/vision/classification/yolov5cls/python/README.md
index f964d73b62..05207a4ec4 100755
--- a/examples/vision/classification/yolov5cls/python/README.md
+++ b/examples/vision/classification/yolov5cls/python/README.md
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
Before deployment, two steps require confirmation.
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
This directory provides examples that `infer.py` fast finishes the deployment of YOLOv5Cls on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
@@ -71,4 +71,4 @@ YOLOv5Cls model loading and initialization, among which model_file is the export
- [YOLOv5Cls Model Description](..)
- [YOLOv5Cls C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/detection/fastestdet/cpp/README.md b/examples/vision/detection/fastestdet/cpp/README.md
index eca2c7b2d9..aa716dd9d7 100644
--- a/examples/vision/detection/fastestdet/cpp/README.md
+++ b/examples/vision/detection/fastestdet/cpp/README.md
@@ -4,8 +4,8 @@ English | [简体中文](README_CN.md)
This directory provides examples that `infer.cc` fast finishes the deployment of FastestDet on CPU/GPU and GPU accelerated by TensorRT.
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory.
@@ -35,7 +35,7 @@ The visualized result after running is as follows
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
-- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
## FastestDet C++ Interface
@@ -84,4 +84,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/detection/fastestdet/python/README.md b/examples/vision/detection/fastestdet/python/README.md
index 492683ec72..b6586ea258 100644
--- a/examples/vision/detection/fastestdet/python/README.md
+++ b/examples/vision/detection/fastestdet/python/README.md
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
This directory provides examples that `infer.py` fast finishes the deployment of FastestDet on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
@@ -72,4 +72,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [FastestDet Model Description](..)
- [FastestDet C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/detection/nanodet_plus/README.md b/examples/vision/detection/nanodet_plus/README.md
index bc5b31f756..cd0660320f 100644
--- a/examples/vision/detection/nanodet_plus/README.md
+++ b/examples/vision/detection/nanodet_plus/README.md
@@ -5,7 +5,7 @@ English | [简体中文](README_CN.md)
- NanoDetPlus deployment is based on the code of [NanoDetPlus](https://github.com/RangiLyu/nanodet/tree/v1.0.0-alpha-1) and coco's [Pre-trained Model](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1).
- (1)The *.onnx provided by [official repository](https://github.com/RangiLyu/nanodet/releases/tag/v1.0.0-alpha-1) can directly conduct the deployment;
- - (2)Models trained by developers should export ONNX models. Please refer to [Detailed Deployment Documents](#详细部署文档) for deployment.
+ - (2)Models trained by developers should export ONNX models. Please refer to [Detailed Deployment Documents](#detailed-deployment-documents) for deployment.
## Download Pre-trained ONNX Model
diff --git a/examples/vision/detection/nanodet_plus/cpp/README.md b/examples/vision/detection/nanodet_plus/cpp/README.md
index 8ef4b6a97e..fca5c6ee8f 100644
--- a/examples/vision/detection/nanodet_plus/cpp/README.md
+++ b/examples/vision/detection/nanodet_plus/cpp/README.md
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
@@ -37,7 +37,7 @@ The visualized result after running is as follows
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
-- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
## NanoDetPlus C++ Interface
@@ -91,4 +91,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/detection/nanodet_plus/python/README.md b/examples/vision/detection/nanodet_plus/python/README.md
index 2995cb2977..6e8218eb0f 100644
--- a/examples/vision/detection/nanodet_plus/python/README.md
+++ b/examples/vision/detection/nanodet_plus/python/README.md
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
This directory provides examples that `infer.py` fast finishes the deployment of NanoDetPlus on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
```bash
@@ -78,4 +78,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [NanoDetPlus Model Description](..)
- [NanoDetPlus C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/detection/paddledetection/a311d/cpp/README.md b/examples/vision/detection/paddledetection/a311d/cpp/README.md
index baf2016f4a..89064e9d80 100755
--- a/examples/vision/detection/paddledetection/a311d/cpp/README.md
+++ b/examples/vision/detection/paddledetection/a311d/cpp/README.md
@@ -1,29 +1,30 @@
-# PP-YOLOE 量化模型 C++ 部署示例
+English | [简体中文](README_CN.md)
+# PP-YOLOE Quantized Model C++ Deployment Example
-本目录下提供的 `infer.cc`,可以帮助用户快速完成 PP-YOLOE 量化模型在 A311D 上的部署推理加速。
+`infer.cc` in this directory helps you quickly deploy the quantized PP-YOLOE model on A311D with accelerated inference.
-## 部署准备
-### FastDeploy 交叉编译环境准备
-1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/a311d.md#交叉编译环境搭建)
+## Deployment Preparations
+### FastDeploy Cross-compile Environment Preparations
+1. For software and hardware requirements and cross-compilation environment setup, please refer to [FastDeploy Cross-compile Environment](../../../../../../docs/en/build_and_install/a311d.md#Cross-compilation-environment-construction).
-### 模型准备
-1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
-2. 用户可以先使用 PaddleDetection 自行导出 Float32 模型,注意导出模型模型时设置参数:use_shared_conv=False,更多细节请参考:[PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe)
-3. 用户可以使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。(注意: 推理量化后的检测模型仍然需要FP32模型文件夹下的 infer_cfg.yml 文件,自行量化的模型文件夹内不包含此 yaml 文件,用户从 FP32 模型文件夹下复制此yaml文件到量化后的模型文件夹内即可。)
-4. 模型需要异构计算,异构计算文件可以参考:[异构计算](./../../../../../../docs/cn/faq/heterogeneous_computing_on_timvx_npu.md),由于 FastDeploy 已经提供了模型,可以先测试我们提供的异构文件,验证精度是否符合要求。
+### Model Preparations
+1. You can directly use the quantized model provided by FastDeploy for deployment.
+2. You can export a Float32 model with PaddleDetection yourself; note that you must set use_shared_conv=False when exporting the model. For more details, refer to [PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe).
+3. You can use the [one-click auto-compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model yourself and deploy the resulting quantized model. (Note: the quantized detection model still needs the infer_cfg.yml file from the FP32 model folder; a self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
+4. The model requires heterogeneous computing; for the heterogeneous computing configuration file, refer to [Heterogeneous Computing](./../../../../../../docs/en/faq/heterogeneous_computing_on_timvx_npu.md). Since FastDeploy already provides the model, you can first test the heterogeneous file we provide to verify whether the accuracy meets your requirements.
-更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
+For more information about quantization, please refer to [Model Quantization](../../quantize/README.md)
-## 在 A311D 上部署量化后的 PP-YOLOE 检测模型
-请按照以下步骤完成在 A311D 上部署 PP-YOLOE 量化模型:
-1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/a311d.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
+## Deploying the Quantized PP-YOLOE Detection Model on A311D
+Please follow these steps to complete the deployment of the quantized PP-YOLOE model on A311D.
+1. Cross-compile the FastDeploy library as described in [Cross-compile FastDeploy](../../../../../../docs/en/build_and_install/a311d.md#FastDeploy-cross-compilation-library-compilation-based-on-Paddle-Lite)
-2. 将编译后的库拷贝到当前目录,可使用如下命令:
+2. Copy the compiled library to the current directory. You can run this line:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/detection/paddledetection/a311d/cpp
```
-3. 在当前路径下载部署所需的模型和示例图片:
+3. Download the model and example image required for deployment to the current path:
```bash
cd FastDeploy/examples/vision/detection/paddledetection/a311d/cpp
mkdir models && mkdir images
@@ -34,26 +35,26 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
cp -r 000000014439.jpg images
```
-4. 编译部署示例,可使入如下命令:
+4. Compile the deployment example. You can run the following lines:
```bash
cd FastDeploy/examples/vision/detection/paddledetection/a311d/cpp
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
make -j8
make install
-# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
+# After a successful build, an install folder is generated containing the demo executable and the libraries required for deployment.
```
-5. 基于 adb 工具部署 PP-YOLOE 检测模型到晶晨 A311D
+5. Deploy the PP-YOLOE detection model to Amlogic A311D using the adb tool.
```bash
-# 进入 install 目录
+# Go to the install directory.
cd FastDeploy/examples/vision/detection/paddledetection/a311d/cpp/build/install/
-# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
+# The command below means: bash run_with_adb.sh <demo to run> <model path> <image path> <DEVICE_ID>
bash run_with_adb.sh infer_demo ppyoloe_noshare_qat 000000014439.jpg $DEVICE_ID
```
-部署成功后运行结果如下:
+The result after successful deployment is as follows:
-需要特别注意的是,在 A311D 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
+Note that the model deployed on A311D must be a quantized model; for model quantization, refer to [Model Quantization](../../../../../../docs/en/quantize.md)
diff --git a/examples/vision/detection/paddledetection/a311d/cpp/README_CN.md b/examples/vision/detection/paddledetection/a311d/cpp/README_CN.md
new file mode 100644
index 0000000000..6bcae99f16
--- /dev/null
+++ b/examples/vision/detection/paddledetection/a311d/cpp/README_CN.md
@@ -0,0 +1,60 @@
+[English](README.md) | 简体中文
+# PP-YOLOE 量化模型 C++ 部署示例
+
+本目录下提供的 `infer.cc`,可以帮助用户快速完成 PP-YOLOE 量化模型在 A311D 上的部署推理加速。
+
+## 部署准备
+### FastDeploy 交叉编译环境准备
+1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/a311d.md#交叉编译环境搭建)
+
+### 模型准备
+1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
+2. 用户可以先使用 PaddleDetection 自行导出 Float32 模型,注意导出模型模型时设置参数:use_shared_conv=False,更多细节请参考:[PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe)
+3. 用户可以使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。(注意: 推理量化后的检测模型仍然需要FP32模型文件夹下的 infer_cfg.yml 文件,自行量化的模型文件夹内不包含此 yaml 文件,用户从 FP32 模型文件夹下复制此yaml文件到量化后的模型文件夹内即可。)
+4. 模型需要异构计算,异构计算文件可以参考:[异构计算](./../../../../../../docs/cn/faq/heterogeneous_computing_on_timvx_npu.md),由于 FastDeploy 已经提供了模型,可以先测试我们提供的异构文件,验证精度是否符合要求。
+
+更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
+
+## 在 A311D 上部署量化后的 PP-YOLOE 检测模型
+请按照以下步骤完成在 A311D 上部署 PP-YOLOE 量化模型:
+1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/a311d.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
+
+2. 将编译后的库拷贝到当前目录,可使用如下命令:
+```bash
+cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/detection/paddledetection/a311d/cpp
+```
+
+3. 在当前路径下载部署所需的模型和示例图片:
+```bash
+cd FastDeploy/examples/vision/detection/paddledetection/a311d/cpp
+mkdir models && mkdir images
+wget https://bj.bcebos.com/fastdeploy/models/ppyoloe_noshare_qat.tar.gz
+tar -xvf ppyoloe_noshare_qat.tar.gz
+cp -r ppyoloe_noshare_qat models
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
+cp -r 000000014439.jpg images
+```
+
+4. 编译部署示例,可使入如下命令:
+```bash
+cd FastDeploy/examples/vision/detection/paddledetection/a311d/cpp
+mkdir build && cd build
+cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
+make -j8
+make install
+# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
+```
+
+5. 基于 adb 工具部署 PP-YOLOE 检测模型到晶晨 A311D
+```bash
+# 进入 install 目录
+cd FastDeploy/examples/vision/detection/paddledetection/a311d/cpp/build/install/
+# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
+bash run_with_adb.sh infer_demo ppyoloe_noshare_qat 000000014439.jpg $DEVICE_ID
+```
+
+部署成功后运行结果如下:
+
+
+
+需要特别注意的是,在 A311D 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
diff --git a/examples/vision/detection/paddledetection/android/README.md b/examples/vision/detection/paddledetection/android/README.md
index 311a6b06e9..7ff544e121 100644
--- a/examples/vision/detection/paddledetection/android/README.md
+++ b/examples/vision/detection/paddledetection/android/README.md
@@ -150,4 +150,4 @@ It’s simple to replace the FastDeploy prediction library and models. The predi
## More Reference Documents
For more FastDeploy Java API documentes and how to access FastDeploy C++ API via JNI, refer to:
- [FastDeploy Java SDK in Android](../../../../../java/android/)
-- [FastDeploy C++ SDK in Android](../../../../../docs/cn/faq/use_cpp_sdk_on_android.md)
+- [FastDeploy C++ SDK in Android](../../../../../docs/en/faq/use_cpp_sdk_on_android.md)
diff --git a/examples/vision/detection/paddledetection/cpp/README.md b/examples/vision/detection/paddledetection/cpp/README.md
index 8ca675bd6f..b53d8ae484 100755
--- a/examples/vision/detection/paddledetection/cpp/README.md
+++ b/examples/vision/detection/paddledetection/cpp/README.md
@@ -5,8 +5,8 @@ This directory provides examples that `infer_xxx.cc` fast finishes the deploymen
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Taking inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
@@ -36,7 +36,7 @@ tar xvf ppyoloe_crn_l_300e_coco.tgz
```
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
-- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
## PaddleDetection C++ Interface
@@ -52,7 +52,7 @@ fastdeploy::vision::detection::PPYOLOE(
const ModelFormat& model_format = ModelFormat::PADDLE)
```
-PaddleDetection PPYOLOE模型加载和初始化,其中model_file为导出的ONNX模型格式。
+PaddleDetection PPYOLOE model loading and initialization, where model_file is the exported model file.
**Parameter**
@@ -78,4 +78,4 @@ PaddleDetection PPYOLOE模型加载和初始化,其中model_file为导出的ON
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model prediction results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/detection/paddledetection/python/serving/README.md b/examples/vision/detection/paddledetection/python/serving/README.md
deleted file mode 120000
index bacd3186b4..0000000000
--- a/examples/vision/detection/paddledetection/python/serving/README.md
+++ /dev/null
@@ -1 +0,0 @@
-README_CN.md
\ No newline at end of file
diff --git a/examples/vision/detection/paddledetection/python/serving/README.md b/examples/vision/detection/paddledetection/python/serving/README.md
new file mode 100644
index 0000000000..56049981da
--- /dev/null
+++ b/examples/vision/detection/paddledetection/python/serving/README.md
@@ -0,0 +1,36 @@
+English | [简体中文](README_CN.md)
+
+# PaddleDetection Python Simple Serving Demo
+
+
+## Environment
+
+- 1. Prepare the environment and install the FastDeploy Python whl package; refer to [download_prebuilt_libraries](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+
+Server:
+```bash
+# Download demo code
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/detection/paddledetection/python/serving
+
+# Download PPYOLOE model
+wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
+tar xvf ppyoloe_crn_l_300e_coco.tgz
+
+# Launch server, change the configurations in server.py to select hardware, backend, etc.
+# and use --host, --port to specify IP and port
+fastdeploy simple_serving --app server:app
+```
+
+Client:
+```bash
+# Download demo code
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/detection/paddledetection/python/serving
+
+# Download test image
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
+
+# Send request and get inference result (Please adapt the IP and port if necessary)
+python client.py
+```
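For reference, the request sent by `python client.py` boils down to an HTTP POST with the test image attached. The sketch below is a hypothetical, simplified client: the endpoint path and payload keys are assumptions for illustration only, so check `client.py` and `server.py` in this directory for the actual URL and schema.

```python
import base64
import json

import requests

# Hypothetical endpoint; the real path is defined by server.py in this directory.
URL = "http://127.0.0.1:8000/fd/ppyoloe"

# Encode the downloaded test image as base64 for the JSON payload (assumed layout).
with open("000000014439.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {"data": {"image": image_b64}, "parameters": {}}
resp = requests.post(URL, headers={"Content-Type": "application/json"},
                     data=json.dumps(payload))

print(resp.status_code)
print(resp.text[:500])  # how to parse the result depends on the server implementation
```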
diff --git a/examples/vision/detection/paddledetection/python/serving/README_CN.md b/examples/vision/detection/paddledetection/python/serving/README_CN.md
index f73206ba35..2a1a8d3536 100644
--- a/examples/vision/detection/paddledetection/python/serving/README_CN.md
+++ b/examples/vision/detection/paddledetection/python/serving/README_CN.md
@@ -1,4 +1,4 @@
-简体中文 | [English](README_EN.md)
+简体中文 | [English](README.md)
# PaddleDetection Python轻量服务化部署示例
diff --git a/examples/vision/detection/paddledetection/python/serving/README_EN.md b/examples/vision/detection/paddledetection/python/serving/README_EN.md
deleted file mode 100644
index 56049981da..0000000000
--- a/examples/vision/detection/paddledetection/python/serving/README_EN.md
+++ /dev/null
@@ -1,36 +0,0 @@
-English | [简体中文](README_CN.md)
-
-# PaddleDetection Python Simple Serving Demo
-
-
-## Environment
-
-- 1. Prepare environment and install FastDeploy Python whl, refer to [download_prebuilt_libraries](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
-
-Server:
-```bash
-# Download demo code
-git clone https://github.com/PaddlePaddle/FastDeploy.git
-cd FastDeploy/examples/vision/detection/paddledetection/python/serving
-
-# Download PPYOLOE model
-wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco.tgz
-tar xvf ppyoloe_crn_l_300e_coco.tgz
-
-# Launch server, change the configurations in server.py to select hardware, backend, etc.
-# and use --host, --port to specify IP and port
-fastdeploy simple_serving --app server:app
-```
-
-Client:
-```bash
-# Download demo code
-git clone https://github.com/PaddlePaddle/FastDeploy.git
-cd FastDeploy/examples/vision/detection/paddledetection/python/serving
-
-# Download test image
-wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
-
-# Send request and get inference result (Please adapt the IP and port if necessary)
-python client.py
-```
diff --git a/examples/vision/detection/paddledetection/quantize/cpp/README.md b/examples/vision/detection/paddledetection/quantize/cpp/README.md
index 4511eb5089..9946459410 100755
--- a/examples/vision/detection/paddledetection/quantize/cpp/README.md
+++ b/examples/vision/detection/paddledetection/quantize/cpp/README.md
@@ -1,36 +1,37 @@
-# PP-YOLOE-l量化模型 C++部署示例
+English | [简体中文](README_CN.md)
+# PP-YOLOE-l Quantized Model C++ Deployment Example
-本目录下提供的`infer_ppyoloe.cc`,可以帮助用户快速完成PP-YOLOE-l量化模型在CPU/GPU上的部署推理加速.
+`infer_ppyoloe.cc` in this directory helps you quickly deploy the quantized PP-YOLOE-l model on CPU/GPU with accelerated inference.
-## 部署准备
-### FastDeploy环境准备
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+## Deployment Preparations
+### FastDeploy Environment Preparations
+- 1. For the software and hardware requirements, please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. For the precompiled FastDeploy C++ library, please refer to [FastDeploy Precompiled Library](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
-### 量化模型准备
-- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
-- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的检测模型仍然需要FP32模型文件夹下的infer_cfg.yml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
+### Quantized Model Preparations
+- 1. You can directly use the quantized model provided by FastDeploy for deployment.
+- 2. You can use the [one-click auto-compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model yourself and deploy the resulting quantized model. (Note: the quantized detection model still needs the infer_cfg.yml file from the FP32 model folder; a self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
-## 以量化后的PP-YOLOE-l模型为例, 进行部署。支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
-在本目录执行如下命令即可完成编译,以及量化模型部署.
+## Take the Quantized PP-YOLOE-l Model as an Example for Deployment (FastDeploy version 0.7.0 or above, x.x.x>=0.7.0, is required)
+Run the following commands in this directory to compile and deploy the quantized model.
```bash
mkdir build
cd build
-# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+# Download pre-compiled FastDeploy libraries. You can choose the appropriate version from `pre-compiled FastDeploy libraries` mentioned above.
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
-#下载FastDeloy提供的ppyoloe_crn_l_300e_coco量化模型文件和测试图片
+# Download the ppyoloe_crn_l_300e_coco quantized model files and test image provided by FastDeploy.
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco_qat.tar
tar -xvf ppyoloe_crn_l_300e_coco_qat.tar
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
-# 在CPU上使用ONNX Runtime推理量化模型
+# Run inference on the quantized model with ONNX Runtime on CPU.
./infer_ppyoloe_demo ppyoloe_crn_l_300e_coco_qat 000000014439.jpg 0
-# 在GPU上使用TensorRT推理量化模型
+# Run inference on the quantized model with TensorRT on GPU.
./infer_ppyoloe_demo ppyoloe_crn_l_300e_coco_qat 000000014439.jpg 1
-# 在GPU上使用Paddle-TensorRT推理量化模型
+# Run inference on the quantized model with Paddle-TensorRT on GPU.
./infer_ppyoloe_demo ppyoloe_crn_l_300e_coco_qat 000000014439.jpg 2
```
diff --git a/examples/vision/detection/paddledetection/quantize/cpp/README_CN.md b/examples/vision/detection/paddledetection/quantize/cpp/README_CN.md
new file mode 100644
index 0000000000..d174c3dcdc
--- /dev/null
+++ b/examples/vision/detection/paddledetection/quantize/cpp/README_CN.md
@@ -0,0 +1,37 @@
+[English](README.md) | 简体中文
+# PP-YOLOE-l量化模型 C++部署示例
+
+本目录下提供的`infer_ppyoloe.cc`,可以帮助用户快速完成PP-YOLOE-l量化模型在CPU/GPU上的部署推理加速.
+
+## 部署准备
+### FastDeploy环境准备
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+### 量化模型准备
+- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
+- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的检测模型仍然需要FP32模型文件夹下的infer_cfg.yml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
+
+## 以量化后的PP-YOLOE-l模型为例, 进行部署。支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+在本目录执行如下命令即可完成编译,以及量化模型部署.
+```bash
+mkdir build
+cd build
+# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
+tar xvf fastdeploy-linux-x64-x.x.x.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
+make -j
+
+#下载FastDeloy提供的ppyoloe_crn_l_300e_coco量化模型文件和测试图片
+wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco_qat.tar
+tar -xvf ppyoloe_crn_l_300e_coco_qat.tar
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
+
+# 在CPU上使用ONNX Runtime推理量化模型
+./infer_ppyoloe_demo ppyoloe_crn_l_300e_coco_qat 000000014439.jpg 0
+# 在GPU上使用TensorRT推理量化模型
+./infer_ppyoloe_demo ppyoloe_crn_l_300e_coco_qat 000000014439.jpg 1
+# 在GPU上使用Paddle-TensorRT推理量化模型
+./infer_ppyoloe_demo ppyoloe_crn_l_300e_coco_qat 000000014439.jpg 2
+```
diff --git a/examples/vision/detection/paddledetection/quantize/python/README.md b/examples/vision/detection/paddledetection/quantize/python/README.md
index 15e0d463e0..de3bb5f71c 100755
--- a/examples/vision/detection/paddledetection/quantize/python/README.md
+++ b/examples/vision/detection/paddledetection/quantize/python/README.md
@@ -1,31 +1,32 @@
-# PP-YOLOE-l量化模型 Python部署示例
-本目录下提供的`infer.py`,可以帮助用户快速完成PP-YOLOE量化模型在CPU/GPU上的部署推理加速.
+English | [简体中文](README_CN.md)
+# PP-YOLOE-l Quantized Model Python Deployment Example
+`infer.py` in this directory helps you quickly deploy the quantized PP-YOLOE model on CPU/GPU with accelerated inference.
-## 部署准备
-### FastDeploy环境准备
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+## Deployment Preparations
+### FastDeploy Environment Preparations
+- 1. For the software and hardware requirements, please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. For the installation of FastDeploy Python whl package, please refer to [FastDeploy Python Installation](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
-### 量化模型准备
-- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
-- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的infer_cfg.yml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
+### Quantized Model Preparations
+- 1. You can directly use the quantized model provided by FastDeploy for deployment.
+- 2. You can use the [one-click auto-compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model yourself and deploy the resulting quantized model. (Note: the quantized detection model still needs the infer_cfg.yml file from the FP32 model folder; a self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
-## 以量化后的PP-YOLOE-l模型为例, 进行部署
+## Take the Quantized PP-YOLOE-l Model as an Example for Deployment
```bash
-#下载部署示例代码
+# Download sample deployment code.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd /examples/vision/detection/paddledetection/quantize/python
-#下载FastDeloy提供的ppyoloe_crn_l_300e_coco量化模型文件和测试图片
+# Download the ppyoloe_crn_l_300e_coco quantized model files and test image provided by FastDeploy.
wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco_qat.tar
tar -xvf ppyoloe_crn_l_300e_coco_qat.tar
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
-# 在CPU上使用ONNX Runtime推理量化模型
+# Run inference on the quantized model with ONNX Runtime on CPU.
python infer_ppyoloe.py --model ppyoloe_crn_l_300e_coco_qat --image 000000014439.jpg --device cpu --backend ort
-# 在GPU上使用TensorRT推理量化模型
+# Run inference on the quantized model with TensorRT on GPU.
python infer_ppyoloe.py --model ppyoloe_crn_l_300e_coco_qat --image 000000014439.jpg --device gpu --backend trt
-# 在GPU上使用Paddle-TensorRT推理量化模型
+# Run inference on the quantized model with Paddle-TensorRT on GPU.
python infer_ppyoloe.py --model ppyoloe_crn_l_300e_coco_qat --image 000000014439.jpg --device gpu --backend pptrt
```
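Inside `infer_ppyoloe.py`, the `--device`/`--backend` flags above map onto FastDeploy `RuntimeOption` calls. The sketch below shows that mapping for the GPU + Paddle-TensorRT case; it is a simplified illustration only (the file names inside the quantized model folder are assumed to match the FP32 export, and the real script adds argument parsing and visualization).

```python
import cv2
import fastdeploy as fd

model_dir = "ppyoloe_crn_l_300e_coco_qat"

option = fd.RuntimeOption()
# Roughly what --device gpu --backend pptrt selects; for the other commands above,
# use option.use_cpu() + option.use_ort_backend(), or option.use_trt_backend() alone.
option.use_gpu()
option.use_trt_backend()
option.enable_paddle_to_trt()

model = fd.vision.detection.PPYOLOE(
    model_dir + "/model.pdmodel",      # assumed file names inside the quantized folder
    model_dir + "/model.pdiparams",
    model_dir + "/infer_cfg.yml",
    runtime_option=option)

im = cv2.imread("000000014439.jpg")
result = model.predict(im)
print(result)
```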
diff --git a/examples/vision/detection/paddledetection/quantize/python/README_CN.md b/examples/vision/detection/paddledetection/quantize/python/README_CN.md
new file mode 100644
index 0000000000..2d228f0d0d
--- /dev/null
+++ b/examples/vision/detection/paddledetection/quantize/python/README_CN.md
@@ -0,0 +1,32 @@
+[English](README.md) | 简体中文
+# PP-YOLOE-l量化模型 Python部署示例
+本目录下提供的`infer.py`,可以帮助用户快速完成PP-YOLOE量化模型在CPU/GPU上的部署推理加速.
+
+## 部署准备
+### FastDeploy环境准备
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+### 量化模型准备
+- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
+- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.(注意: 推理量化后的分类模型仍然需要FP32模型文件夹下的infer_cfg.yml文件, 自行量化的模型文件夹内不包含此yaml文件, 用户从FP32模型文件夹下复制此yaml文件到量化后的模型文件夹内即可.)
+
+
+## 以量化后的PP-YOLOE-l模型为例, 进行部署
+```bash
+#下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd /examples/vision/detection/paddledetection/quantize/python
+
+#下载FastDeloy提供的ppyoloe_crn_l_300e_coco量化模型文件和测试图片
+wget https://bj.bcebos.com/paddlehub/fastdeploy/ppyoloe_crn_l_300e_coco_qat.tar
+tar -xvf ppyoloe_crn_l_300e_coco_qat.tar
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
+
+# 在CPU上使用ONNX Runtime推理量化模型
+python infer_ppyoloe.py --model ppyoloe_crn_l_300e_coco_qat --image 000000014439.jpg --device cpu --backend ort
+# 在GPU上使用TensorRT推理量化模型
+python infer_ppyoloe.py --model ppyoloe_crn_l_300e_coco_qat --image 000000014439.jpg --device gpu --backend trt
+# 在GPU上使用Paddle-TensorRT推理量化模型
+python infer_ppyoloe.py --model ppyoloe_crn_l_300e_coco_qat --image 000000014439.jpg --device gpu --backend pptrt
+```
diff --git a/examples/vision/detection/paddledetection/rknpu2/README.md b/examples/vision/detection/paddledetection/rknpu2/README.md
index 1476952e46..38b28c46f1 100644
--- a/examples/vision/detection/paddledetection/rknpu2/README.md
+++ b/examples/vision/detection/paddledetection/rknpu2/README.md
@@ -8,9 +8,8 @@ Now FastDeploy supports the deployment of the following models
## Prepare PaddleDetection deployment models and convert models
Before RKNPU deployment, you need to transform Paddle model to RKNN model:
-* From Paddle dynamic map to ONNX model, refer to [PaddleDetection Model Export](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/deploy/EXPORT_MODEL.md)
- , and set **export.nms=True** during transformation.
-* From ONNX model to RKNN model, refer to [Transformation Document](../../../../../docs/cn/faq/rknpu2/export.md).
+* From Paddle dynamic graph to ONNX model, refer to [PaddleDetection Model Export](https://github.com/PaddlePaddle/PaddleDetection/blob/release/2.4/deploy/EXPORT_MODEL.md), and set **export.nms=True** during transformation.
+* From ONNX model to RKNN model, refer to [Transformation Document](../../../../../docs/en/faq/rknpu2/export.md).
## Model Transformation Example
diff --git a/examples/vision/detection/paddledetection/rknpu2/cpp/README.md b/examples/vision/detection/paddledetection/rknpu2/cpp/README.md
index a87e64fd2f..56a7713d77 100644
--- a/examples/vision/detection/paddledetection/rknpu2/cpp/README.md
+++ b/examples/vision/detection/paddledetection/rknpu2/cpp/README.md
@@ -1,28 +1,29 @@
-# PaddleDetection C++部署示例
+English | [简体中文](README_CN.md)
+# PaddleDetection Deployment Examples for C++
-本目录下提供`infer_picodet.cc`快速完成PPDetection模型在Rockchip板子上上通过二代NPU加速部署的示例。
+`infer_picodet.cc` in this directory provides an example of quickly deploying a PaddleDetection model on Rockchip boards with acceleration from the second-generation NPU.
-在部署前,需确认以下两个步骤:
+Before deployment, the following two steps need to be confirmed:
-1. 软硬件环境满足要求
-2. 根据开发环境,下载预编译部署库或者从头编译FastDeploy仓库
+1. The hardware and software environment meets the requirements.
+2. Download the pre-compiled deployment library, or compile the FastDeploy repository from source, according to your development environment.
-以上步骤请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)实现
+For the above steps, please refer to [How to Build RKNPU2 Deployment Environment](../../../../../../docs/en/build_and_install/rknpu2.md).
-## 生成基本目录文件
+## Generate Basic Directory Files
-该例程由以下几个部分组成
+This example consists of the following parts:
```text
.
├── CMakeLists.txt
-├── build # 编译文件夹
-├── image # 存放图片的文件夹
+├── build # Compile Folder
+├── image # Folder for images
├── infer_picodet.cc
-├── model # 存放模型文件的文件夹
-└── thirdpartys # 存放sdk的文件夹
+├── model # Folder for models
+└── thirdpartys # Folder for sdk
```
-首先需要先生成目录结构
+First, create the directory structure:
```bash
mkdir build
mkdir images
@@ -30,24 +31,23 @@ mkdir model
mkdir thirdpartys
```
-## 编译
+## Compile
-### 编译并拷贝SDK到thirdpartys文件夹
+### Compile and Copy the SDK to the thirdpartys Folder
-请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)仓库编译SDK,编译完成后,将在build目录下生成
-fastdeploy-0.0.3目录,请移动它至thirdpartys目录下.
+Please refer to [How to Build RKNPU2 Deployment Environment](../../../../../../docs/en/build_and_install/rknpu2.md) to compile the SDK. After compiling, the fastdeploy-0.0.3 directory will be generated in the build directory; please move it to the thirdpartys directory.
-### 拷贝模型文件,以及配置文件至model文件夹
-在Paddle动态图模型 -> Paddle静态图模型 -> ONNX模型的过程中,将生成ONNX文件以及对应的yaml配置文件,请将配置文件存放到model文件夹内。
-转换为RKNN后的模型文件也需要拷贝至model。
+### Copy the Model and Configuration Files to the model Folder
+During the conversion from Paddle dynamic graph model -> Paddle static graph model -> ONNX model, the ONNX file and the corresponding yaml configuration file are generated. Please place the configuration file in the model folder.
+The model file converted to RKNN also needs to be copied to the model folder.
-### 准备测试图片至image文件夹
+### Prepare Test Images in the images Folder
```bash
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
cp 000000014439.jpg ./images
```
-### 编译example
+### Compile the Example
```bash
cd build
@@ -56,7 +56,7 @@ make -j8
make install
```
-## 运行例程
+## Run the Example
```bash
cd ./build/install
@@ -64,6 +64,6 @@ cd ./build/install
```
-- [模型介绍](../../)
-- [Python部署](../python)
-- [视觉模型预测结果](../../../../../../docs/api/vision_results/)
+- [Model Description](../../)
+- [Python Deployment](../python)
+- [Vision model prediction results](../../../../../../docs/api/vision_results/)
diff --git a/examples/vision/detection/paddledetection/rknpu2/cpp/README_CN.md b/examples/vision/detection/paddledetection/rknpu2/cpp/README_CN.md
new file mode 100644
index 0000000000..d58a092aed
--- /dev/null
+++ b/examples/vision/detection/paddledetection/rknpu2/cpp/README_CN.md
@@ -0,0 +1,69 @@
+[English](README.md) | 简体中文
+# PaddleDetection C++部署示例
+
+本目录下提供`infer_picodet.cc`快速完成PPDetection模型在Rockchip板子上上通过二代NPU加速部署的示例。
+
+在部署前,需确认以下两个步骤:
+
+1. 软硬件环境满足要求
+2. 根据开发环境,下载预编译部署库或者从头编译FastDeploy仓库
+
+以上步骤请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)实现
+
+## 生成基本目录文件
+
+该例程由以下几个部分组成
+```text
+.
+├── CMakeLists.txt
+├── build # 编译文件夹
+├── image # 存放图片的文件夹
+├── infer_picodet.cc
+├── model # 存放模型文件的文件夹
+└── thirdpartys # 存放sdk的文件夹
+```
+
+首先需要先生成目录结构
+```bash
+mkdir build
+mkdir images
+mkdir model
+mkdir thirdpartys
+```
+
+## 编译
+
+### 编译并拷贝SDK到thirdpartys文件夹
+
+请参考[RK2代NPU部署库编译](../../../../../../docs/cn/build_and_install/rknpu2.md)仓库编译SDK,编译完成后,将在build目录下生成fastdeploy-0.0.3目录,请移动它至thirdpartys目录下.
+
+### 拷贝模型文件,以及配置文件至model文件夹
+在Paddle动态图模型 -> Paddle静态图模型 -> ONNX模型的过程中,将生成ONNX文件以及对应的yaml配置文件,请将配置文件存放到model文件夹内。
+转换为RKNN后的模型文件也需要拷贝至model。
+
+### 准备测试图片至image文件夹
+```bash
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
+cp 000000014439.jpg ./images
+```
+
+### 编译example
+
+```bash
+cd build
+cmake ..
+make -j8
+make install
+```
+
+## 运行例程
+
+```bash
+cd ./build/install
+./infer_picodet model/picodet_s_416_coco_lcnet images/000000014439.jpg
+```
+
+
+- [模型介绍](../../)
+- [Python部署](../python)
+- [视觉模型预测结果](../../../../../../docs/api/vision_results/)
diff --git a/examples/vision/detection/paddledetection/rknpu2/python/README.md b/examples/vision/detection/paddledetection/rknpu2/python/README.md
index f191063f08..6ba45c398d 100644
--- a/examples/vision/detection/paddledetection/rknpu2/python/README.md
+++ b/examples/vision/detection/paddledetection/rknpu2/python/README.md
@@ -1,35 +1,35 @@
-# PaddleDetection Python部署示例
+English | [简体中文](README_CN.md)
+# PaddleDetection Deployment Examples for Python
-在部署前,需确认以下两个步骤
+Before deployment, the following step needs to be confirmed:
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/rknpu2.md)
+- 1. The hardware and software environment meets the requirements; please refer to [Environment Requirements for FastDeploy](../../../../../../docs/en/build_and_install/rknpu2.md).
-本目录下提供`infer.py`快速完成Picodet在RKNPU上部署的示例。执行如下脚本即可完成
+This directory provides `infer.py` for a quick example of Picodet deployment on RKNPU. This can be done by running the following script.
```bash
-# 下载部署示例代码
+# Download the deployment example code.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/detection/paddledetection/rknpu2/python
-# 下载图片
+# Download images.
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
# copy model
cp -r ./picodet_s_416_coco_lcnet /path/to/FastDeploy/examples/vision/detection/rknpu2detection/paddledetection/python
-# 推理
+# Inference.
python3 infer.py --model_file ./picodet_s_416_coco_lcnet/picodet_s_416_coco_lcnet_rk3568.rknn \
--config_file ./picodet_s_416_coco_lcnet/infer_cfg.yml \
--image 000000014439.jpg
```
-## 注意事项
-RKNPU上对模型的输入要求是使用NHWC格式,且图片归一化操作会在转RKNN模型时,内嵌到模型中,因此我们在使用FastDeploy部署时,
-需要先调用DisableNormalizePermute(C++)或`disable_normalize_permute(Python),在预处理阶段禁用归一化以及数据格式的转换。
-## 其它文档
+## Notes
+The model on RKNPU requires NHWC input, and image normalization is embedded into the model when it is converted to RKNN. Therefore, when deploying with FastDeploy, we need to call `DisableNormalizePermute` (C++) or `disable_normalize_permute` (Python) first to disable normalization and the data format conversion in the preprocessing stage.
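A minimal Python sketch of this note is given below. The method name `disable_normalize_permute` is taken from the paragraph above and may differ between FastDeploy releases, so treat where and how it is called as an assumption and check the API of your installed version.

```python
import cv2
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_rknpu2()  # run on the RKNPU2 backend

model = fd.vision.detection.PicoDet(
    "./picodet_s_416_coco_lcnet/picodet_s_416_coco_lcnet_rk3568.rknn",
    "",  # RKNN models do not need a separate params file
    "./picodet_s_416_coco_lcnet/infer_cfg.yml",
    runtime_option=option,
    model_format=fd.ModelFormat.RKNN)

# Normalization and the HWC->CHW permutation are already embedded in the RKNN
# model, so disable them in FastDeploy preprocessing (method name as noted above).
model.disable_normalize_permute()

im = cv2.imread("000000014439.jpg")
result = model.predict(im)
print(result)
```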
+## Other Documents
-- [PaddleDetection 模型介绍](..)
-- [PaddleDetection C++部署](../cpp)
-- [模型预测结果说明](../../../../../../docs/api/vision_results/)
-- [转换PaddleDetection RKNN模型文档](../README.md)
+- [PaddleDetection Model Description](..)
+- [PaddleDetection C++ Deployment](../cpp)
+- [Model Prediction Results](../../../../../../docs/api/vision_results/)
+- [Converting PaddleDetection RKNN model](../README.md)
diff --git a/examples/vision/detection/paddledetection/rknpu2/python/README_CN.md b/examples/vision/detection/paddledetection/rknpu2/python/README_CN.md
new file mode 100644
index 0000000000..e80a29fc49
--- /dev/null
+++ b/examples/vision/detection/paddledetection/rknpu2/python/README_CN.md
@@ -0,0 +1,35 @@
+[English](README.md) | 简体中文
+# PaddleDetection Python部署示例
+
+在部署前,需确认以下步骤
+
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/rknpu2.md)
+
+本目录下提供`infer.py`快速完成Picodet在RKNPU上部署的示例。执行如下脚本即可完成
+
+```bash
+# 下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/detection/paddledetection/rknpu2/python
+
+# 下载图片
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
+
+# copy model
+cp -r ./picodet_s_416_coco_lcnet /path/to/FastDeploy/examples/vision/detection/rknpu2detection/paddledetection/python
+
+# 推理
+python3 infer.py --model_file ./picodet_s_416_coco_lcnet/picodet_s_416_coco_lcnet_rk3568.rknn \
+ --config_file ./picodet_s_416_coco_lcnet/infer_cfg.yml \
+ --image 000000014439.jpg
+```
+
+
+## 注意事项
+RKNPU上对模型的输入要求是使用NHWC格式,且图片归一化操作会在转RKNN模型时,内嵌到模型中,因此我们在使用FastDeploy部署时,需要先调用DisableNormalizePermute(C++)或`disable_normalize_permute(Python),在预处理阶段禁用归一化以及数据格式的转换。
+## 其它文档
+
+- [PaddleDetection 模型介绍](..)
+- [PaddleDetection C++部署](../cpp)
+- [模型预测结果说明](../../../../../../docs/api/vision_results/)
+- [转换PaddleDetection RKNN模型文档](../README.md)
diff --git a/examples/vision/detection/paddledetection/rv1126/cpp/README.md b/examples/vision/detection/paddledetection/rv1126/cpp/README.md
index c662ecb440..a78988a32b 100755
--- a/examples/vision/detection/paddledetection/rv1126/cpp/README.md
+++ b/examples/vision/detection/paddledetection/rv1126/cpp/README.md
@@ -1,29 +1,30 @@
-# PP-YOLOE 量化模型 C++ 部署示例
+English | [简体中文](README_CN.md)
+# PP-YOLOE Quantized Model C++ Deployment Example
-本目录下提供的 `infer.cc`,可以帮助用户快速完成 PP-YOLOE 量化模型在 RV1126 上的部署推理加速。
+`infer.cc` in this directory helps you quickly deploy the quantized PP-YOLOE model on RV1126 with accelerated inference.
-## 部署准备
-### FastDeploy 交叉编译环境准备
-1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/rv1126.md#交叉编译环境搭建)
+## Deployment Preparations
+### FastDeploy Cross-compile Environment Preparations
+1. For software and hardware requirements and cross-compilation environment setup, please refer to [Preparations for FastDeploy Cross-compile Environment](../../../../../../docs/en/build_and_install/rv1126.md#Cross-compilation-environment-construction).
-### 模型准备
-1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
-2. 用户可以先使用 PaddleDetection 自行导出 Float32 模型,注意导出模型模型时设置参数:use_shared_conv=False,更多细节请参考:[PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe)
-3. 用户可以使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。(注意: 推理量化后的检测模型仍然需要FP32模型文件夹下的 infer_cfg.yml 文件,自行量化的模型文件夹内不包含此 yaml 文件,用户从 FP32 模型文件夹下复制此yaml文件到量化后的模型文件夹内即可。)
-4. 模型需要异构计算,异构计算文件可以参考:[异构计算](./../../../../../../docs/cn/faq/heterogeneous_computing_on_timvx_npu.md),由于 FastDeploy 已经提供了模型,可以先测试我们提供的异构文件,验证精度是否符合要求。
+### Model Preparations
+1. You can directly use the quantized model provided by FastDeploy for deployment.
+2. You can also export a Float32 model with PaddleDetection yourself; note that you need to set use_shared_conv=False when exporting the model. For more details, refer to [PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe).
+3. You can use the [one-click automatic compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model yourself, and use the generated quantized model for deployment. (Note: the quantized detection model still needs the infer_cfg.yml file from the FP32 model folder; the self-quantized model folder does not contain this yaml file, so copy it from the FP32 model folder into the quantized model folder.)
+4. The model requires heterogeneous computation. Please refer to: [Heterogeneous Computation](./../../../../../../docs/en/faq/heterogeneous_computing_on_timvx_npu.md). Since the model is already provided, you can test the heterogeneous file we provide first to verify whether the accuracy meets the requirements.
-更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
+For more information, please refer to [Model Quantization](../../quantize/README.md)
-## 在 RV1126 上部署量化后的 PP-YOLOE 检测模型
-请按照以下步骤完成在 RV1126 上部署 PP-YOLOE 量化模型:
-1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/rv1126.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
+## Deploying the Quantized PP-YOLOE Detection Model on RV1126
+Please follow these steps to complete the deployment of the quantized PP-YOLOE model on RV1126:
+1. Cross-compile the FastDeploy library as described in [Cross-compile FastDeploy](../../../../../../docs/en/build_and_install/rv1126.md#FastDeploy-cross-compilation-library-compilation-based-on-Paddle-Lite)
-2. 将编译后的库拷贝到当前目录,可使用如下命令:
+2. Copy the compiled library into the current directory with the following command:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/detection/paddledetection/rv1126/cpp
```
-3. 在当前路径下载部署所需的模型和示例图片:
+3. Download the model and example images required for deployment to the current path:
```bash
cd FastDeploy/examples/vision/detection/paddledetection/rv1126/cpp
mkdir models && mkdir images
@@ -34,26 +35,26 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
cp -r 000000014439.jpg images
```
-4. 编译部署示例,可使入如下命令:
+4. Compile the deployment example. You can run the following lines:
```bash
cd FastDeploy/examples/vision/detection/paddledetection/rv1126/cpp
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=armhf ..
make -j8
make install
-# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
+# After a successful build, an install folder is generated containing the demo binary and the libraries required for deployment.
```
-5. 基于 adb 工具部署 PP-YOLOE 检测模型到 Rockchip RV1126,可使用如下命令:
+5. Deploy the PP-YOLOE detection model to the Rockchip RV1126 using the adb tool. You can run the following commands:
```bash
-# 进入 install 目录
+# Go to the install directory.
cd FastDeploy/examples/vision/detection/paddledetection/rv1126/cpp/build/install/
-# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
+# Usage: bash run_with_adb.sh <demo to run> <model path> <image path> <DEVICE_ID>
bash run_with_adb.sh infer_demo ppyoloe_noshare_qat 000000014439.jpg $DEVICE_ID
```
-部署成功后运行结果如下:
+After successful deployment, the result is as follows:
-需要特别注意的是,在 RV1126 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
+Please note that the model deployed on RV1126 needs to be quantized. You can refer to [Model Quantization](../../../../../../docs/en/quantize.md)
diff --git a/examples/vision/detection/paddledetection/rv1126/cpp/README_CN.md b/examples/vision/detection/paddledetection/rv1126/cpp/README_CN.md
new file mode 100644
index 0000000000..5a0a330957
--- /dev/null
+++ b/examples/vision/detection/paddledetection/rv1126/cpp/README_CN.md
@@ -0,0 +1,60 @@
+[English](README.md) | 简体中文
+# PP-YOLOE 量化模型 C++ 部署示例
+
+本目录下提供的 `infer.cc`,可以帮助用户快速完成 PP-YOLOE 量化模型在 RV1126 上的部署推理加速。
+
+## 部署准备
+### FastDeploy 交叉编译环境准备
+1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/rv1126.md#交叉编译环境搭建)
+
+### 模型准备
+1. 用户可以直接使用由 FastDeploy 提供的量化模型进行部署。
+2. 用户可以先使用 PaddleDetection 自行导出 Float32 模型,注意导出模型模型时设置参数:use_shared_conv=False,更多细节请参考:[PP-YOLOE](https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.4/configs/ppyoloe)
+3. 用户可以使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。(注意: 推理量化后的检测模型仍然需要FP32模型文件夹下的 infer_cfg.yml 文件,自行量化的模型文件夹内不包含此 yaml 文件,用户从 FP32 模型文件夹下复制此yaml文件到量化后的模型文件夹内即可。)
+4. 模型需要异构计算,异构计算文件可以参考:[异构计算](./../../../../../../docs/cn/faq/heterogeneous_computing_on_timvx_npu.md),由于 FastDeploy 已经提供了模型,可以先测试我们提供的异构文件,验证精度是否符合要求。
+
+更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
+
+## 在 RV1126 上部署量化后的 PP-YOLOE 检测模型
+请按照以下步骤完成在 RV1126 上部署 PP-YOLOE 量化模型:
+1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/rv1126.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
+
+2. 将编译后的库拷贝到当前目录,可使用如下命令:
+```bash
+cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/detection/paddledetection/rv1126/cpp
+```
+
+3. 在当前路径下载部署所需的模型和示例图片:
+```bash
+cd FastDeploy/examples/vision/detection/paddledetection/rv1126/cpp
+mkdir models && mkdir images
+wget https://bj.bcebos.com/fastdeploy/models/ppyoloe_noshare_qat.tar.gz
+tar -xvf ppyoloe_noshare_qat.tar.gz
+cp -r ppyoloe_noshare_qat models
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
+cp -r 000000014439.jpg images
+```
+
+4. 编译部署示例,可使入如下命令:
+```bash
+cd FastDeploy/examples/vision/detection/paddledetection/rv1126/cpp
+mkdir build && cd build
+cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=armhf ..
+make -j8
+make install
+# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
+```
+
+5. 基于 adb 工具部署 PP-YOLOE 检测模型到 Rockchip RV1126,可使用如下命令:
+```bash
+# 进入 install 目录
+cd FastDeploy/examples/vision/detection/paddledetection/rv1126/cpp/build/install/
+# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
+bash run_with_adb.sh infer_demo ppyoloe_noshare_qat 000000014439.jpg $DEVICE_ID
+```
+
+部署成功后运行结果如下:
+
+
+
+需要特别注意的是,在 RV1126 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
diff --git a/examples/vision/detection/paddledetection/serving/README.md b/examples/vision/detection/paddledetection/serving/README.md
index ec1571736a..0957333ad4 100644
--- a/examples/vision/detection/paddledetection/serving/README.md
+++ b/examples/vision/detection/paddledetection/serving/README.md
@@ -7,7 +7,7 @@ For PaddleDetection model export and download of pre-trained models, refer to [P
Confirm before the serving deployment
-- 1. Refer to [FastDeploy Serving Deployment](../../../../../serving/README_CN.md) for software and hardware environment requirements and image pull commands
+- 1. Refer to [FastDeploy Serving Deployment](../../../../../serving/README.md) for software and hardware environment requirements and image pull commands
## Start Service
@@ -92,4 +92,4 @@ output_name: DET_RESULT
## Configuration Change
-The current default configuration runs on GPU. If you want to run it on CPU or other inference engines, please modify the configuration in `models/runtime/config.pbtxt`. Refer to [Configuration Document](../../../../../serving/docs/zh_CN/model_configuration.md) for more information.
+The current default configuration runs on GPU. If you want to run it on CPU or other inference engines, please modify the configuration in `models/runtime/config.pbtxt`. Refer to [Configuration Document](../../../../../serving/docs/EN/model_configuration-en.md) for more information.
diff --git a/examples/vision/detection/scaledyolov4/README.md b/examples/vision/detection/scaledyolov4/README.md
index 9ed068f732..2256db66eb 100644
--- a/examples/vision/detection/scaledyolov4/README.md
+++ b/examples/vision/detection/scaledyolov4/README.md
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
- The ScaledYOLOv4 deployment is based on the code of [ScaledYOLOv4](https://github.com/WongKinYiu/ScaledYOLOv4) and [Pre-trained Model on COCO](https://github.com/WongKinYiu/ScaledYOLOv4).
- - (1)The *.pt provided by [Official Repository](https://github.com/WongKinYiu/ScaledYOLOv4) should [Export the ONNX Model](#导出ONNX模型) to complete the deployment;
- - (2)The ScaledYOLOv4 model trained by personal data should [Export the ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B). Refer to [Detailed Deployment Documents](#详细部署文档) to complete the deployment.
+ - (1)The *.pt provided by [Official Repository](https://github.com/WongKinYiu/ScaledYOLOv4) should [Export the ONNX Model](#Export-the-ONNX-Model) to complete the deployment;
+ - (2)The ScaledYOLOv4 model trained by personal data should [Export the ONNX Model](#Export-the-ONNX-Model). Refer to [Detailed Deployment Documents](#Detailed-Deployment-Documents) to complete the deployment.
## Export the ONNX Model
diff --git a/examples/vision/detection/scaledyolov4/cpp/README.md b/examples/vision/detection/scaledyolov4/cpp/README.md
index e87b875079..240d962634 100644
--- a/examples/vision/detection/scaledyolov4/cpp/README.md
+++ b/examples/vision/detection/scaledyolov4/cpp/README.md
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
@@ -37,7 +37,7 @@ The visualized result after running is as follows
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
-- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
## ScaledYOLOv4 C++ Interface
@@ -90,4 +90,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/detection/scaledyolov4/python/README.md b/examples/vision/detection/scaledyolov4/python/README.md
index ff67320e39..29bec3ee86 100644
--- a/examples/vision/detection/scaledyolov4/python/README.md
+++ b/examples/vision/detection/scaledyolov4/python/README.md
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
This directory provides examples that `infer.py` fast finishes the deployment of ScaledYOLOv4 on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
```bash
@@ -79,4 +79,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [ScaledYOLOv4 Model Description](..)
- [ScaledYOLOv4 C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/detection/yolor/README.md b/examples/vision/detection/yolor/README.md
index f3b966b8b2..09972feda5 100644
--- a/examples/vision/detection/yolor/README.md
+++ b/examples/vision/detection/yolor/README.md
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
- The YOLOR deployment is based on the code of [YOLOR](https://github.com/WongKinYiu/yolor/releases/tag/weights) and [Pre-trained Model Based on COCO](https://github.com/WongKinYiu/yolor/releases/tag/weights).
- - (1)The *.pt provided by [Official Repository](https://github.com/WongKinYiu/yolor/releases/tag/weights) should [Export the ONNX Model](#导出ONNX模型) to complete the deployment. The *.pose model’s deployment is not supported;
- - (2)The ScaledYOLOv4 model trained by personal data should [Export the ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B). Please refer to [Detailed Deployment Documents](#详细部署文档) to complete the deployment.
+ - (1)The *.pt provided by [Official Repository](https://github.com/WongKinYiu/yolor/releases/tag/weights) should [Export the ONNX Model](#Export-the-ONNX-Model) to complete the deployment. The *.pose model’s deployment is not supported;
+ - (2)The YOLOR model trained by personal data should [Export the ONNX Model](#Export-the-ONNX-Model). Please refer to [Detailed Deployment Documents](#Detailed-Deployment-Documents) to complete the deployment.
## Export the ONNX Model
diff --git a/examples/vision/detection/yolor/cpp/README.md b/examples/vision/detection/yolor/cpp/README.md
index d410fecb61..7f34f71f07 100644
--- a/examples/vision/detection/yolor/cpp/README.md
+++ b/examples/vision/detection/yolor/cpp/README.md
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
@@ -37,7 +37,7 @@ The visualized result after running is as follows
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
-- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
## YOLOR C++ Interface
@@ -90,4 +90,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/detection/yolor/python/README.md b/examples/vision/detection/yolor/python/README.md
index 68e423742a..4059972b6f 100644
--- a/examples/vision/detection/yolor/python/README.md
+++ b/examples/vision/detection/yolor/python/README.md
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
This directory provides examples that `infer.py` ast finishes the deployment of YOLOR on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
```bash
@@ -78,4 +78,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [YOLOR Model Description](..)
- [YOLOR C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/detection/yolov5/a311d/cpp/README.md b/examples/vision/detection/yolov5/a311d/cpp/README.md
index 2271af4350..a1e0c78f48 100755
--- a/examples/vision/detection/yolov5/a311d/cpp/README.md
+++ b/examples/vision/detection/yolov5/a311d/cpp/README.md
@@ -1,45 +1,46 @@
-# YOLOv5 量化模型 C++ 部署示例
+English | [简体中文](README_CN.md)
+# YOLOv5 Quantized Model C++ Deployment Example
-本目录下提供的 `infer.cc`,可以帮助用户快速完成 YOLOv5 量化模型在 A311D 上的部署推理加速。
+`infer.cc` in this directory helps you quickly complete the deployment of the quantized YOLOv5 model on A311D with accelerated inference.
-## 部署准备
-### FastDeploy 交叉编译环境准备
-1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/a311d.md#交叉编译环境搭建)
+## Deployment Preparations
+### FastDeploy Cross-compile Environment Preparations
+1. For the software and hardware requirements and how to set up the cross-compile environment, please refer to [FastDeploy Cross-compile Environment Preparations](../../../../../../docs/en/build_and_install/a311d.md#Cross-compilation-environment-construction).
-### 量化模型准备
-可以直接使用由 FastDeploy 提供的量化模型进行部署,也可以按照如下步骤准备量化模型:
-1. 按照 [YOLOv5](https://github.com/ultralytics/yolov5/releases/tag/v6.1) 官方导出方式导出 ONNX 模型,或者直接使用如下命令下载
+### Quantized Model Preparations
+You can directly deploy the quantized model provided by FastDeploy, or prepare a quantized model yourself as follows:
+1. Export the ONNX model following the official [YOLOv5](https://github.com/ultralytics/yolov5/releases/tag/v6.1) export method, or download it directly with the following command:
```bash
wget https://paddle-slim-models.bj.bcebos.com/act/yolov5s.onnx
```
-2. 准备 300 张左右量化用的图片,也可以使用如下命令下载我们准备好的数据。
+2. Prepare about 300 images for quantization, or use the following command to download the data we have prepared.
```bash
wget https://bj.bcebos.com/fastdeploy/models/COCO_val_320.tar.gz
tar -xf COCO_val_320.tar.gz
```
-3. 使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。
+3. You can use the [one-click automatic compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model yourself, and use the generated quantized model for deployment.
```bash
fastdeploy compress --config_path=./configs/detection/yolov5s_quant.yaml --method='PTQ' --save_dir='./yolov5s_ptq_model_new/'
```
-4. YOLOv5 模型需要异构计算,异构计算文件可以参考:[异构计算](./../../../../../../docs/cn/faq/heterogeneous_computing_on_timvx_npu.md),由于 FastDeploy 已经提供了 YOLOv5 模型,可以先测试我们提供的异构文件,验证精度是否符合要求。
+4. The model requires heterogeneous computation. Please refer to: [Heterogeneous Computation](./../../../../../../docs/en/faq/heterogeneous_computing_on_timvx_npu.md). Since the YOLOv5 model is already provided, you can test the heterogeneous file we provide first to verify whether the accuracy meets the requirements.
```bash
-# 先下载我们提供的模型,解压后将其中的 subgraph.txt 文件拷贝到新量化的模型目录中
+# First download the model we provide, unzip it and copy the subgraph.txt file to the newly quantized model directory.
wget https://bj.bcebos.com/fastdeploy/models/yolov5s_ptq_model.tar.gz
tar -xvf yolov5s_ptq_model.tar.gz
```
-更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
+For more information, please refer to [Model Quantization](../../quantize/README.md)
-## 在 A311D 上部署量化后的 YOLOv5 检测模型
-请按照以下步骤完成在 A311D 上部署 YOLOv5 量化模型:
-1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/a311d.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
+## Deploying the Quantized YOLOv5 Detection Model on A311D
+Please follow these steps to complete the deployment of the quantized YOLOv5 model on A311D:
+1. Cross-compile the FastDeploy library as described in [Cross-compile FastDeploy](../../../../../../docs/en/build_and_install/a311d.md#FastDeploy-cross-compilation-library-compilation-based-on-Paddle-Lite)
-2. 将编译后的库拷贝到当前目录,可使用如下命令:
+2. Copy the compiled library into the current directory with the following command:
```bash
cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/detection/yolov5/a311d/cpp
```
-3. 在当前路径下载部署所需的模型和示例图片:
+3. Download the model and example images required for deployment to the current path:
```bash
cd FastDeploy/examples/vision/detection/yolov5/a311d/cpp
mkdir models && mkdir images
@@ -50,26 +51,26 @@ wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/0000000
cp -r 000000014439.jpg images
```
-4. 编译部署示例,可使入如下命令:
+4. Compile the deployment example. You can run the following lines:
```bash
cd FastDeploy/examples/vision/detection/yolov5/a311d/cpp
mkdir build && cd build
cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
make -j8
make install
-# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
+# After a successful build, an install folder is generated containing the demo binary and the libraries required for deployment.
```
-5. 基于 adb 工具部署 YOLOv5 检测模型到晶晨 A311D
+5. Deploy the YOLOv5 detection model to the Amlogic A311D using the adb tool.
```bash
-# 进入 install 目录
+# Go to the install directory.
cd FastDeploy/examples/vision/detection/yolov5/a311d/cpp/build/install/
-# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
+# Usage: bash run_with_adb.sh <demo to run> <model path> <image path> <DEVICE_ID>
bash run_with_adb.sh infer_demo yolov5s_ptq_model 000000014439.jpg $DEVICE_ID
```
-部署成功后,vis_result.jpg 保存的结果如下:
+After successful deployment, the result saved in vis_result.jpg is as follows:
-需要特别注意的是,在 A311D 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
+Please note that the model deployed on A311D needs to be quantized. You can refer to [Model Quantization](../../../../../../docs/en/quantize.md)
diff --git a/examples/vision/detection/yolov5/a311d/cpp/README_CN.md b/examples/vision/detection/yolov5/a311d/cpp/README_CN.md
new file mode 100644
index 0000000000..4cb0d7380d
--- /dev/null
+++ b/examples/vision/detection/yolov5/a311d/cpp/README_CN.md
@@ -0,0 +1,76 @@
+[English](README.md) | 简体中文
+# YOLOv5 量化模型 C++ 部署示例
+
+本目录下提供的 `infer.cc`,可以帮助用户快速完成 YOLOv5 量化模型在 A311D 上的部署推理加速。
+
+## 部署准备
+### FastDeploy 交叉编译环境准备
+1. 软硬件环境满足要求,以及交叉编译环境的准备,请参考:[FastDeploy 交叉编译环境准备](../../../../../../docs/cn/build_and_install/a311d.md#交叉编译环境搭建)
+
+### 量化模型准备
+可以直接使用由 FastDeploy 提供的量化模型进行部署,也可以按照如下步骤准备量化模型:
+1. 按照 [YOLOv5](https://github.com/ultralytics/yolov5/releases/tag/v6.1) 官方导出方式导出 ONNX 模型,或者直接使用如下命令下载
+```bash
+wget https://paddle-slim-models.bj.bcebos.com/act/yolov5s.onnx
+```
+2. 准备 300 张左右量化用的图片,也可以使用如下命令下载我们准备好的数据。
+```bash
+wget https://bj.bcebos.com/fastdeploy/models/COCO_val_320.tar.gz
+tar -xf COCO_val_320.tar.gz
+```
+3. 使用 FastDeploy 提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署。
+```bash
+fastdeploy compress --config_path=./configs/detection/yolov5s_quant.yaml --method='PTQ' --save_dir='./yolov5s_ptq_model_new/'
+```
+4. YOLOv5 模型需要异构计算,异构计算文件可以参考:[异构计算](./../../../../../../docs/cn/faq/heterogeneous_computing_on_timvx_npu.md),由于 FastDeploy 已经提供了 YOLOv5 模型,可以先测试我们提供的异构文件,验证精度是否符合要求。
+```bash
+# 先下载我们提供的模型,解压后将其中的 subgraph.txt 文件拷贝到新量化的模型目录中
+wget https://bj.bcebos.com/fastdeploy/models/yolov5s_ptq_model.tar.gz
+tar -xvf yolov5s_ptq_model.tar.gz
+```
+
+更多量化相关相关信息可查阅[模型量化](../../quantize/README.md)
+
+## 在 A311D 上部署量化后的 YOLOv5 检测模型
+请按照以下步骤完成在 A311D 上部署 YOLOv5 量化模型:
+1. 交叉编译编译 FastDeploy 库,具体请参考:[交叉编译 FastDeploy](../../../../../../docs/cn/build_and_install/a311d.md#基于-paddlelite-的-fastdeploy-交叉编译库编译)
+
+2. 将编译后的库拷贝到当前目录,可使用如下命令:
+```bash
+cp -r FastDeploy/build/fastdeploy-timvx/ FastDeploy/examples/vision/detection/yolov5/a311d/cpp
+```
+
+3. 在当前路径下载部署所需的模型和示例图片:
+```bash
+cd FastDeploy/examples/vision/detection/yolov5/a311d/cpp
+mkdir models && mkdir images
+wget https://bj.bcebos.com/fastdeploy/models/yolov5s_ptq_model.tar.gz
+tar -xvf yolov5s_ptq_model.tar.gz
+cp -r yolov5s_ptq_model models
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
+cp -r 000000014439.jpg images
+```
+
+4. 编译部署示例,可使入如下命令:
+```bash
+cd FastDeploy/examples/vision/detection/yolov5/a311d/cpp
+mkdir build && cd build
+cmake -DCMAKE_TOOLCHAIN_FILE=${PWD}/../fastdeploy-timvx/toolchain.cmake -DFASTDEPLOY_INSTALL_DIR=${PWD}/../fastdeploy-timvx -DTARGET_ABI=arm64 ..
+make -j8
+make install
+# 成功编译之后,会生成 install 文件夹,里面有一个运行 demo 和部署所需的库
+```
+
+5. 基于 adb 工具部署 YOLOv5 检测模型到晶晨 A311D
+```bash
+# 进入 install 目录
+cd FastDeploy/examples/vision/detection/yolov5/a311d/cpp/build/install/
+# 如下命令表示:bash run_with_adb.sh 需要运行的demo 模型路径 图片路径 设备的DEVICE_ID
+bash run_with_adb.sh infer_demo yolov5s_ptq_model 000000014439.jpg $DEVICE_ID
+```
+
+部署成功后,vis_result.jpg 保存的结果如下:
+
+
+
+需要特别注意的是,在 A311D 上部署的模型需要是量化后的模型,模型的量化请参考:[模型量化](../../../../../../docs/cn/quantize.md)
diff --git a/examples/vision/detection/yolov5/cpp/README.md b/examples/vision/detection/yolov5/cpp/README.md
index 39e105cd9e..1b5e9ad868 100755
--- a/examples/vision/detection/yolov5/cpp/README.md
+++ b/examples/vision/detection/yolov5/cpp/README.md
@@ -4,8 +4,8 @@ English | [简体中文](README_CN.md)
This directory provides examples that `infer.cc` fast finishes the deployment of YOLOv5 on CPU/GPU and GPU accelerated by TensorRT.
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeployPrecompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
@@ -104,4 +104,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/detection/yolov5/python/README.md b/examples/vision/detection/yolov5/python/README.md
index b1c56a0541..0e815dd091 100755
--- a/examples/vision/detection/yolov5/python/README.md
+++ b/examples/vision/detection/yolov5/python/README.md
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
This directory provides examples that `infer.py` fast finishes the deployment of YOLOv5 on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
@@ -82,4 +82,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [YOLOv5 Model Description](..)
- [YOLOv5 C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/detection/yolov5/python/serving/README.md b/examples/vision/detection/yolov5/python/serving/README.md
deleted file mode 120000
index bacd3186b4..0000000000
--- a/examples/vision/detection/yolov5/python/serving/README.md
+++ /dev/null
@@ -1 +0,0 @@
-README_CN.md
\ No newline at end of file
diff --git a/examples/vision/detection/yolov5/python/serving/README.md b/examples/vision/detection/yolov5/python/serving/README.md
new file mode 100644
index 0000000000..b0cb92244d
--- /dev/null
+++ b/examples/vision/detection/yolov5/python/serving/README.md
@@ -0,0 +1,36 @@
+English | [简体中文](README_CN.md)
+
+# YOLOv5 Python Simple Serving Demo
+
+
+## Environment
+
+- 1. Prepare the environment and install the FastDeploy Python whl package. Refer to [download_prebuilt_libraries](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+
+Server:
+```bash
+# Download demo code
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/detection/yolov5/python/serving
+
+# Download model
+wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_infer.tar
+tar xvf yolov5s_infer.tar
+
+# Launch server, change the configurations in server.py to select hardware, backend, etc.
+# and use --host, --port to specify IP and port
+fastdeploy simple_serving --app server:app
+```
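+
+A minimal sketch of the kind of change you might make in `server.py` to switch hardware or backend, assuming FastDeploy's `RuntimeOption` API; where exactly the option is applied is defined by the `server.py` in this directory:
+
+```python
+import fastdeploy as fd
+
+# Hypothetical configuration; the default server.py setup runs on CPU
+option = fd.RuntimeOption()
+option.use_gpu(0)          # assumption: a GPU-enabled FastDeploy build, GPU id 0
+option.use_trt_backend()   # assumption: switch the inference backend to TensorRT
+```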
+
+Client:
+```bash
+# Download demo code
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/detection/yolov5/python/serving
+
+# Download test image
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
+
+# Send request and get inference result (Please adapt the IP and port if necessary)
+python client.py
+```
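+
+If you prefer to build the request yourself instead of running `client.py`, the sketch below shows one way it could look; the endpoint path and payload schema are assumptions here and are ultimately defined by the `server.py`/`client.py` in this directory:
+
+```python
+import base64
+import json
+
+import requests
+
+# Hypothetical endpoint; adapt IP, port and path to your server.py settings
+url = "http://127.0.0.1:8000/fd/yolov5s"
+
+with open("000000014439.jpg", "rb") as f:
+    image_b64 = base64.b64encode(f.read()).decode("utf-8")
+
+# Hypothetical payload schema; check client.py for the exact format
+payload = {"data": {"image": image_b64}, "parameters": {}}
+resp = requests.post(url, headers={"Content-Type": "application/json"}, data=json.dumps(payload))
+print(resp.json())
+```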
diff --git a/examples/vision/detection/yolov5/python/serving/README_CN.md b/examples/vision/detection/yolov5/python/serving/README_CN.md
index 28963fd3f9..b8b98fcf46 100644
--- a/examples/vision/detection/yolov5/python/serving/README_CN.md
+++ b/examples/vision/detection/yolov5/python/serving/README_CN.md
@@ -1,4 +1,4 @@
-简体中文 | [English](README_EN.md)
+简体中文 | [English](README.md)
# YOLOv5 Python轻量服务化部署示例
diff --git a/examples/vision/detection/yolov5/python/serving/README_EN.md b/examples/vision/detection/yolov5/python/serving/README_EN.md
deleted file mode 100644
index b0cb92244d..0000000000
--- a/examples/vision/detection/yolov5/python/serving/README_EN.md
+++ /dev/null
@@ -1,36 +0,0 @@
-English | [简体中文](README_CN.md)
-
-# YOLOv5 Python Simple Serving Demo
-
-
-## Environment
-
-- 1. Prepare environment and install FastDeploy Python whl, refer to [download_prebuilt_libraries](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
-
-Server:
-```bash
-# Download demo code
-git clone https://github.com/PaddlePaddle/FastDeploy.git
-cd FastDeploy/examples/vision/detection/yolov5/python/serving
-
-# Download model
-wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_infer.tar
-tar xvf yolov5s_infer.tar
-
-# Launch server, change the configurations in server.py to select hardware, backend, etc.
-# and use --host, --port to specify IP and port
-fastdeploy simple_serving --app server:app
-```
-
-Client:
-```bash
-# Download demo code
-git clone https://github.com/PaddlePaddle/FastDeploy.git
-cd FastDeploy/examples/vision/detection/yolov5/python/serving
-
-# Download test image
-wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
-
-# Send request and get inference result (Please adapt the IP and port if necessary)
-python client.py
-```
diff --git a/examples/vision/detection/yolov5/quantize/cpp/README.md b/examples/vision/detection/yolov5/quantize/cpp/README.md
index baee6d351d..193cebf6ba 100755
--- a/examples/vision/detection/yolov5/quantize/cpp/README.md
+++ b/examples/vision/detection/yolov5/quantize/cpp/README.md
@@ -1,37 +1,38 @@
-# YOLOv5量化模型 C++部署示例
+English | [简体中文](README_CN.md)
+# YOLOv5 Quantized Model C++ Deployment Example
-本目录下提供的`infer.cc`,可以帮助用户快速完成YOLOv5s量化模型在CPU/GPU上的部署推理加速.
+`infer.cc` in this directory helps you quickly complete the deployment of the quantized YOLOv5s model on CPU/GPU with accelerated inference.
-## 部署准备
-### FastDeploy环境准备
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+## Deployment Preparations
+### FastDeploy Environment Preparations
+- 1. For the software and hardware requirements, please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
+- 2. For the installation of FastDeploy Python whl package, please refer to [FastDeploy Python Installation](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
-### 量化模型准备
-- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
-- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.
+### Quantized Model Preparations
+- 1. You can directly use the quantized model provided by FastDeploy for deployment.
+- 2. You can use the [one-click automatic compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model yourself, and use the generated quantized model for deployment.
-## 以量化后的YOLOv5s模型为例, 进行部署
-在本目录执行如下命令即可完成编译,以及量化模型部署.支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+## Take the Quantized YOLOv5s Model as an example for Deployment
+Run the following commands in this directory to compile and deploy the quantized model. FastDeploy version 0.7.0 or higher is required (x.x.x>=0.7.0).
```bash
mkdir build
cd build
-# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+# Download pre-compiled FastDeploy libraries. You can choose the appropriate version from `pre-compiled FastDeploy libraries` mentioned above.
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
tar xvf fastdeploy-linux-x64-x.x.x.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
make -j
-#下载FastDeloy提供的yolov5s量化模型文件和测试图片
+# Download the yolov5s quantized model and test image provided by FastDeploy.
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar
tar -xvf yolov5s_quant.tar
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
-# 在CPU上使用ONNX Runtime推理量化模型
+# Run inference on the quantized model with ONNX Runtime on CPU.
./infer_demo yolov5s_quant 000000014439.jpg 0
-# 在GPU上使用TensorRT推理量化模型
+# Run inference on the quantized model with TensorRT on GPU.
./infer_demo yolov5s_quant 000000014439.jpg 1
-# 在GPU上使用Paddle-TensorRT推理量化模型
+# Run inference on the quantized model with Paddle-TensorRT on GPU.
./infer_demo yolov5s_quant 000000014439.jpg 2
```
diff --git a/examples/vision/detection/yolov5/quantize/cpp/README_CN.md b/examples/vision/detection/yolov5/quantize/cpp/README_CN.md
new file mode 100644
index 0000000000..e25b0239fe
--- /dev/null
+++ b/examples/vision/detection/yolov5/quantize/cpp/README_CN.md
@@ -0,0 +1,38 @@
+[English](README.md) | 简体中文
+# YOLOv5量化模型 C++部署示例
+
+本目录下提供的`infer.cc`,可以帮助用户快速完成YOLOv5s量化模型在CPU/GPU上的部署推理加速.
+
+## 部署准备
+### FastDeploy环境准备
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+### 量化模型准备
+- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
+- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.
+
+## 以量化后的YOLOv5s模型为例, 进行部署
+在本目录执行如下命令即可完成编译,以及量化模型部署.支持此模型需保证FastDeploy版本0.7.0以上(x.x.x>=0.7.0)
+```bash
+mkdir build
+cd build
+# 下载FastDeploy预编译库,用户可在上文提到的`FastDeploy预编译库`中自行选择合适的版本使用
+wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz
+tar xvf fastdeploy-linux-x64-x.x.x.tgz
+cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x
+make -j
+
+#下载FastDeloy提供的yolov5s量化模型文件和测试图片
+wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar
+tar -xvf yolov5s_quant.tar
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
+
+
+# 在CPU上使用ONNX Runtime推理量化模型
+./infer_demo yolov5s_quant 000000014439.jpg 0
+# 在GPU上使用TensorRT推理量化模型
+./infer_demo yolov5s_quant 000000014439.jpg 1
+# 在GPU上使用Paddle-TensorRT推理量化模型
+./infer_demo yolov5s_quant 000000014439.jpg 2
+```
diff --git a/examples/vision/detection/yolov5/quantize/python/README.md b/examples/vision/detection/yolov5/quantize/python/README.md
index 9108e256e6..0cd8ae5687 100755
--- a/examples/vision/detection/yolov5/quantize/python/README.md
+++ b/examples/vision/detection/yolov5/quantize/python/README.md
@@ -1,31 +1,32 @@
+English | [简体中文](README_CN.md)
-# YOLOv5s量化模型 Python部署示例
+# YOLOv5s Quantized Model Python Deployment Example
-本目录下提供的`infer.py`,可以帮助用户快速完成YOLOv5量化模型在CPU/GPU上的部署推理加速.
+`infer.py` in this directory helps you quickly complete the deployment of the quantized YOLOv5 model on CPU/GPU with accelerated inference.
-## 部署准备
-### FastDeploy环境准备
-- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+## Deployment Preparations
+### FastDeploy Environment Preparations
+- 1. For the software and hardware requirements, please refer to [FastDeploy Environment Requirements](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
+- 2. For the installation of FastDeploy Python whl package, please refer to [FastDeploy Python Installation](../../../../../../docs/en/build_and_install/download_prebuilt_libraries.md).
-### 量化模型准备
-- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
-- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.
+### Quantized Model Preparations
+- 1. You can directly use the quantized model provided by FastDeploy for deployment.
+- 2. You can use the [one-click automatic compression tool](../../../../../../tools/common_tools/auto_compression/) provided by FastDeploy to quantize the model yourself, and use the generated quantized model for deployment.
-## 以量化后的YOLOv5s模型为例, 进行部署
+## Take the Quantized YOLOv5s Model as an example for Deployment
```bash
-#下载部署示例代码
+# Download sample deployment code.
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd examples/vision/detection/yolov5/quantize/python
-#下载FastDeloy提供的yolov5s量化模型文件和测试图片
+# Download the yolov5s quantized model and test image provided by FastDeploy.
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar
tar -xvf yolov5s_quant.tar
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
-# 在CPU上使用ONNX Runtime推理量化模型
+# Run inference on the quantized model with ONNX Runtime on CPU.
python infer.py --model yolov5s_quant --image 000000014439.jpg --device cpu --backend ort
-# 在GPU上使用TensorRT推理量化模型
+# Run inference on the quantized model with TensorRT on GPU.
python infer.py --model yolov5s_quant --image 000000014439.jpg --device gpu --backend trt
-# 在GPU上使用Paddle-TensorRT推理量化模型
+# Run inference on the quantized model with Paddle-TensorRT on GPU.
python infer.py --model yolov5s_quant --image 000000014439.jpg --device gpu --backend pptrt
```
diff --git a/examples/vision/detection/yolov5/quantize/python/README_CN.md b/examples/vision/detection/yolov5/quantize/python/README_CN.md
new file mode 100644
index 0000000000..2556f90340
--- /dev/null
+++ b/examples/vision/detection/yolov5/quantize/python/README_CN.md
@@ -0,0 +1,32 @@
+[English](README.md) | 简体中文
+# YOLOv5s量化模型 Python部署示例
+本目录下提供的`infer.py`,可以帮助用户快速完成YOLOv5量化模型在CPU/GPU上的部署推理加速.
+
+## 部署准备
+### FastDeploy环境准备
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+
+### 量化模型准备
+- 1. 用户可以直接使用由FastDeploy提供的量化模型进行部署.
+- 2. 用户可以使用FastDeploy提供的[一键模型自动化压缩工具](../../../../../../tools/common_tools/auto_compression/),自行进行模型量化, 并使用产出的量化模型进行部署.
+
+
+## 以量化后的YOLOv5s模型为例, 进行部署
+```bash
+#下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd examples/vision/detection/yolov5/quantize/python
+
+#下载FastDeloy提供的yolov5s量化模型文件和测试图片
+wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov5s_quant.tar
+tar -xvf yolov5s_quant.tar
+wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000014439.jpg
+
+# 在CPU上使用ONNX Runtime推理量化模型
+python infer.py --model yolov5s_quant --image 000000014439.jpg --device cpu --backend ort
+# 在GPU上使用TensorRT推理量化模型
+python infer.py --model yolov5s_quant --image 000000014439.jpg --device gpu --backend trt
+# 在GPU上使用Paddle-TensorRT推理量化模型
+python infer.py --model yolov5s_quant --image 000000014439.jpg --device gpu --backend pptrt
+```
diff --git a/examples/vision/detection/yolov5/serving/README.md b/examples/vision/detection/yolov5/serving/README.md
index 50500fa398..3f125013fa 100644
--- a/examples/vision/detection/yolov5/serving/README.md
+++ b/examples/vision/detection/yolov5/serving/README.md
@@ -55,4 +55,4 @@ output_name: detction_result
-The default is to run ONNXRuntime on CPU. If developers need to run it on GPU or other inference engines, please see the [Configs File](../../../../../serving/docs/zh_CN/model_configuration.md) to modify the configs in `models/runtime/config.pbtxt`.
+The default is to run ONNXRuntime on CPU. If developers need to run it on GPU or other inference engines, please see the [Configs File](../../../../../serving/docs/EN/model_configuration-en.md) to modify the configs in `models/runtime/config.pbtxt`.
diff --git a/examples/vision/detection/yolov5lite/README.md b/examples/vision/detection/yolov5lite/README.md
index aa4343a3d7..8487cb93b8 100644
--- a/examples/vision/detection/yolov5lite/README.md
+++ b/examples/vision/detection/yolov5lite/README.md
@@ -4,8 +4,8 @@ English | [简体中文](README_CN.md)
- The YOLOv5Lite Deployment is based on the code of [YOLOv5-Lite](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4)
and [Pre-trained Model Based on COCO](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4)。
- - (1)The *.pt provided by [Official Repository](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4) should [Export the ONNX Model](#导出ONNX模型)to complete the deployment;
- - (2)The YOLOv5Lite model trained by personal data should [Export the ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B). Refer to [Detailed Deployment Documents](#详细部署文档) to complete the deployment.
+ - (1)The *.pt provided by [Official Repository](https://github.com/ppogg/YOLOv5-Lite/releases/tag/v1.4) should [Export the ONNX Model](#Export-the-ONNX-Model) to complete the deployment;
+ - (2)The YOLOv5Lite model trained by personal data should [Export the ONNX Model](#Export-the-ONNX-Model). Refer to [Detailed Deployment Documents](#Detailed-Deployment-Documents) to complete the deployment.
## Export the ONNX Model
diff --git a/examples/vision/detection/yolov5lite/cpp/README.md b/examples/vision/detection/yolov5lite/cpp/README.md
index 7edbc3b739..52b9b844a9 100644
--- a/examples/vision/detection/yolov5lite/cpp/README.md
+++ b/examples/vision/detection/yolov5lite/cpp/README.md
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
@@ -37,7 +37,7 @@ The visualized result after running is as follows
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
-- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
## YOLOv5Lite C++ Interface
@@ -90,4 +90,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/detection/yolov5lite/python/README.md b/examples/vision/detection/yolov5lite/python/README.md
index 55a05e1273..0abdc2d658 100644
--- a/examples/vision/detection/yolov5lite/python/README.md
+++ b/examples/vision/detection/yolov5lite/python/README.md
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
This directory provides examples that `infer.py` fast finishes the deployment of YOLOv5Lite on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
@@ -79,4 +79,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [YOLOv5Lite Model Description](..)
- [YOLOv5Lite C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/detection/yolov6/README.md b/examples/vision/detection/yolov6/README.md
index 6a63dc07e0..ae707ced85 100644
--- a/examples/vision/detection/yolov6/README.md
+++ b/examples/vision/detection/yolov6/README.md
@@ -6,7 +6,7 @@ English | [简体中文](README_CN.md)
- The YOLOv6 deployment is based on [YOLOv6](https://github.com/meituan/YOLOv6/releases/tag/0.1.0) and [Pre-trained Model Based on COCO](https://github.com/meituan/YOLOv6/releases/tag/0.1.0).
- (1)The *.onnx provided by [Official Repository](https://github.com/meituan/YOLOv6/releases/tag/0.1.0) can directly conduct deployemnt;
- - (2)Personal models trained by developers should export the ONNX model. Refer to [Detailed Deployment Documents](#详细部署文档) to complete the deployment.
+ - (2)Personal models trained by developers should export the ONNX model. Refer to [Detailed Deployment Documents](#Detailed-Deployment-Documents) to complete the deployment.
diff --git a/examples/vision/detection/yolov6/cpp/README.md b/examples/vision/detection/yolov6/cpp/README.md
index 75633dfead..4a9411930b 100755
--- a/examples/vision/detection/yolov6/cpp/README.md
+++ b/examples/vision/detection/yolov6/cpp/README.md
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
@@ -57,7 +57,7 @@ The visualized result after running is as follows
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
-- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
## YOLOv6 C++ Interface
@@ -110,4 +110,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/detection/yolov6/python/README.md b/examples/vision/detection/yolov6/python/README.md
index 82d6ce4abd..789df97474 100755
--- a/examples/vision/detection/yolov6/python/README.md
+++ b/examples/vision/detection/yolov6/python/README.md
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeployEnvironment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
This directory provides examples that `infer.py` fast finishes the deployment of YOLOv6 on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
@@ -93,4 +93,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [YOLOv6 Model Description](..)
- [YOLOv6 C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/detection/yolov7/README.md b/examples/vision/detection/yolov7/README.md
index 7a89007a76..153ab13d50 100644
--- a/examples/vision/detection/yolov7/README.md
+++ b/examples/vision/detection/yolov7/README.md
@@ -4,7 +4,7 @@ English | [简体中文](README_CN.md)
- YOLOv7 deployment is based on [YOLOv7](https://github.com/WongKinYiu/yolov7/tree/v0.1) branching code, and [COCO Pre-Trained Models](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1).
- - (1)The *.pt provided by the [Official Library](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) can be deployed after the [export ONNX model](#export ONNX model) operation; *.trt and *.pose models do not support deployment.
+ - (1)The *.pt provided by the [Official Library](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) can be deployed after the [export ONNX model](#Export-ONNX-Model) operation; *.trt and *.pose models do not support deployment.
- (2)As for YOLOv7 model trained on customized data, please follow the operations guidelines in [Export ONNX model](#Export-ONNX-Model) and then refer to [Detailed Deployment Tutorials](#Detailed-Deployment-Tutorials) to complete the deployment.
## Export ONNX Model
diff --git a/examples/vision/detection/yolov7/cpp/README.md b/examples/vision/detection/yolov7/cpp/README.md
index ab5e086074..e36875e0cd 100755
--- a/examples/vision/detection/yolov7/cpp/README.md
+++ b/examples/vision/detection/yolov7/cpp/README.md
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
@@ -50,7 +50,7 @@ The visualized result after running is as follows
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
-- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
## YOLOv7 C++ Interface
@@ -103,4 +103,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/detection/yolov7/python/README.md b/examples/vision/detection/yolov7/python/README.md
index 2df6126ede..17cb54b6b0 100755
--- a/examples/vision/detection/yolov7/python/README.md
+++ b/examples/vision/detection/yolov7/python/README.md
@@ -5,8 +5,8 @@ English | [简体中文](README_CN.md)
Two steps before deployment:
-- 1. The hardware and software environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Install FastDeploy Python whl package. Please refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. The hardware and software environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Please refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
This doc provides a quick `infer.py` demo of YOLOv7 deployment on CPU/GPU, and accelerated GPU deployment by TensorRT. Run the following command:
diff --git a/examples/vision/detection/yolov7end2end_ort/README.md b/examples/vision/detection/yolov7end2end_ort/README.md
index 3b393b7ff7..32395203c8 100644
--- a/examples/vision/detection/yolov7end2end_ort/README.md
+++ b/examples/vision/detection/yolov7end2end_ort/README.md
@@ -4,8 +4,8 @@ English | [简体中文](README_CN.md)
The YOLOv7End2EndORT deployment is based on [YOLOv7](https://github.com/WongKinYiu/yolov7/tree/v0.1)branch code and [Pre-trained Model Based on COCO](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1). Attention: YOLOv7End2EndORT is designed for the inference of exported End2End models in the [ORT_NMS](https://github.com/WongKinYiu/yolov7/blob/main/models/experimental.py#L87) version in YOLOv7. YOLOv7 class is for the inference of models without nms. YOLOv7End2EndTRT is for the inference of End2End models in the [TRT_NMS](https://github.com/WongKinYiu/yolov7/blob/main/models/experimental.py#L111) version.
- - (1)*.pt provided by [Official Repository](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) should [Export the ONNX Model](#导出ONNX模型) to complete the employment. The deployment of *.trt and *.pose models is not supported.
- - (2)The YOLOv7 model trained by personal data should [Export the ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B). Refer to [Detailed Deployment Documents](#详细部署文档) to complete the deployment.
+ - (1)*.pt provided by [Official Repository](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) should [Export the ONNX Model](#Export-the-ONNX-Model) to complete the deployment. The deployment of *.trt and *.pose models is not supported.
+ - (2)The YOLOv7 model trained by personal data should [Export the ONNX Model](#Export-the-ONNX-Model). Refer to [Detailed Deployment Documents](#Detailed-Deployment-Documents) to complete the deployment.
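+
+As the note above explains, the FastDeploy class has to match how the ONNX model was exported. The Python sketch below illustrates that mapping; the class names follow the FastDeploy detection API mentioned in this document, while the file names and exact constructor arguments are assumptions to be adapted to your own exports.
+
+```python
+import cv2
+import fastdeploy as fd
+
+# Placeholder image and model paths, for illustration only.
+im = cv2.imread("test.jpg")
+
+# Exported WITHOUT NMS -> use the plain YOLOv7 class.
+model = fd.vision.detection.YOLOv7("yolov7.onnx")
+
+# Exported End2End with ORT_NMS -> use YOLOv7End2EndORT instead:
+# model = fd.vision.detection.YOLOv7End2EndORT("yolov7_end2end_ort.onnx")
+
+# Exported End2End with TRT_NMS -> use YOLOv7End2EndTRT instead:
+# model = fd.vision.detection.YOLOv7End2EndTRT("yolov7_end2end_trt.onnx")
+
+print(model.predict(im))
+```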
## Export the ONNX Model
diff --git a/examples/vision/detection/yolov7end2end_ort/cpp/README.md b/examples/vision/detection/yolov7end2end_ort/cpp/README.md
index fc2128a1bf..077c4d1f2e 100644
--- a/examples/vision/detection/yolov7end2end_ort/cpp/README.md
+++ b/examples/vision/detection/yolov7end2end_ort/cpp/README.md
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
Two steps before deployment
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Taking the inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
@@ -39,7 +39,7 @@ The visualized result after running is as follows
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
-- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
Attention: YOLOv7End2EndORT is designed for the inference of End2End models with [ORT_NMS](https://github.com/WongKinYiu/yolov7/blob/main/models/experimental.py#L87) among the YOLOv7 exported models. For models without nms, use YOLOv7 class for inference. For End2End models with [TRT_NMS](https://github.com/WongKinYiu/yolov7/blob/main/models/experimental.py#L111), use YOLOv7End2EndTRT for inference.
@@ -92,4 +92,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/detection/yolov7end2end_ort/python/README.md b/examples/vision/detection/yolov7end2end_ort/python/README.md
index 09cb20ccd0..bf09c9dc8e 100644
--- a/examples/vision/detection/yolov7end2end_ort/python/README.md
+++ b/examples/vision/detection/yolov7end2end_ort/python/README.md
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
Two steps before deployment
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
This directory provides examples that `infer.py` fast finishes the deployment of YOLOv7End2End on CPU/GPU accelerated by TensorRT. The script is as follows
@@ -83,4 +83,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [YOLOv7End2EndORT Model Description](..)
- [YOLOv7End2EndORT C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/detection/yolov7end2end_trt/README.md b/examples/vision/detection/yolov7end2end_trt/README.md
index f05c53bdf1..24a9dcf516 100644
--- a/examples/vision/detection/yolov7end2end_trt/README.md
+++ b/examples/vision/detection/yolov7end2end_trt/README.md
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
The YOLOv7End2EndTRT deployment is based on [YOLOv7](https://github.com/WongKinYiu/yolov7/tree/v0.1) branch code and [Pre-trained Model Based on COCO](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1). Attention: YOLOv7End2EndTRT is designed for the inference of exported End2End models in the [TRT_NMS](https://github.com/WongKinYiu/yolov7/blob/main/models/experimental.py#L111) version in YOLOv7. YOLOv7 class is for the inference of models without nms. YOLOv7End2EndORT is for the inference of End2End models in the [ORT_NMS](https://github.com/WongKinYiu/yolov7/blob/main/models/experimental.py#L87) version.
- - (1)*.pt provided by [Official Repository](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) should [Export the ONNX Model](#导出ONNX模型) to complete the deployment. The deployment of *.trt and *.pose models is not supported.
- - (2)The YOLOv7 model trained by personal data should [Export the ONNX Model](#%E5%AF%BC%E5%87%BAONNX%E6%A8%A1%E5%9E%8B). Please refer to [Detailed Deployment Documents](#详细部署文档) to complete the deployment.
+ - (1)*.pt provided by [Official Repository](https://github.com/WongKinYiu/yolov7/releases/tag/v0.1) should [Export the ONNX Model](#Export-the-ONNX-Model) to complete the deployment. The deployment of *.trt and *.pose models is not supported.
+ - (2)The YOLOv7 model trained by personal data should [Export the ONNX Model](#Export-the-ONNX-Model). Please refer to [Detailed Deployment Documents](#Detailed-Deployment-Documents) to complete the deployment.
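+
+Since the Python example directory for this class only covers TensorRT-accelerated deployment, the sketch below shows one plausible way to wire it up: select the GPU device and the TensorRT backend through RuntimeOption before constructing the model. The file name is a placeholder, and the option names should be checked against the FastDeploy version you install.
+
+```python
+import cv2
+import fastdeploy as fd
+
+# Assumed configuration: GPU 0 with the TensorRT backend.
+option = fd.RuntimeOption()
+option.use_gpu(0)
+option.use_trt_backend()
+
+# "yolov7_end2end_trt.onnx" is a placeholder for your TRT_NMS export.
+model = fd.vision.detection.YOLOv7End2EndTRT(
+    "yolov7_end2end_trt.onnx", runtime_option=option)
+
+im = cv2.imread("test.jpg")
+print(model.predict(im))
+```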
diff --git a/examples/vision/detection/yolov7end2end_trt/cpp/README.md b/examples/vision/detection/yolov7end2end_trt/cpp/README.md
index c39a8105b0..b87f60a27e 100644
--- a/examples/vision/detection/yolov7end2end_trt/cpp/README.md
+++ b/examples/vision/detection/yolov7end2end_trt/cpp/README.md
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment o
Two steps before deployment
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Taking the inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
@@ -87,4 +87,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/detection/yolov7end2end_trt/python/README.md b/examples/vision/detection/yolov7end2end_trt/python/README.md
index be7b3eec7c..52b5cc1ebd 100644
--- a/examples/vision/detection/yolov7end2end_trt/python/README.md
+++ b/examples/vision/detection/yolov7end2end_trt/python/README.md
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
Two steps before deployment
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Install FastDeploy Python whl p ackage. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
This directory provides examples that `infer.py` fast finishes the deployment of YOLOv7End2EndTRT accelerated by TensorRT. The script is as follows
```bash
@@ -78,4 +78,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [YOLOv7End2EndTRT Model Description](..)
- [YOLOv7End2EndTRT C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/detection/yolox/README.md b/examples/vision/detection/yolox/README.md
index 13fe8ea4d2..734bf840d0 100644
--- a/examples/vision/detection/yolox/README.md
+++ b/examples/vision/detection/yolox/README.md
@@ -2,10 +2,10 @@ English | [简体中文](README_CN.md)
# YOLOX Ready-to-deploy Model
-- The YOLOX deployment is based on [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX/tree/0.1.1rc0) and [coco's pre-trained models](https://github.com/Megvii-BaseDetection/YOLOX/releases/tag/0.1.1rc0)。
+- The YOLOX deployment is based on [YOLOX](https://github.com/Megvii-BaseDetection/YOLOX/tree/0.1.1rc0) and [COCO pre-trained models](https://github.com/Megvii-BaseDetection/YOLOX/releases/tag/0.1.1rc0).
- (1)The *.pth provided by [Official Repository](https://github.com/Megvii-BaseDetection/YOLOX/releases/tag/0.1.1rc0) should export the ONNX model to complete the deployment;
- - (2)The YOLOX model trained by personal data should export the ONNX model. Refer to [Detailed Deployment Documents](#详细部署文档) to complete the deployment.
+ - (2)The YOLOX model trained by personal data should export the ONNX model. Refer to [Detailed Deployment Documents](#Detailed-Deployment-Documents) to complete the deployment.
diff --git a/examples/vision/detection/yolox/cpp/README.md b/examples/vision/detection/yolox/cpp/README.md
index f7343de01a..d415bf54b3 100644
--- a/examples/vision/detection/yolox/cpp/README.md
+++ b/examples/vision/detection/yolox/cpp/README.md
@@ -6,8 +6,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
Two steps before deployment
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
@@ -39,7 +39,7 @@ The visualized result after running is as follows
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
-- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
## YOLOX C++ Interface
@@ -94,4 +94,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/detection/yolox/python/README.md b/examples/vision/detection/yolox/python/README.md
index 308802698b..5050a7b2b3 100644
--- a/examples/vision/detection/yolox/python/README.md
+++ b/examples/vision/detection/yolox/python/README.md
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
Two steps before deployment
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
This directory provides examples that `infer.py` fast finishes the deployment of YOLOX on CPU/GPU and GPU accelerated by TensorRT. The script is as follows
@@ -77,4 +77,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [YOLOX Model Description](..)
- [YOLOX C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/facealign/face_landmark_1000/cpp/README.md b/examples/vision/facealign/face_landmark_1000/cpp/README.md
index 00a5c2b40f..a33d6ba8e6 100644
--- a/examples/vision/facealign/face_landmark_1000/cpp/README.md
+++ b/examples/vision/facealign/face_landmark_1000/cpp/README.md
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
Before deployment, two steps require confirmation.
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 1.0.2 or above (x.x.x>=1.0.2), or nightly built version is required to support this model.
@@ -38,7 +38,7 @@ The visualized result after running is as follows
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
-- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
## FaceLandmark1000 C++ Interface
@@ -83,4 +83,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/facealign/face_landmark_1000/python/README.md b/examples/vision/facealign/face_landmark_1000/python/README.md
index 7521023480..ab03ad99a2 100644
--- a/examples/vision/facealign/face_landmark_1000/python/README.md
+++ b/examples/vision/facealign/face_landmark_1000/python/README.md
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
This directory provides examples that `infer.py` fast finishes the deployment of FaceLandmark1000 models on CPU/GPU and GPU accelerated by TensorRT. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model. The script is as follows
@@ -68,4 +68,4 @@ FaceLandmark1000 model loading and initialization, among which model_file is the
- [FaceLandmark1000 Model Description](..)
- [FaceLandmark1000 C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/facealign/pfld/cpp/README.md b/examples/vision/facealign/pfld/cpp/README.md
index c2040faa07..221d90c02c 100644
--- a/examples/vision/facealign/pfld/cpp/README.md
+++ b/examples/vision/facealign/pfld/cpp/README.md
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 1.0.2 or above (x.x.x>=1.0.2), or the nightly built version is required to support this model.
@@ -38,7 +38,7 @@ The visualized result after running is as follows
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
-- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
## PFLD C++ Interface
@@ -83,4 +83,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/facealign/pfld/python/README.md b/examples/vision/facealign/pfld/python/README.md
index 25f665eb55..03e5b07eff 100755
--- a/examples/vision/facealign/pfld/python/README.md
+++ b/examples/vision/facealign/pfld/python/README.md
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
This directory provides examples that `infer.py` fast finishes the deployment of PFLD on CPU/GPU and GPU accelerated by TensorRT. FastDeploy version 0.6.0 or above is required to support this model. The script is as follows
@@ -39,7 +39,7 @@ fd.vision.facealign.PFLD(model_file, params_file=None, runtime_option=None, mode
PFLD model loading and initialization, among which model_file is the exported ONNX model format
-**参数**
+**Parameters**
> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. No need to set when the model is in ONNX format
@@ -68,4 +68,4 @@ PFLD model loading and initialization, among which model_file is the exported ON
- [PFLD Model Description](..)
- [PFLD C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/facealign/pipnet/cpp/README.md b/examples/vision/facealign/pipnet/cpp/README.md
index cd55a20f6f..8dab95bc49 100644
--- a/examples/vision/facealign/pipnet/cpp/README.md
+++ b/examples/vision/facealign/pipnet/cpp/README.md
@@ -5,8 +5,8 @@ This directory provides examples that `infer.cc` fast finishes the deployment of
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Download the precompiled deployment library and samples code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
Taking the CPU inference on Linux as an example, the compilation test can be completed by executing the following command in this directory. FastDeploy version 0.7.0 or above (x.x.x>=0.7.0) is required to support this model.
@@ -38,7 +38,7 @@ The visualized result after running is as follows
The above command works for Linux or MacOS. For SDK use-pattern in Windows, refer to:
-- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)
+- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/en/faq/use_sdk_on_windows.md)
## PIPNet C++ Interface
@@ -83,4 +83,4 @@ Users can modify the following pre-processing parameters to their needs, which a
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/facealign/pipnet/python/README.md b/examples/vision/facealign/pipnet/python/README.md
index 297a0bfd19..1d0ab6e199 100644
--- a/examples/vision/facealign/pipnet/python/README.md
+++ b/examples/vision/facealign/pipnet/python/README.md
@@ -3,8 +3,8 @@ English | [简体中文](README_CN.md)
Before deployment, two steps require confirmation
-- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 1. Software and hardware should meet the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
+- 2. Install FastDeploy Python whl package. Refer to [FastDeploy Python Installation](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
This directory provides examples that `infer.py` fast finishes the deployment of PIPNet on CPU/GPU and GPU accelerated by TensorRT. FastDeploy version 0.7.0 or above is required to support this model. The script is as follows
@@ -69,4 +69,4 @@ PIPNet model loading and initialization, among which model_file is the exported
- [PIPNet Model Description](..)
- [PIPNet C++ Deployment](../cpp)
- [Model Prediction Results](../../../../../docs/api/vision_results/)
-- [How to switch the model inference backend engine](../../../../../docs/cn/faq/how_to_change_backend.md)
+- [How to switch the model inference backend engine](../../../../../docs/en/faq/how_to_change_backend.md)
diff --git a/examples/vision/facealign/pipnet/python/README_CN.md b/examples/vision/facealign/pipnet/python/README_CN.md
index e3dae73337..8b1562c8cf 100644
--- a/examples/vision/facealign/pipnet/python/README_CN.md
+++ b/examples/vision/facealign/pipnet/python/README_CN.md
@@ -1,34 +1,73 @@
[English](README.md) | 简体中文
-# PIPNet 模型部署
-## 模型版本说明
+# PIPNet Python部署示例
-- [PIPNet](https://github.com/jhb86253817/PIPNet/tree/b9eab58)
+在部署前,需确认以下两个步骤
-## 支持模型列表
+- 1. 软硬件环境满足要求,参考[FastDeploy环境要求](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
+- 2. FastDeploy Python whl包安装,参考[FastDeploy Python安装](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
-目前FastDeploy支持如下模型的部署
+本目录下提供`infer.py`快速完成PIPNet在CPU/GPU,以及GPU上通过TensorRT加速部署的示例,保证 FastDeploy 版本 >= 0.7.0 支持PIPNet模型。执行如下脚本即可完成
-- [PIPNet 模型](https://github.com/jhb86253817/PIPNet)
+```bash
+#下载部署示例代码
+git clone https://github.com/PaddlePaddle/FastDeploy.git
+cd FastDeploy/examples/vision/facealign/pipnet/python
-## 下载预训练模型
+# 下载PIPNet模型文件和测试图片以及视频
+## 原版ONNX模型
+wget https://bj.bcebos.com/paddlehub/fastdeploy/pipnet_resnet18_10x19x32x256_aflw.onnx
+wget https://bj.bcebos.com/paddlehub/fastdeploy/facealign_input.png
-为了方便开发者的测试,下面提供了PIPNet导出的各系列模型,开发者可直接下载使用。
+# CPU推理
+python infer.py --model pipnet_resnet18_10x19x32x256_aflw.onnx --image facealign_input.png --device cpu
+# GPU推理
+python infer.py --model pipnet_resnet18_10x19x32x256_aflw.onnx --image facealign_input.png --device gpu
+# TRT推理
+python infer.py --model pipnet_resnet18_10x19x32x256_aflw.onnx --image facealign_input.png --device gpu --backend trt
+```
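+
+A rough Python sketch of what such an `infer.py` call does with the FastDeploy API is shown below. The class name is taken from the face-alignment API referenced in this document, the runtime-option handling is an assumption, and the file names reuse the downloads above.
+
+```python
+import cv2
+import fastdeploy as fd
+
+# Assumed setup: run the 19-landmark AFLW model downloaded above on CPU.
+# Pass a configured fd.RuntimeOption() as runtime_option to use GPU or TensorRT instead.
+model = fd.vision.facealign.PIPNet(
+    "pipnet_resnet18_10x19x32x256_aflw.onnx")
+
+im = cv2.imread("facealign_input.png")
+result = model.predict(im)
+print(result)  # prints the predicted face landmarks
+```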
-| 模型 | 参数大小 | 精度 | 备注 |
-|:---------------------------------------------------------------- |:----- |:----- | :------ |
-| [PIPNet19_ResNet18_AFLW](https://bj.bcebos.com/paddlehub/fastdeploy/pipnet_resnet18_10x19x32x256_aflw.onnx) | 45.6M | - |
-| [PIPNet29_ResNet18_COFW](https://bj.bcebos.com/paddlehub/fastdeploy/pipnet_resnet18_10x29x32x256_cofw.onnx) | 46.1M | - |
-| [PIPNet68_ResNet18_300W](https://bj.bcebos.com/paddlehub/fastdeploy/pipnet_resnet18_10x68x32x256_300w.onnx) | 47.9M | - |
-| [PIPNet98_ResNet18_WFLW](https://bj.bcebos.com/paddlehub/fastdeploy/pipnet_resnet18_10x98x32x256_wflw.onnx) | 49.3M | - |
-| [PIPNet19_ResNet101_AFLW](https://bj.bcebos.com/paddlehub/fastdeploy/pipnet_resnet101_10x19x32x256_aflw.onnx) | 173.4M | - |
-| [PIPNet29_ResNet101_COFW](https://bj.bcebos.com/paddlehub/fastdeploy/pipnet_resnet101_10x29x32x256_cofw.onnx) | 175.3M | - |
-| [PIPNet68_ResNet101_300W](https://bj.bcebos.com/paddlehub/fastdeploy/pipnet_resnet101_10x68x32x256_300w.onnx) | 182.6M | - |
-| [PIPNet98_ResNet101_WFLW](https://bj.bcebos.com/paddlehub/fastdeploy/pipnet_resnet101_10x98x32x256_wflw.onnx) | 188.3M | - |
+运行完成可视化结果如下图所示
+