[Model] Add facedet model: CenterFace (PaddlePaddle#1131)
* cpp example run success

* add landmarks

* fix reviewed problem

* add pybind

* add readme in examples

* fix reviewed problem

* new file:   tests/models/test_centerface.py

* fix reviewed problem 230202
GodIsBoom authored Feb 7, 2023
1 parent d3d9148 commit 1c115bb
Showing 21 changed files with 1,369 additions and 0 deletions.
25 changes: 25 additions & 0 deletions examples/vision/facedet/centerface/README.md
@@ -0,0 +1,25 @@
English | [简体中文](README_CN.md)

# CenterFace Ready-to-deploy Model

- The CenterFace deployment model comes from the [CenterFace](https://github.com/Star-Clouds/CenterFace.git) repository and its [pre-trained model based on WIDER FACE](https://github.com/Star-Clouds/CenterFace.git)
  - (1) The *.onnx models provided by the [official repository](https://github.com/Star-Clouds/CenterFace.git) can be deployed directly;
  - (2) The CenterFace training code is not open source, so developers cannot train the model on their own data.


## Download Pre-trained ONNX Model

For developers' testing, the exported CenterFace model is provided below and can be downloaded directly. (The accuracy figures in the following table come from the official source repository, evaluated on the WIDER FACE test set.)
| Model | Size | Accuracy (Easy Set, Medium Set, Hard Set) | Note |
|:---------------------------------------------------------------- |:----- |:----- |:---- |
| [CenterFace](https://bj.bcebos.com/paddlehub/fastdeploy/CenterFace.onnx) | 7.2MB | 93.2%, 92.1%, 87.3% | This model file is sourced from [CenterFace](https://github.com/Star-Clouds/CenterFace.git), MIT license |
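
Once downloaded, the ONNX file is passed directly to FastDeploy's CenterFace loader. As a quick orientation, below is a minimal C++ loading sketch; it simply mirrors the full example under the `cpp` directory, and the file name is the one from the table above:

```c++
// Minimal loading sketch; see the cpp/ and python/ examples for full deployment code.
#include <iostream>
#include "fastdeploy/vision.h"

int main() {
  // Only the ONNX file is needed; the params_file argument stays empty for ONNX models
  auto model = fastdeploy::vision::facedet::CenterFace("CenterFace.onnx");
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize CenterFace." << std::endl;
    return -1;
  }
  std::cout << "CenterFace.onnx loaded and ready for Predict()." << std::endl;
  return 0;
}
```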


## Detailed Deployment Documents

- [Python Deployment](python)
- [C++ Deployment](cpp)

## Release Note

- The document and code are based on [CenterFace](https://github.com/Star-Clouds/CenterFace.git)
24 changes: 24 additions & 0 deletions examples/vision/facedet/centerface/README_CN.md
@@ -0,0 +1,24 @@
[English](README.md) | 简体中文
# CenterFace Ready-to-deploy Model

- The CenterFace deployment model comes from the [CenterFace](https://github.com/Star-Clouds/CenterFace.git) repository and its [pre-trained model based on WIDER FACE](https://github.com/Star-Clouds/CenterFace.git)
  - (1) The *.onnx models provided by the [official repository](https://github.com/Star-Clouds/CenterFace.git) can be deployed directly;
  - (2) Since the CenterFace training code is not open source, developers cannot train the model on their own data.


## Download Pre-trained ONNX Model

For developers' testing, the exported CenterFace model is provided below and can be downloaded directly. (The accuracy figures in the following table come from the official source repository, evaluated on the WIDER FACE test set.)
| Model | Size | Accuracy (Easy Set, Medium Set, Hard Set) | Note |
|:---------------------------------------------------------------- |:----- |:----- |:---- |
| [CenterFace](https://bj.bcebos.com/paddlehub/fastdeploy/CenterFace.onnx) | 7.2MB | 93.2%, 92.1%, 87.3% | This model file is sourced from [CenterFace](https://github.com/Star-Clouds/CenterFace.git), MIT license |


## Detailed Deployment Documents

- [Python Deployment](python)
- [C++ Deployment](cpp)

## Release Note

- This version of the document and code is based on [CenterFace](https://github.com/Star-Clouds/CenterFace.git)
14 changes: 14 additions & 0 deletions examples/vision/facedet/centerface/cpp/CMakeLists.txt
@@ -0,0 +1,14 @@
PROJECT(infer_demo C CXX)
CMAKE_MINIMUM_REQUIRED (VERSION 3.10)

# Specifies the path to the fastdeploy library after you have downloaded it
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")

include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)

# Include the FastDeploy dependency header file
include_directories(${FASTDEPLOY_INCS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
# Add the FastDeploy library dependency
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
78 changes: 78 additions & 0 deletions examples/vision/facedet/centerface/cpp/README.md
@@ -0,0 +1,78 @@
English | [简体中文](README_CN.md)
# CenterFace C++ Deployment Example

This directory provides an example in which `infer.cc` quickly finishes the deployment of CenterFace on CPU/GPU, as well as on GPU with TensorRT acceleration.

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements. Please refer to [FastDeploy Environment Requirements](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and sample code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/en/build_and_install/download_prebuilt_libraries.md)

Taking CPU inference on Linux as an example, run the following commands in this directory to complete the compilation test.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz # x.x.x > 1.0.4
tar xvf fastdeploy-linux-x64-x.x.x.tgz # x.x.x > 1.0.4
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x # x.x.x > 1.0.4
make -j

# Download the official converted CenterFace model files and test images
wget https://raw.githubusercontent.com/DefTruth/lite.ai.toolkit/main/examples/lite/resources/test_lite_face_detector_3.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/CenterFace.onnx

# Use CenterFace.onnx model
# CPU inference
./infer_demo CenterFace.onnx test_lite_face_detector_3.jpg 0
# GPU inference
./infer_demo CenterFace.onnx test_lite_face_detector_3.jpg 1
# TensorRT inference on GPU
./infer_demo CenterFace.onnx test_lite_face_detector_3.jpg 2
```

The visualized result after running is as follows

<img width="640" src="https://user-images.githubusercontent.com/44280887/215670067-e14b5205-e303-4c3a-9812-be4a81173dc6.jpg">

The above commands only work on Linux or MacOS. For how to use the SDK on Windows, refer to:
- [How to use FastDeploy C++ SDK in Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

## CenterFace C++ Interface

### CenterFace Class

```c++
fastdeploy::vision::facedet::CenterFace(
    const string& model_file,
    const string& params_file = "",
    const RuntimeOption& runtime_option = RuntimeOption(),
    const ModelFormat& model_format = ModelFormat::ONNX)
```
CenterFace model loading and initialization, where model_file is the exported ONNX model.

**Parameter**
> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. Pass an empty string when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. The default is None, which means the default configuration is used
> * **model_format**(ModelFormat): Model format. ONNX format by default

#### Predict Function
> ```c++
> CenterFace::Predict(cv::Mat* im, FaceDetectionResult* result)
> ```
>
> Model prediction interface. Takes an input image and returns detection results directly (see the usage sketch below).
>
> **Parameter**
>
> > * **im**: Input image, which must be in HWC, BGR format
> > * **result**: Detection results, including the detection boxes and the confidence of each box. Refer to [Vision Model Prediction Result](../../../../../docs/api/vision_results/) for the description of FaceDetectionResult
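
As a quick reference, the following minimal sketch loads the downloaded `CenterFace.onnx` and runs prediction on CPU; it mirrors the `CpuInfer` function in `infer.cc` from this directory, and the model/image filenames are simply the ones used in the commands above:

```c++
// Minimal CPU usage sketch, mirroring CpuInfer in infer.cc.
#include <iostream>
#include "fastdeploy/vision.h"

int main() {
  // Load the ONNX model; params_file stays empty for ONNX models
  auto model = fastdeploy::vision::facedet::CenterFace("CenterFace.onnx");
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return -1;
  }

  // Read the test image (HWC, BGR) and run face detection
  auto im = cv::imread("test_lite_face_detector_3.jpg");
  fastdeploy::vision::FaceDetectionResult res;
  if (!model.Predict(im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return -1;
  }

  // Print boxes, scores and landmarks, then save a visualization
  std::cout << res.Str() << std::endl;
  auto vis_im = fastdeploy::vision::VisFaceDetection(im, res);
  cv::imwrite("vis_result.jpg", vis_im);
  return 0;
}
```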
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
77 changes: 77 additions & 0 deletions examples/vision/facedet/centerface/cpp/README_CN.md
@@ -0,0 +1,77 @@
# CenterFace C++ Deployment Example

This directory provides an example in which `infer.cc` quickly finishes the deployment of CenterFace on CPU/GPU, as well as on GPU with TensorRT acceleration.

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements. Refer to [FastDeploy Environment Requirements](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)
- 2. Download the precompiled deployment library and sample code according to your development environment. Refer to [FastDeploy Precompiled Library](../../../../../docs/cn/build_and_install/download_prebuilt_libraries.md)

Taking CPU inference on Linux as an example, run the following commands in this directory to complete the compilation test.

```bash
mkdir build
cd build
# Download the FastDeploy precompiled library. Users can choose the appropriate version from the `FastDeploy Precompiled Library` mentioned above
wget https://bj.bcebos.com/fastdeploy/release/cpp/fastdeploy-linux-x64-x.x.x.tgz # x.x.x > 1.0.4
tar xvf fastdeploy-linux-x64-x.x.x.tgz # x.x.x > 1.0.4
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-x.x.x # x.x.x > 1.0.4
make -j

# Download the official converted CenterFace model files and test images
wget https://raw.githubusercontent.com/DefTruth/lite.ai.toolkit/main/examples/lite/resources/test_lite_face_detector_3.jpg
wget https://bj.bcebos.com/paddlehub/fastdeploy/CenterFace.onnx

# Use the CenterFace.onnx model
# CPU inference
./infer_demo CenterFace.onnx test_lite_face_detector_3.jpg 0
# GPU inference
./infer_demo CenterFace.onnx test_lite_face_detector_3.jpg 1
# TensorRT inference on GPU
./infer_demo CenterFace.onnx test_lite_face_detector_3.jpg 2
```

The visualized result after running is shown in the figure below

<img width="640" src="https://user-images.githubusercontent.com/44280887/215670067-e14b5205-e303-4c3a-9812-be4a81173dc6.jpg">

The above commands only work on Linux or MacOS. For how to use the SDK on Windows, refer to:
- [How to use the FastDeploy C++ SDK on Windows](../../../../../docs/cn/faq/use_sdk_on_windows.md)

## CenterFace C++ Interface

### CenterFace Class

```c++
fastdeploy::vision::facedet::CenterFace(
    const string& model_file,
    const string& params_file = "",
    const RuntimeOption& runtime_option = RuntimeOption(),
    const ModelFormat& model_format = ModelFormat::ONNX)
```
CenterFace model loading and initialization, where model_file is the exported ONNX model.

**Parameter**
> * **model_file**(str): Model file path
> * **params_file**(str): Parameter file path. Pass an empty string when the model is in ONNX format
> * **runtime_option**(RuntimeOption): Backend inference configuration. The default is None, which means the default configuration is used
> * **model_format**(ModelFormat): Model format. ONNX format by default

#### Predict Function
> ```c++
> CenterFace::Predict(cv::Mat* im, FaceDetectionResult* result)
> ```
>
> Model prediction interface. Takes an input image and returns detection results directly (see the usage sketch below).
>
> **Parameter**
>
> > * **im**: Input image, which must be in HWC, BGR format
> > * **result**: Detection results, including the detection boxes and the confidence of each box. Refer to [Vision Model Prediction Result](../../../../../docs/api/vision_results/) for the description of FaceDetectionResult
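
As a quick reference, the following minimal sketch enables GPU inference through `RuntimeOption` and then runs prediction; it mirrors the `GpuInfer` function in `infer.cc` from this directory, and the filenames are simply the ones used in the commands above:

```c++
// Minimal GPU usage sketch, mirroring GpuInfer in infer.cc.
#include <iostream>
#include "fastdeploy/vision.h"

int main() {
  // Select the GPU backend; add option.UseTrtBackend() to enable TensorRT, as in TrtInfer
  auto option = fastdeploy::RuntimeOption();
  option.UseGpu();

  auto model = fastdeploy::vision::facedet::CenterFace("CenterFace.onnx", "", option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return -1;
  }

  auto im = cv::imread("test_lite_face_detector_3.jpg");
  fastdeploy::vision::FaceDetectionResult res;
  if (!model.Predict(im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return -1;
  }

  // Save the visualized detections
  auto vis_im = fastdeploy::vision::VisFaceDetection(im, res);
  cv::imwrite("vis_result.jpg", vis_im);
  return 0;
}
```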
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
105 changes: 105 additions & 0 deletions examples/vision/facedet/centerface/cpp/infer.cc
@@ -0,0 +1,105 @@
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "fastdeploy/vision.h"

void CpuInfer(const std::string& model_file, const std::string& image_file) {
  auto model = fastdeploy::vision::facedet::CenterFace(model_file);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);

  fastdeploy::vision::FaceDetectionResult res;
  if (!model.Predict(im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
  std::cout << res.Str() << std::endl;

  auto vis_im = fastdeploy::vision::VisFaceDetection(im, res);
  cv::imwrite("vis_result.jpg", vis_im);
  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
}

void GpuInfer(const std::string& model_file, const std::string& image_file) {
  auto option = fastdeploy::RuntimeOption();
  option.UseGpu();
  auto model = fastdeploy::vision::facedet::CenterFace(model_file, "", option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);

  fastdeploy::vision::FaceDetectionResult res;
  if (!model.Predict(im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
  std::cout << res.Str() << std::endl;

  auto vis_im = fastdeploy::vision::VisFaceDetection(im, res);
  cv::imwrite("vis_result.jpg", vis_im);
  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
}

void TrtInfer(const std::string& model_file, const std::string& image_file) {
  auto option = fastdeploy::RuntimeOption();
  option.UseGpu();
  option.UseTrtBackend();
  option.SetTrtInputShape("images", {1, 3, 640, 640});
  auto model = fastdeploy::vision::facedet::CenterFace(model_file, "", option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);

  fastdeploy::vision::FaceDetectionResult res;
  if (!model.Predict(im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }
  std::cout << res.Str() << std::endl;

  auto vis_im = fastdeploy::vision::VisFaceDetection(im, res);
  cv::imwrite("vis_result.jpg", vis_im);
  std::cout << "Visualized result saved in ./vis_result.jpg" << std::endl;
}

int main(int argc, char* argv[]) {
  if (argc < 4) {
    std::cout << "Usage: infer_demo path/to/model path/to/image run_option, "
                 "e.g. ./infer_demo CenterFace.onnx ./test.jpeg 0"
              << std::endl;
    std::cout << "The data type of run_option is int, 0: run with cpu; 1: run "
                 "with gpu; 2: run with gpu and use tensorrt backend."
              << std::endl;
    return -1;
  }

  if (std::atoi(argv[3]) == 0) {
    CpuInfer(argv[1], argv[2]);
  } else if (std::atoi(argv[3]) == 1) {
    GpuInfer(argv[1], argv[2]);
  } else if (std::atoi(argv[3]) == 2) {
    TrtInfer(argv[1], argv[2]);
  }
  return 0;
}