upload emotion_ferplus and ultraface int8 models
Signed-off-by: yuwenzho <[email protected]>
yuwenzho committed Jun 28, 2023
1 parent e49c41d commit 43dfa3b
Showing 6 changed files with 80 additions and 1 deletion.
31 changes: 31 additions & 0 deletions vision/body_analysis/emotion_ferplus/README.md
@@ -12,6 +12,7 @@ This model is a deep convolutional neural network for emotion recognition in faces.
|Emotion FERPlus |[34 MB](model/emotion-ferplus-2.onnx)|[31 MB](model/emotion-ferplus-2.tar.gz)|1.0|2|
|Emotion FERPlus |[34 MB](model/emotion-ferplus-7.onnx)|[31 MB](model/emotion-ferplus-7.tar.gz)|1.2|7|
|Emotion FERPlus |[34 MB](model/emotion-ferplus-8.onnx)|[31 MB](model/emotion-ferplus-8.tar.gz)|1.3|8|
|Emotion FERPlus int8 |[19 MB](model/emotion-ferplus-12-int8.onnx)|[18 MB](model/emotion-ferplus-12-int8.tar.gz)|1.14|12|

### Paper
"Training Deep Networks for Facial Expression Recognition with Crowd-Sourced Label Distribution" [arXiv:1608.01041](https://arxiv.org/abs/1608.01041)
@@ -69,5 +70,35 @@ def postprocess(scores):
Sets of sample input and output files are provided in
* serialized protobuf TensorProtos (`.pb`), which are stored in the folders `test_data_set_*/`.
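
For reference, these protobufs can be loaded with the `onnx` package. A minimal sketch, assuming the conventional `test_data_set_0/input_0.pb` file naming:

```python
import onnx
from onnx import numpy_helper

# Parse one serialized TensorProto and convert it to a NumPy array.
tensor = onnx.TensorProto()
with open('test_data_set_0/input_0.pb', 'rb') as f:
    tensor.ParseFromString(f.read())
input_array = numpy_helper.to_array(tensor)
print(input_array.shape)
```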

## Quantization
Emotion FERPlus int8 is obtained by quantizing the fp32 Emotion FERPlus model. We use [Intel® Neural Compressor](https://github.com/intel/neural-compressor) with the onnxruntime backend to perform quantization. See the [instructions](https://github.com/intel/neural-compressor/blob/master/examples/onnxrt/body_analysis/onnx_model_zoo/emotion_ferplus/quantization/ptq_static/README.md) for how to use Intel® Neural Compressor for quantization.


### Prepare Model
Download the fp32 model from the [ONNX Model Zoo](https://github.com/onnx/models).

```shell
wget https://github.com/onnx/models/raw/main/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-8.onnx
```

Convert the model's opset version to 12 to enable broader quantization support.

```python
import onnx
from onnx import version_converter

# Upgrade the model from opset 8 to opset 12.
model = onnx.load('emotion-ferplus-8.onnx')
model = version_converter.convert_version(model, 12)
onnx.save_model(model, 'emotion-ferplus-12.onnx')
```
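
It can be worth sanity-checking the converted model before quantization; a small sketch using the standard `onnx` checker:

```python
import onnx

model = onnx.load('emotion-ferplus-12.onnx')
# Validate the converted graph and confirm the default-domain opset is now 12.
onnx.checker.check_model(model)
print([op.version for op in model.opset_import if op.domain in ('', 'ai.onnx')])
```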

### Quantize Model

```bash
cd neural-compressor/examples/onnxrt/body_analysis/onnx_model_zoo/emotion_ferplus/quantization/ptq_static
# --input_model: path to the fp32 model (*.onnx)
bash run_tuning.sh --input_model=path/to/model \
                   --dataset_location=/path/to/data \
                   --output_model=path/to/save
```
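
Once quantized, the int8 model runs through ONNX Runtime like the fp32 one. A minimal sketch, assuming the model's 1x1x64x64 grayscale input and using random data as a placeholder:

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession('emotion-ferplus-12-int8.onnx')
input_name = session.get_inputs()[0].name
# Placeholder for a preprocessed 64x64 grayscale face crop.
face = np.random.rand(1, 1, 64, 64).astype(np.float32)
scores = session.run(None, {input_name: face})[0]
print(scores.shape)  # one raw score per emotion class
```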

## License
MIT
Git LFS file not shown
Git LFS file not shown
38 changes: 37 additions & 1 deletion vision/body_analysis/ultraface/README.md
@@ -10,6 +10,7 @@ This model is a lightweight face detection model designed for edge computing devices.
| ------------- | ------------- | ------------- | ------------- | ------------- |
|version-RFB-320| [1.21 MB](models/version-RFB-320.onnx) | [1.92 MB](models/version-RFB-320.tar.gz) | 1.4 | 9 |
|version-RFB-640| [1.51 MB](models/version-RFB-640.onnx) | [4.59 MB](models/version-RFB-640.tar.gz) | 1.4 | 9 |
|version-RFB-320-int8| [0.44 MB](models/version-RFB-320-int8.onnx) | [1.2 MB](models/version-RFB-320-int8.tar.gz) | 1.14 | 12 |

### Dataset
The training set is the VOC format data set generated by using the cleaned widerface labels provided by [Retinaface](https://arxiv.org/pdf/1905.00641.pdf) in conjunction with the widerface [dataset](http://shuoyang1213.me/WIDERFACE/).
@@ -43,8 +44,43 @@ The model outputs two arrays `(1 x 4420 x 2)` and `(1 x 4420 x 4)` of scores and boxes.
### Postprocessing
In postprocessing, threshold filtration and [non-max suppression](dependencies/box_utils.py) are applied to the scores and boxes arrays.
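
The reference helpers live in [box_utils.py](dependencies/box_utils.py); as a rough illustration of the thresholding step only (the 0.7 threshold here is an arbitrary example value):

```python
import numpy as np

def filter_candidates(scores, boxes, prob_threshold=0.7):
    # scores: (1, 4420, 2) class probabilities; boxes: (1, 4420, 4) box coordinates.
    face_probs = scores[0, :, 1]      # column 1 holds the face-class probability
    mask = face_probs > prob_threshold
    # Non-max suppression (dependencies/box_utils.py) is then applied to the survivors.
    return face_probs[mask], boxes[0][mask]
```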


## Quantization
version-RFB-320-int8 is obtained by quantizing the fp32 version-RFB-320 model. We use [Intel® Neural Compressor](https://github.com/intel/neural-compressor) with the onnxruntime backend to perform quantization. See the [instructions](https://github.com/intel/neural-compressor/blob/master/examples/onnxrt/body_analysis/onnx_model_zoo/ultraface/quantization/ptq_static/README.md) for how to use Intel® Neural Compressor for quantization.


### Prepare Model
Download the fp32 model from the [ONNX Model Zoo](https://github.com/onnx/models).

```shell
wget https://github.com/onnx/models/raw/main/vision/body_analysis/ultraface/models/version-RFB-320.onnx
```

Convert the model's opset version to 12 to enable broader quantization support.

```python
import onnx
from onnx import version_converter

# Upgrade the model from opset 9 to opset 12.
model = onnx.load('version-RFB-320.onnx')
model = version_converter.convert_version(model, 12)
onnx.save_model(model, 'version-RFB-320-12.onnx')
```

### Quantize Model

```bash
cd neural-compressor/examples/onnxrt/body_analysis/onnx_model_zoo/ultraface/quantization/ptq_static
# --input_model: path to the fp32 model (*.onnx)
bash run_tuning.sh --input_model=path/to/model \
                   --dataset_location=/path/to/data \
                   --output_model=path/to/save
```
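
After quantization, the int8 model can be exercised through ONNX Runtime. A minimal sketch, assuming a preprocessed 1x3x240x320 input tensor (random data here as a placeholder):

```python
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession('version-RFB-320-int8.onnx')
input_name = session.get_inputs()[0].name
# Placeholder for a preprocessed 320x240 RGB image in NCHW layout.
image = np.random.rand(1, 3, 240, 320).astype(np.float32)
scores, boxes = session.run(None, {input_name: image})
print(scores.shape, boxes.shape)  # (1, 4420, 2) and (1, 4420, 4)
```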

## Contributors
* Valery Asiryan ([asiryan](https://github.com/asiryan))
* [yuwenzho](https://github.com/yuwenzho) (Intel)
* [ftian1](https://github.com/ftian1) (Intel)
* [hshen14](https://github.com/hshen14) (Intel)

## License
MIT
Git LFS file not shown
Git LFS file not shown
