diff --git a/vision/body_analysis/emotion_ferplus/README.md b/vision/body_analysis/emotion_ferplus/README.md
index e351a4410..5307fef63 100644
--- a/vision/body_analysis/emotion_ferplus/README.md
+++ b/vision/body_analysis/emotion_ferplus/README.md
@@ -12,6 +12,7 @@ This model is a deep convolutional neural network for emotion recognition in fac
 |Emotion FERPlus |[34 MB](model/emotion-ferplus-2.onnx)|[31 MB](model/emotion-ferplus-2.tar.gz)|1.0|2|
 |Emotion FERPlus |[34 MB](model/emotion-ferplus-7.onnx)|[31 MB](model/emotion-ferplus-7.tar.gz)|1.2|7|
 |Emotion FERPlus |[34 MB](model/emotion-ferplus-8.onnx)|[31 MB](model/emotion-ferplus-8.tar.gz)|1.3|8|
+|Emotion FERPlus int8 |[19 MB](model/emotion-ferplus-12-int8.onnx)|[18 MB](model/emotion-ferplus-12-int8.tar.gz)|1.14|12|
 ### Paper
 "Training Deep Networks for Facial Expression Recognition with Crowd-Sourced Label Distribution" [arXiv:1608.01041](https://arxiv.org/abs/1608.01041)
@@ -69,5 +70,35 @@ def postprocess(scores):
 Sets of sample input and output files are provided in
 * serialized protobuf TensorProtos (`.pb`), which are stored in the folders `test_data_set_*/`.
+## Quantization
+Emotion FERPlus int8 is obtained by quantizing the fp32 Emotion FERPlus model. We use [Intel® Neural Compressor](https://github.com/intel/neural-compressor) with the onnxruntime backend to perform quantization. See the [instructions](https://github.com/intel/neural-compressor/blob/master/examples/onnxrt/body_analysis/onnx_model_zoo/emotion_ferplus/quantization/ptq_static/README.md) for how to use Intel® Neural Compressor for quantization.
+
+
+### Prepare Model
+Download the model from the [ONNX Model Zoo](https://github.com/onnx/models).
+
+```shell
+wget https://github.com/onnx/models/raw/main/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-8.onnx
+```
+
+Convert the opset version to 12 for broader quantization support.
+
+```python
+import onnx
+from onnx import version_converter
+model = onnx.load('emotion-ferplus-8.onnx')
+model = version_converter.convert_version(model, 12)
+onnx.save_model(model, 'emotion-ferplus-12.onnx')
+```
+
+### Quantize Model
+
+```bash
+cd neural-compressor/examples/onnxrt/body_analysis/onnx_model_zoo/emotion_ferplus/quantization/ptq_static
+# input_model: model path as *.onnx
+bash run_tuning.sh --input_model=path/to/model \
+                   --dataset_location=/path/to/data \
+                   --output_model=path/to/save
+```
+
 ## License
 MIT
diff --git a/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-12-int8.onnx b/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-12-int8.onnx
new file mode 100644
index 000000000..8fedaef3d
--- /dev/null
+++ b/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-12-int8.onnx
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3e47195d79e9593294df9e81a6d296a1e10969b68a717284081c29493a0ff5f1
+size 19300656
diff --git a/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-12-int8.tar.gz b/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-12-int8.tar.gz
new file mode 100644
index 000000000..c24d93338
--- /dev/null
+++ b/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-12-int8.tar.gz
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:429cd3e9cdfc20330b76361c65e7aaeb4298b9edcd29282fd6f63b95aee00983
+size 18119569
diff --git a/vision/body_analysis/ultraface/README.md b/vision/body_analysis/ultraface/README.md
index bbd06e320..6e1dd64d6 100644
--- a/vision/body_analysis/ultraface/README.md
+++ b/vision/body_analysis/ultraface/README.md
@@ -10,6 +10,7 @@ This model is a lightweight facedetection model designed for edge computing devi
 | ------------- | ------------- | ------------- | ------------- | ------------- |
 |version-RFB-320| [1.21 MB](models/version-RFB-320.onnx) | [1.92 MB](models/version-RFB-320.tar.gz) | 1.4 | 9 |
 |version-RFB-640| [1.51 MB](models/version-RFB-640.onnx) | [4.59 MB](models/version-RFB-640.tar.gz) | 1.4 | 9 |
+|version-RFB-320-int8| [0.44 MB](models/version-RFB-320-int8.onnx) | [1.2 MB](models/version-RFB-320-int8.tar.gz) | 1.14 | 12 |
 ### Dataset
 The training set is the VOC format data set generated by using the cleaned widerface labels provided by [Retinaface](https://arxiv.org/pdf/1905.00641.pdf) in conjunction with the widerface [dataset](http://shuoyang1213.me/WIDERFACE/).
@@ -43,8 +44,43 @@ The model outputs two arrays `(1 x 4420 x 2)` and `(1 x 4420 x 4)` of scores and
 ### Postprocessing
 In postprocessing, threshold filtration and [non-max suppression](dependencies/box_utils.py) are applied to the scores and boxes arrays.
+
+## Quantization
+version-RFB-320-int8 is obtained by quantizing the fp32 version-RFB-320 model. We use [Intel® Neural Compressor](https://github.com/intel/neural-compressor) with the onnxruntime backend to perform quantization. See the [instructions](https://github.com/intel/neural-compressor/blob/master/examples/onnxrt/body_analysis/onnx_model_zoo/ultraface/quantization/ptq_static/README.md) for how to use Intel® Neural Compressor for quantization.
+
+
+### Prepare Model
+Download the model from the [ONNX Model Zoo](https://github.com/onnx/models).
+
+```shell
+wget https://github.com/onnx/models/raw/main/vision/body_analysis/ultraface/models/version-RFB-320.onnx
+```
+
+Convert the opset version to 12 for broader quantization support.
+
+```python
+import onnx
+from onnx import version_converter
+model = onnx.load('version-RFB-320.onnx')
+model = version_converter.convert_version(model, 12)
+onnx.save_model(model, 'version-RFB-320-12.onnx')
+```
+
+### Quantize Model
+
+```bash
+cd neural-compressor/examples/onnxrt/body_analysis/onnx_model_zoo/ultraface/quantization/ptq_static
+# input_model: model path as *.onnx
+bash run_tuning.sh --input_model=path/to/model \
+                   --dataset_location=/path/to/data \
+                   --output_model=path/to/save
+```
+
 ## Contributors
-Valery Asiryan ([asiryan](https://github.com/asiryan))
+
+* [asiryan](https://github.com/asiryan)
+* [yuwenzho](https://github.com/yuwenzho) (Intel)
+* [ftian1](https://github.com/ftian1) (Intel)
+* [hshen14](https://github.com/hshen14) (Intel)
 ## License
 MIT
diff --git a/vision/body_analysis/ultraface/models/version-RFB-320-int8.onnx b/vision/body_analysis/ultraface/models/version-RFB-320-int8.onnx
new file mode 100644
index 000000000..5b26e1ff7
--- /dev/null
+++ b/vision/body_analysis/ultraface/models/version-RFB-320-int8.onnx
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:093a9a55ff05fd71ace744593fd0c81eab8306d6d8c38b0892c5b3cff7f08265
+size 458144
diff --git a/vision/body_analysis/ultraface/models/version-RFB-320-int8.tar.gz b/vision/body_analysis/ultraface/models/version-RFB-320-int8.tar.gz
new file mode 100644
index 000000000..36687ad16
--- /dev/null
+++ b/vision/body_analysis/ultraface/models/version-RFB-320-int8.tar.gz
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a0010e6a58f9e4ca553efbd8d9cf539c8846fbd56e17e10c8b9aebc9f4d625d0
+size 1211441
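
A note for reviewers trying the quantized emotion model: the int8 graph keeps the fp32 model's output contract (a `(1 x 8)` array of emotion scores), so the `postprocess(scores)` step referenced in the emotion_ferplus README applies unchanged. A minimal sketch of that softmax postprocessing; the emotion label order follows the FER+ README, and the sample scores here are made up:

```python
import numpy as np

# Emotion classes in FER+ output order (per the emotion_ferplus README).
EMOTION_TABLE = ['neutral', 'happiness', 'surprise', 'sadness',
                 'anger', 'disgust', 'fear', 'contempt']

def softmax(scores):
    # Subtract the max score for numerical stability before exponentiating.
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

# Made-up raw scores, as if taken from the model's (1 x 8) output.
scores = np.array([1.0, 4.0, 0.5, 0.2, 0.1, 0.0, 0.3, 0.2])
probs = softmax(scores)
print(EMOTION_TABLE[int(np.argmax(probs))])  # -> happiness
```

Since softmax is monotonic, the predicted class is just the argmax of the raw scores; the softmax is only needed when calibrated probabilities are wanted.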