Add emotion_ferplus and ultraface int8 models (#618)
* upload emotion_ferplus and ultraface int8 models

Signed-off-by: yuwenzho <[email protected]>

* update ONNX_HUB_MANIFEST

Signed-off-by: yuwenzho <[email protected]>

---------

Signed-off-by: yuwenzho <[email protected]>
yuwenzho authored Jul 12, 2023
1 parent e49c41d commit 69c5d37
Showing 7 changed files with 174 additions and 1 deletion.
94 changes: 94 additions & 0 deletions ONNX_HUB_MANIFEST.json
@@ -1073,6 +1073,48 @@
            "model_with_data_bytes": 237272167
        }
    },
    {
        "model": "Emotion FERPlus int8",
        "model_path": "vision/body_analysis/emotion_ferplus/model/emotion-ferplus-12-int8.onnx",
        "onnx_version": "1.14",
        "opset_version": 12,
        "metadata": {
            "model_sha": "3e47195d79e9593294df9e81a6d296a1e10969b68a717284081c29493a0ff5f1",
            "model_bytes": 19300656,
            "tags": [
                "vision",
                "body analysis",
                "emotion ferplus"
            ],
            "io_ports": {
                "inputs": [
                    {
                        "name": "Input3",
                        "shape": [
                            1,
                            1,
                            64,
                            64
                        ],
                        "type": "tensor(float)"
                    }
                ],
                "outputs": [
                    {
                        "name": "Plus692_Output_0",
                        "shape": [
                            1,
                            8
                        ],
                        "type": "tensor(float)"
                    }
                ]
            },
            "model_with_data_path": "vision/body_analysis/emotion_ferplus/model/emotion-ferplus-12-int8.tar.gz",
            "model_with_data_sha": "429cd3e9cdfc20330b76361c65e7aaeb4298b9edcd29282fd6f63b95aee00983",
            "model_with_data_bytes": 18119569
        }
    },
    {
        "model": "Emotion FERPlus",
        "model_path": "vision/body_analysis/emotion_ferplus/model/emotion-ferplus-2.onnx",
@@ -1175,6 +1217,58 @@
            "model_with_data_bytes": 32384240
        }
    },
    {
        "model": "version-RFB-320-int8",
        "model_path": "vision/body_analysis/ultraface/models/version-RFB-320-int8.onnx",
        "onnx_version": "1.14",
        "opset_version": 12,
        "metadata": {
            "model_sha": "093a9a55ff05fd71ace744593fd0c81eab8306d6d8c38b0892c5b3cff7f08265",
            "model_bytes": 458144,
            "tags": [
                "vision",
                "body analysis",
                "ultraface"
            ],
            "io_ports": {
                "inputs": [
                    {
                        "name": "input",
                        "shape": [
                            1,
                            3,
                            240,
                            320
                        ],
                        "type": "tensor(float)"
                    }
                ],
                "outputs": [
                    {
                        "name": "scores",
                        "shape": [
                            1,
                            4420,
                            2
                        ],
                        "type": "tensor(float)"
                    },
                    {
                        "name": "boxes",
                        "shape": [
                            1,
                            4420,
                            4
                        ],
                        "type": "tensor(float)"
                    }
                ]
            },
            "model_with_data_path": "vision/body_analysis/ultraface/models/version-RFB-320-int8.tar.gz",
            "model_with_data_sha": "a0010e6a58f9e4ca553efbd8d9cf539c8846fbd56e17e10c8b9aebc9f4d625d0",
            "model_with_data_bytes": 1211441
        }
    },
    {
        "model": "version-RFB-320",
        "model_path": "vision/body_analysis/ultraface/models/version-RFB-320.onnx",
31 changes: 31 additions & 0 deletions vision/body_analysis/emotion_ferplus/README.md
@@ -12,6 +12,7 @@ This model is a deep convolutional neural network for emotion recognition in faces.
|Emotion FERPlus |[34 MB](model/emotion-ferplus-2.onnx)|[31 MB](model/emotion-ferplus-2.tar.gz)|1.0|2|
|Emotion FERPlus |[34 MB](model/emotion-ferplus-7.onnx)|[31 MB](model/emotion-ferplus-7.tar.gz)|1.2|7|
|Emotion FERPlus |[34 MB](model/emotion-ferplus-8.onnx)|[31 MB](model/emotion-ferplus-8.tar.gz)|1.3|8|
|Emotion FERPlus int8 |[19 MB](model/emotion-ferplus-12-int8.onnx)|[18 MB](model/emotion-ferplus-12-int8.tar.gz)|1.14|12|

### Paper
"Training Deep Networks for Facial Expression Recognition with Crowd-Sourced Label Distribution" [arXiv:1608.01041](https://arxiv.org/abs/1608.01041)
@@ -69,5 +70,35 @@ def postprocess(scores):
Sets of sample input and output files are provided in
* serialized protobuf TensorProtos (`.pb`), which are stored in the folders `test_data_set_*/`.
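
For reference, a serialized sample can be loaded back into a NumPy array with `onnx.numpy_helper`; the sketch below is illustrative only, and the file name inside `test_data_set_0/` is an assumption.

```python
# Minimal sketch: load one serialized TensorProto sample and convert it to NumPy.
# The file name "test_data_set_0/input_0.pb" is assumed for illustration.
import onnx
from onnx import numpy_helper

tensor = onnx.TensorProto()
with open("test_data_set_0/input_0.pb", "rb") as f:
    tensor.ParseFromString(f.read())

input_array = numpy_helper.to_array(tensor)
print(input_array.shape, input_array.dtype)  # e.g. (1, 1, 64, 64) float32 for this model
```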

## Quantization
Emotion FERPlus int8 is obtained by quantizing the fp32 Emotion FERPlus model. We use [Intel® Neural Compressor](https://github.com/intel/neural-compressor) with the onnxruntime backend to perform the quantization. See the [instructions](https://github.com/intel/neural-compressor/blob/master/examples/onnxrt/body_analysis/onnx_model_zoo/emotion_ferplus/quantization/ptq_static/README.md) for how to use Intel® Neural Compressor for quantization.
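
For intuition about what static post-training quantization involves, here is a minimal sketch using onnxruntime's built-in `quantize_static` API. It is not the Intel® Neural Compressor recipe used to produce the published int8 model, and the random calibration reader is a placeholder; real calibration should feed preprocessed FER+ face images shaped `1x1x64x64`.

```python
# Illustrative static PTQ with onnxruntime's quantization API -- not the
# Intel Neural Compressor recipe used for the published int8 model.
import numpy as np
from onnxruntime.quantization import CalibrationDataReader, QuantFormat, quantize_static

class RandomCalibrationReader(CalibrationDataReader):
    """Feeds a few placeholder 1x1x64x64 tensors to the model's 'Input3' input.

    Real calibration should use preprocessed FER+ face crops instead.
    """
    def __init__(self, num_samples=16):
        self._samples = iter(
            {"Input3": np.random.rand(1, 1, 64, 64).astype(np.float32)}
            for _ in range(num_samples)
        )

    def get_next(self):
        return next(self._samples, None)

quantize_static(
    "emotion-ferplus-12.onnx",       # opset-12 fp32 model from the Prepare Model step below
    "emotion-ferplus-12-int8.onnx",  # quantized output
    RandomCalibrationReader(),
    quant_format=QuantFormat.QDQ,
)
```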


### Prepare Model
Download model from [ONNX Model Zoo](https://github.com/onnx/models).

```shell
wget https://github.com/onnx/models/raw/main/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-8.onnx
```

Convert the model to opset version 12 to enable broader quantization support.

```python
import onnx
from onnx import version_converter
model = onnx.load('emotion-ferplus-8.onnx')
model = version_converter.convert_version(model, 12)
onnx.save_model(model, 'emotion-ferplus-12.onnx')
```
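
Optionally, confirm the converted opset before quantizing; this quick check assumes the file saved above.

```python
import onnx
print(onnx.load('emotion-ferplus-12.onnx').opset_import[0].version)  # expect 12
```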

### Quantize Model

```bash
cd neural-compressor/examples/onnxrt/body_analysis/onnx_model_zoo/emotion_ferplus/quantization/ptq_static
# --input_model: path to the fp32 *.onnx model
bash run_tuning.sh --input_model=path/to/model \
                   --dataset_location=/path/to/data \
                   --output_model=path/to/save
```

## License
MIT
38 changes: 37 additions & 1 deletion vision/body_analysis/ultraface/README.md
@@ -10,6 +10,7 @@ This model is a lightweight face detection model designed for edge computing devices.
| ------------- | ------------- | ------------- | ------------- | ------------- |
|version-RFB-320| [1.21 MB](models/version-RFB-320.onnx) | [1.92 MB](models/version-RFB-320.tar.gz) | 1.4 | 9 |
|version-RFB-640| [1.51 MB](models/version-RFB-640.onnx) | [4.59 MB](models/version-RFB-640.tar.gz) | 1.4 | 9 |
|version-RFB-320-int8| [0.44 MB](models/version-RFB-320-int8.onnx) | [1.2 MB](models/version-RFB-320-int8.tar.gz) | 1.14 | 12 |

### Dataset
The training set is a VOC-format dataset generated from the cleaned WIDER FACE labels provided by [Retinaface](https://arxiv.org/pdf/1905.00641.pdf) together with the WIDER FACE [dataset](http://shuoyang1213.me/WIDERFACE/).
@@ -43,8 +44,43 @@ The model outputs two arrays `(1 x 4420 x 2)` and `(1 x 4420 x 4)` of scores and boxes.
### Postprocessing
In postprocessing, threshold filtration and [non-max suppression](dependencies/box_utils.py) are applied to the scores and boxes arrays.
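
For illustration only, a minimal version of that threshold-plus-NMS step could look like the sketch below; it is not the implementation in [box_utils.py](dependencies/box_utils.py), and the score threshold, IoU threshold, and corner-form `(x1, y1, x2, y2)` box layout are assumptions.

```python
# Illustrative threshold filtration + greedy NMS over the (1 x 4420 x 2) scores
# and (1 x 4420 x 4) boxes outputs; not the repository's box_utils.py code.
import numpy as np

def filter_and_nms(scores, boxes, score_thresh=0.7, iou_thresh=0.5):
    probs = scores[0, :, 1]                    # probability of the "face" class per prior
    keep = probs > score_thresh                # threshold filtration
    probs, cand = probs[keep], boxes[0][keep]
    order = probs.argsort()[::-1]              # highest score first
    picked = []
    while order.size > 0:
        i = order[0]
        picked.append(i)
        rest = order[1:]
        # Intersection-over-union of the kept box against the remaining candidates
        x1 = np.maximum(cand[i, 0], cand[rest, 0])
        y1 = np.maximum(cand[i, 1], cand[rest, 1])
        x2 = np.minimum(cand[i, 2], cand[rest, 2])
        y2 = np.minimum(cand[i, 3], cand[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (cand[i, 2] - cand[i, 0]) * (cand[i, 3] - cand[i, 1])
        area_r = (cand[rest, 2] - cand[rest, 0]) * (cand[rest, 3] - cand[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = rest[iou < iou_thresh]         # drop boxes overlapping the kept one
    return cand[picked], probs[picked]
```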


## Quantization
version-RFB-320-int8 is obtained by quantizing the fp32 version-RFB-320 model. We use [Intel® Neural Compressor](https://github.com/intel/neural-compressor) with the onnxruntime backend to perform the quantization. See the [instructions](https://github.com/intel/neural-compressor/blob/master/examples/onnxrt/body_analysis/onnx_model_zoo/ultraface/quantization/ptq_static/README.md) for how to use Intel® Neural Compressor for quantization.


### Prepare Model
Download model from [ONNX Model Zoo](https://github.com/onnx/models).

```shell
wget https://github.com/onnx/models/raw/main/vision/body_analysis/ultraface/models/version-RFB-320.onnx
```

Convert the model to opset version 12 to enable broader quantization support.

```python
import onnx
from onnx import version_converter
model = onnx.load('version-RFB-320.onnx')
model = version_converter.convert_version(model, 12)
onnx.save_model(model, 'version-RFB-320-12.onnx')
```

### Quantize Model

```bash
cd neural-compressor/examples/onnxrt/body_analysis/onnx_model_zoo/ultraface/quantization/ptq_static
# --input_model: path to the fp32 *.onnx model
bash run_tuning.sh --input_model=path/to/model \
                   --dataset_location=/path/to/data \
                   --output_model=path/to/save
```
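
As a quick sanity check after quantization (not part of the official workflow), the int8 model can be run side by side with the fp32 model in onnxruntime; the sketch below uses assumed file names, and a random input only verifies output shapes and rough score agreement.

```python
# Rough sanity check: run the fp32 and int8 UltraFace models on the same input
# and compare output shapes and score deviation. File names are assumptions.
import numpy as np
import onnxruntime as ort

x = np.random.rand(1, 3, 240, 320).astype(np.float32)  # placeholder for a preprocessed frame

fp32_sess = ort.InferenceSession("version-RFB-320.onnx", providers=["CPUExecutionProvider"])
int8_sess = ort.InferenceSession("version-RFB-320-int8.onnx", providers=["CPUExecutionProvider"])

scores_fp32, boxes_fp32 = fp32_sess.run(None, {"input": x})
scores_int8, boxes_int8 = int8_sess.run(None, {"input": x})

assert scores_int8.shape == (1, 4420, 2) and boxes_int8.shape == (1, 4420, 4)
print("max |score diff| fp32 vs int8:", np.abs(scores_fp32 - scores_int8).max())
```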

## Contributors
Valery Asiryan ([asiryan](https://github.com/asiryan))

* [asiryan](https://github.com/asiryan)
* [yuwenzho](https://github.com/yuwenzho) (Intel)
* [ftian1](https://github.com/ftian1) (Intel)
* [hshen14](https://github.com/hshen14) (Intel)

## License
MIT
