This tutorial demonstrates how to apply INT8 quantization to the Natural Language Processing model BERT, using the Post-Training Optimization Tool (POT) API, which is part of OpenVINO. We will use a HuggingFace BERT PyTorch model fine-tuned for the Microsoft Research Paraphrase Corpus (MRPC) task. The tutorial code is designed to be extendable to custom models and datasets. It consists of the following steps:
- Download and prepare the MRPC model and dataset (see the first sketch after this list)
- Define data loading and accuracy validation functionality
- Prepare the model for quantization
- Run the optimization pipeline (see the POT sketch after this list)
- Compare the performance of the original and quantized models
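
To make the first two steps concrete, here is a minimal sketch of downloading an MRPC-fine-tuned BERT checkpoint and preparing the dataset with the HuggingFace `transformers` and `datasets` libraries. The checkpoint name `textattack/bert-base-uncased-MRPC` is an assumed publicly available stand-in, not necessarily the exact model this tutorial uses:

```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed stand-in checkpoint fine-tuned on MRPC.
MODEL_ID = "textattack/bert-base-uncased-MRPC"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)

# MRPC is part of the GLUE benchmark; the validation split serves for
# calibration and accuracy checks.
mrpc = load_dataset("glue", "mrpc", split="validation")

def encode(example):
    # Tokenize a sentence pair into fixed-length model inputs.
    return tokenizer(
        example["sentence1"],
        example["sentence2"],
        padding="max_length",
        truncation=True,
        max_length=128,
    )

encoded_dataset = mrpc.map(encode)
```

From here, the PyTorch model would be exported (for example, to ONNX) and converted to OpenVINO IR with the Model Optimizer before post-training quantization.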
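The optimization pipeline itself is driven by the POT Python API. The sketch below assumes OpenVINO 2022.x (`openvino.tools.pot`), placeholder IR file names, and the `encoded_dataset` variable from the previous sketch; an accuracy `Metric` subclass could additionally be passed to `IEEngine` for validation but is omitted for brevity:

```python
import numpy as np
from openvino.tools.pot import (DataLoader, IEEngine, create_pipeline,
                                load_model, save_model)

class MRPCDataLoader(DataLoader):
    """Feeds tokenized MRPC samples to POT, one sample per index."""

    def __init__(self, dataset):
        self._dataset = dataset

    def __len__(self):
        return len(self._dataset)

    def __getitem__(self, index):
        item = self._dataset[index]
        inputs = {
            "input_ids": np.array([item["input_ids"]]),
            "attention_mask": np.array([item["attention_mask"]]),
            "token_type_ids": np.array([item["token_type_ids"]]),
        }
        # (data, annotation) ordering per the 2022.x POT API.
        return inputs, item["label"]

# Placeholder paths for the IR produced from the fine-tuned model.
model = load_model({"model_name": "bert_mrpc",
                    "model": "bert_mrpc.xml",
                    "weights": "bert_mrpc.bin"})

engine = IEEngine(config={"device": "CPU"},
                  data_loader=MRPCDataLoader(encoded_dataset))

algorithms = [{
    "name": "DefaultQuantization",  # accuracy-agnostic INT8 quantization
    "params": {"target_device": "ANY",
               "preset": "performance",
               "stat_subset_size": 300},
}]

pipeline = create_pipeline(algorithms, engine)
quantized = pipeline.run(model)
save_model(quantized, save_path="quantized", model_name="bert_mrpc_int8")
```

The performance of the original FP32 IR and the resulting INT8 IR is then typically compared with OpenVINO's `benchmark_app` command-line tool.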