# Use POT to Quantize the yolo-v4-tiny-tf Public Model on OpenVINO 2023.1
###### tags: `POT`
## Use the OpenVINO Docker Hub image
```
docker run -it -v ~/Downloads:/mnt -u root --rm openvino/ubuntu22_dev:latest
```
## Run Accuracy Checker and POT
Run the following steps inside the openvino/ubuntu22_dev container.
#### 0. Download the COCO 2017 validation images and trainval annotations
```
cd /home/openvino
apt update
apt install unzip
mkdir coco_dataset
cd coco_dataset
curl http://images.cocodataset.org/zips/val2017.zip -o val2017.zip
unzip val2017.zip
curl http://images.cocodataset.org/annotations/annotations_trainval2017.zip -o trainval_2017.zip
unzip trainval_2017.zip
```
##### coco_dataset content
```
coco_dataset/
|-- annotations/
|   |-- captions_train2017.json
|   |-- captions_val2017.json
|   |-- instances_train2017.json
|   |-- instances_val2017.json
|   |-- person_keypoints_train2017.json
|   `-- person_keypoints_val2017.json
`-- val2017/
    |-- 000000042102.jpg
    |-- 000000060102.jpg
    |-- 000000245102.jpg
    ...
    `-- 000000364102.jpg
```
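Before moving on, it can be worth confirming the annotation file unpacked correctly. This is a minimal sanity-check sketch using only the Python standard library; the 5000-image count matches the `Total annotations size: 5000` line in the accuracy checker log further below.
```
# Optional sanity check on the downloaded annotations (standard library only).
import json

with open("/home/openvino/coco_dataset/annotations/instances_val2017.json") as f:
    coco = json.load(f)

print("images     :", len(coco["images"]))       # expect 5000 for val2017
print("annotations:", len(coco["annotations"]))
print("categories :", len(coco["categories"]))   # expect 80 object classes
```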
#### 1. Download yolo-v4-tiny-tf
```
omz_downloader --name yolo-v4-tiny-tf -o /home/openvino/openvino_models
```
#### 2. Convert yolo-v4-tiny-tf to IR
```
omz_converter --name yolo-v4-tiny-tf -d /home/openvino/openvino_models -o /home/openvino/openvino_models
```
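Optionally, the converted FP16 IR can be inspected from Python before running the accuracy checker. This is a small sketch against the OpenVINO 2023.1 runtime API; the printed names and shapes should match the `Input info` / `Output info` section of the accuracy checker log below (`image_input` with shape 1x416x416x3, outputs `conv2d_17/BiasAdd` and `conv2d_20/BiasAdd`).
```
# Optional: inspect the converted FP16 IR with the OpenVINO runtime API.
from openvino.runtime import Core

core = Core()
model = core.read_model(
    "/home/openvino/openvino_models/public/yolo-v4-tiny-tf/FP16/yolo-v4-tiny-tf.xml")

for inp in model.inputs:
    print("input :", inp.get_any_name(), inp.get_partial_shape())
for out in model.outputs:
    print("output:", out.get_any_name(), out.get_partial_shape())
```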
#### 3. Run Accuracy Checker on yolo-v4-tiny-tf
Create yolo-v4-tiny-tf-int8.yml (listed under Reference below), clone open_model_zoo to get dataset_definitions.yml, then run the checker on the FP16 IR:
```
cd /home/openvino
git clone https://github.com/openvinotoolkit/open_model_zoo.git
accuracy_check -c yolo-v4-tiny-tf-int8.yml -m /home/openvino/openvino_models/public/yolo-v4-tiny-tf/FP16 -d /home/openvino/open_model_zoo/data/dataset_definitions.yml -ss 300 -td CPU
```
#### 4. Run POT on yolo-v4-tiny-tf
Create yolo-v4-tiny-tf-int8.json (listed under Reference below) and run POT with evaluation enabled (`-e`):
```
pot -c yolo-v4-tiny-tf-int8.json -e
```
#### 5. Copy yolo-v4-tiny-tf FP16-INT8 IR
```
mkdir /home/openvino/openvino_models/public/yolo-v4-tiny-tf/FP16-INT8/
cp -ar results/yolo-v4-tiny-tf_DefaultQuantization/2023-09-27_11-21-09/optimized/* /home/openvino/openvino_models/public/yolo-v4-tiny-tf/FP16-INT8/
```
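As a quick smoke test (optional; assumes the INT8 IR was copied as above and numpy is available in the image), the quantized model can be compiled on CPU and run on a random tensor:
```
# Smoke test: load the copied FP16-INT8 IR and run one inference on CPU.
import numpy as np
from openvino.runtime import Core

core = Core()
compiled = core.compile_model(
    "/home/openvino/openvino_models/public/yolo-v4-tiny-tf/FP16-INT8/yolo-v4-tiny-tf.xml", "CPU")

dummy = np.random.rand(1, 416, 416, 3).astype(np.float32)   # NHWC, matches image_input
results = compiled([dummy])
for out in compiled.outputs:
    # Expect (1, 13, 13, 255) and (1, 26, 26, 255), as in the accuracy checker log.
    print(out.get_any_name(), results[out].shape)
```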
## Reference
The listing below shows /home/openvino and the generated openvino_models tree after the steps above.
```
drwxr-xr-x 1 root root 4096 Sep 27 11:13 coco_dataset
drwxr-xr-x 1 root root 4096 Sep 27 11:13 open_model_zoo
drwxr-xr-x 3 root root 4096 Sep 27 11:13 openvino_models
drwxr-xr-x 3 root root 4096 Sep 27 11:21 results
results/yolo-v4-tiny-tf_DefaultQuantization
results/yolo-v4-tiny-tf_DefaultQuantization/2023-09-27_11-21-09/optimized/yolo-v4-tiny-tf.bin
results/yolo-v4-tiny-tf_DefaultQuantization/2023-09-27_11-21-09/optimized/yolo-v4-tiny-tf.xml
results/yolo-v4-tiny-tf_DefaultQuantization/2023-09-27_11-21-09/optimized/yolo-v4-tiny-tf.mapping
results/yolo-v4-tiny-tf_DefaultQuantization/2023-09-27_11-21-09/log.txt
-rw-r--r-- 1 root root 620 Sep 27 11:21 yolo-v4-tf-tiny-int8.json
-rw-r--r-- 1 root root 1617 Sep 27 11:17 yolo-v4-tf-tiny-int8.yml
openvino_models/
`-- public
    `-- yolo-v4-tiny-tf
        |-- FP16
        |   |-- yolo-v4-tiny-tf.bin
        |   |-- yolo-v4-tiny-tf.mapping
        |   `-- yolo-v4-tiny-tf.xml
        |-- FP16-INT8
        |   |-- yolo-v4-tiny-tf.bin
        |   `-- yolo-v4-tiny-tf.xml
        |-- FP32
        |   |-- yolo-v4-tiny-tf.bin
        |   |-- yolo-v4-tiny-tf.mapping
        |   `-- yolo-v4-tiny-tf.xml
        |-- yolo-v3.json
        `-- yolo-v3.pb
```
Note: copy the openvino_models folder to /mnt; its contents will then be accessible under ~/Downloads on the host and /mnt inside the container.
### yolo-v4-tiny-tf-int8.yml
```
models:
  - name: yolo-v4-tiny-tf
    launchers:
      - framework: dlsdk
        device: CPU
        adapter:
          type: yolo_v3
          anchors: 10,14,23,27,37,58,81,82,135,169,344,319
          num: 2
          coords: 4
          classes: 80
          threshold: 0.001
          anchor_masks: [[1, 2, 3], [3, 4, 5]]
          raw_output: True
          output_format: HWB
          outputs:
            - conv2d_20/BiasAdd
            - conv2d_17/BiasAdd
    datasets:
      - name: ms_coco_detection_80_class_without_background
        annotation_conversion:
          converter: mscoco_detection
          annotation_file: /home/openvino/coco_dataset/annotations/instances_val2017.json
        data_source: /home/openvino/coco_dataset/val2017
        preprocessing:
          - type: resize
            size: 416
        postprocessing:
          - type: resize_prediction_boxes
          - type: filter
            apply_to: prediction
            min_confidence: 0.001
            remove_filtered: true
          - type: diou_nms
            overlap: 0.5
          - type: clip_boxes
            apply_to: prediction
        metrics:
          - type: map
            integral: 11point
            ignore_difficult: true
            presenter: print_scalar
            reference: 0.4037
          - name: AP@0.5
            type: coco_precision
            max_detections: 100
            threshold: 0.5
            reference: 0.4636
          - name: AP@0.5:0.05:95
            type: coco_precision
            max_detections: 100
            threshold: '0.5:0.05:0.95'
            reference: 0.2266
```
### yolo-v4-tiny-tf-int8.json
```
{
    "model": {
        "model_name": "yolo-v4-tiny-tf",
        "model": "/home/openvino/openvino_models/public/yolo-v4-tiny-tf/FP16/yolo-v4-tiny-tf.xml",
        "weights": "/home/openvino/openvino_models/public/yolo-v4-tiny-tf/FP16/yolo-v4-tiny-tf.bin"
    },
    "engine": {
        "config": "/home/openvino/yolo-v4-tiny-tf-int8.yml"
    },
    "compression": {
        "algorithms": [
            {
                "name": "DefaultQuantization",
                "params": {
                    "preset": "performance",
                    "stat_subset_size": 300
                }
            }
        ]
    }
}
```
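Both POT runs below print a deprecation notice pointing to NNCF. For reference, a roughly equivalent DefaultQuantization pass can be expressed with the NNCF post-training API. This is only a hedged sketch using the same `performance` preset and 300-sample calibration subset as the JSON above: the preprocessing (plain resize to 416x416, NHWC float input) and the output path are assumptions, and it skips the accuracy-checker evaluation step that `pot -e` performs.
```
# Hedged sketch: NNCF post-training quantization in place of POT.
# Assumes `pip install nncf opencv-python` inside the container.
import glob
import cv2
import numpy as np
import nncf
import openvino.runtime as ov

core = ov.Core()
model = core.read_model(
    "/home/openvino/openvino_models/public/yolo-v4-tiny-tf/FP16/yolo-v4-tiny-tf.xml")

def transform(image_path):
    img = cv2.imread(image_path)                       # HWC, BGR, uint8
    img = cv2.resize(img, (416, 416))                  # assumption: plain resize, no letterbox
    return np.expand_dims(img.astype(np.float32), 0)   # 1x416x416x3

calib_files = sorted(glob.glob("/home/openvino/coco_dataset/val2017/*.jpg"))[:300]
calib_dataset = nncf.Dataset(calib_files, transform)

quantized = nncf.quantize(model, calib_dataset,
                          preset=nncf.QuantizationPreset.PERFORMANCE,
                          subset_size=300)
ov.serialize(quantized, "/home/openvino/yolo-v4-tiny-tf-int8-nncf.xml")  # illustrative path
```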
### accuracy_checker log
```
root@0c5a76075d21:/home/openvino# accuracy_check -c yolo-v4-tf-tiny-int8.yml -m /home/openvino/openvino_models/public/yolo-v4-tiny-tf/FP16 -d /home/openvino/open_model_zoo/data/dataset_definitions.yml -ss 300 -td CPU
/usr/local/lib/python3.10/dist-packages/openvino/tools/accuracy_checker/preprocessor/launcher_preprocessing/ie_preprocessor.py:21: FutureWarning: OpenVINO Inference Engine Python API is deprecated and will be removed in 2024.0 release.For instructions on transitioning to the new API, please refer to https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html
from openvino.inference_engine import ResizeAlgorithm, PreProcessInfo, ColorFormat, MeanVariant # pylint: disable=import-outside-toplevel,package-absolute-imports
/usr/local/lib/python3.10/dist-packages/openvino/tools/accuracy_checker/launcher/dlsdk_launcher.py:60: FutureWarning: OpenVINO nGraph Python API is deprecated and will be removed in 2024.0 release.For instructions on transitioning to the new API, please refer to https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html
import ngraph as ng
Processing info:
model: yolo-v4-tiny-tf
launcher: openvino
device: CPU
dataset: ms_coco_detection_80_class_without_background
OpenCV version: 4.8.0-dev
Annotation conversion for ms_coco_detection_80_class_without_background dataset has been started
Parameters to be used for conversion:
converter: mscoco_detection
annotation_file: /home/openvino/coco_dataset/annotations/instances_val2017.json
Total annotations size: 5000
100 / 5000 processed in 0.278s
200 / 5000 processed in 0.268s
300 / 5000 processed in 0.262s
400 / 5000 processed in 0.267s
500 / 5000 processed in 0.297s
600 / 5000 processed in 0.266s
700 / 5000 processed in 0.266s
800 / 5000 processed in 0.276s
900 / 5000 processed in 0.301s
1000 / 5000 processed in 0.267s
1100 / 5000 processed in 0.290s
1200 / 5000 processed in 0.285s
1300 / 5000 processed in 0.269s
1400 / 5000 processed in 0.307s
1500 / 5000 processed in 0.296s
1600 / 5000 processed in 0.286s
1700 / 5000 processed in 0.295s
1800 / 5000 processed in 0.291s
1900 / 5000 processed in 0.322s
2000 / 5000 processed in 0.276s
2100 / 5000 processed in 0.267s
2200 / 5000 processed in 0.289s
2300 / 5000 processed in 0.321s
2400 / 5000 processed in 0.298s
2500 / 5000 processed in 0.296s
2600 / 5000 processed in 0.289s
2700 / 5000 processed in 0.267s
2800 / 5000 processed in 0.275s
2900 / 5000 processed in 0.260s
3000 / 5000 processed in 0.276s
3100 / 5000 processed in 0.280s
3200 / 5000 processed in 0.274s
3300 / 5000 processed in 0.296s
3400 / 5000 processed in 0.292s
3500 / 5000 processed in 0.326s
3600 / 5000 processed in 0.339s
3700 / 5000 processed in 0.333s
3800 / 5000 processed in 0.288s
3900 / 5000 processed in 0.264s
4000 / 5000 processed in 0.263s
4100 / 5000 processed in 0.275s
4200 / 5000 processed in 0.275s
4300 / 5000 processed in 0.317s
4400 / 5000 processed in 0.287s
4500 / 5000 processed in 0.275s
4600 / 5000 processed in 0.266s
4700 / 5000 processed in 0.301s
4800 / 5000 processed in 0.315s
4900 / 5000 processed in 0.321s
5000 / 5000 processed in 0.272s
5000 objects processed in 14.363 seconds
Annotation conversion for ms_coco_detection_80_class_without_background dataset has been finished
ms_coco_detection_80_class_without_background dataset metadata will be saved to mscoco_det_80.json
Converted annotation for ms_coco_detection_80_class_without_background dataset will be saved to mscoco_det_80.pickle
2023-09-27 11:18:13.857857: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-09-27 11:18:13.859194: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-09-27 11:18:13.878247: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-09-27 11:18:13.878415: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-09-27 11:18:14.209735: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
IE version: 2023.1.0-12185-9e6b00e51cd-releases/2023/1
Loaded CPU plugin version:
CPU - openvino_intel_cpu_plugin: 2023.1.2023.1.0-12185-9e6b00e51cd-releases/2023/1
Found model /home/openvino/openvino_models/public/yolo-v4-tiny-tf/FP16/yolo-v4-tiny-tf.xml
Found weights /home/openvino/openvino_models/public/yolo-v4-tiny-tf/FP16/yolo-v4-tiny-tf.bin
Input info:
Node name: image_input
Tensor names: image_input:0, image_input
precision: f32
shape: (1, 416, 416, 3)
Output info
Node name: conv2d_17/BiasAdd:0
Tensor names: conv2d_17/BiasAdd, conv2d_17/BiasAdd:0
precision: f32
shape: (1, 13, 13, 255)
Node name: conv2d_20/BiasAdd:0
Tensor names: conv2d_20/BiasAdd:0, conv2d_20/BiasAdd
precision: f32
shape: (1, 26, 26, 255)
11:18:14 accuracy_checker WARNING: /usr/local/lib/python3.10/dist-packages/openvino/tools/accuracy_checker/adapters/yolo.py:414: UserWarning: Number of output layers (2) not equal to detection grid size (3). Must be equal with each other, if output tensor resize is required
warnings.warn('Number of output layers ({}) not equal to detection grid size ({}). '
300 objects processed in 15.556 seconds
map: 45.02% [FAILED: abs error = 4.647 | relative error = 0.1151]
AP@0.5: 51.22% [FAILED: abs error = 4.864 | relative error = 0.1049]
AP@0.5:0.05:95: 27.78% [FAILED: abs error = 5.119 | relative error = 0.2259]
```
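The FAILED markers above only mean the measured metrics do not match the `reference` values recorded in yolo-v4-tiny-tf-int8.yml within the checker's tolerance; with `-ss 300` only 300 of the 5000 images are evaluated, so some deviation is expected (here the FP16 model actually scores above the references). The reported errors follow directly from those references, e.g. for `map`:
```
# Worked example for the map line above (values in percent).
measured, reference = 45.02, 40.37      # reference: 0.4037 in yolo-v4-tiny-tf-int8.yml
abs_err = abs(measured - reference)     # ~4.65  -> logged as "abs error = 4.647"
rel_err = abs_err / reference           # ~0.115 -> logged as "relative error = 0.1151"
print(abs_err, rel_err)
```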
### POT log
```
root@0c5a76075d21:/home/openvino# pot -c yolo-v4-tf-tiny-int8.json -e
[ DEBUG ] Creating converter from 7 to 5
[ DEBUG ] Creating converter from 5 to 7
[ DEBUG ] Creating converter from 7 to 5
[ DEBUG ] Creating converter from 5 to 7
[ WARNING ] /usr/local/lib/python3.10/dist-packages/openvino/tools/accuracy_checker/preprocessor/launcher_preprocessing/ie_preprocessor.py:21: FutureWarning: OpenVINO Inference Engine Python API is deprecated and will be removed in 2024.0 release.For instructions on transitioning to the new API, please refer to https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html
from openvino.inference_engine import ResizeAlgorithm, PreProcessInfo, ColorFormat, MeanVariant # pylint: disable=import-outside-toplevel,package-absolute-imports
[ WARNING ] /usr/local/lib/python3.10/dist-packages/openvino/tools/accuracy_checker/launcher/dlsdk_launcher.py:60: FutureWarning: OpenVINO nGraph Python API is deprecated and will be removed in 2024.0 release.For instructions on transitioning to the new API, please refer to https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html
import ngraph as ng
Post-training Optimization Tool is deprecated and will be removed in the future. Please use Neural Network Compression Framework instead: https://github.com/openvinotoolkit/nncf
Nevergrad package could not be imported. If you are planning to use any hyperparameter optimization algo, consider installing it using pip. This implies advanced usage of the tool. Note that nevergrad is compatible only with Python 3.7+
Post-training Optimization Tool is deprecated and will be removed in the future. Please use Neural Network Compression Framework instead: https://github.com/openvinotoolkit/nncf
INFO:openvino.tools.pot.app.run:Output log dir: ./results/yolo-v4-tiny-tf_DefaultQuantization/2023-09-27_11-21-09
INFO:openvino.tools.pot.app.run:Creating pipeline:
Algorithm: DefaultQuantization
Parameters:
preset : performance
stat_subset_size : 300
target_device : ANY
model_type : None
dump_intermediate_model : False
inplace_statistics : True
exec_log_dir : ./results/yolo-v4-tiny-tf_DefaultQuantization/2023-09-27_11-21-09
===========================================================================
IE version: 2023.1.0-12185-9e6b00e51cd-releases/2023/1
Loaded CPU plugin version:
CPU - openvino_intel_cpu_plugin: 2023.1.2023.1.0-12185-9e6b00e51cd-releases/2023/1
11:21:10 accuracy_checker WARNING: /usr/local/lib/python3.10/dist-packages/openvino/tools/accuracy_checker/adapters/yolo.py:414: UserWarning: Number of output layers (2) not equal to detection grid size (3). Must be equal with each other, if output tensor resize is required
warnings.warn('Number of output layers ({}) not equal to detection grid size ({}). '
Annotation conversion for ms_coco_detection_80_class_without_background dataset has been started
Parameters to be used for conversion:
converter: mscoco_detection
annotation_file: /home/openvino/coco_dataset/annotations/instances_val2017.json
Total annotations size: 5000
100 / 5000 processed in 0.283s
200 / 5000 processed in 0.276s
300 / 5000 processed in 0.303s
400 / 5000 processed in 0.281s
500 / 5000 processed in 0.274s
600 / 5000 processed in 0.276s
700 / 5000 processed in 0.276s
800 / 5000 processed in 0.278s
900 / 5000 processed in 0.276s
1000 / 5000 processed in 0.283s
1100 / 5000 processed in 0.278s
1200 / 5000 processed in 0.275s
1300 / 5000 processed in 0.279s
1400 / 5000 processed in 0.280s
1500 / 5000 processed in 0.279s
1600 / 5000 processed in 0.276s
1700 / 5000 processed in 0.277s
1800 / 5000 processed in 0.287s
1900 / 5000 processed in 0.276s
2000 / 5000 processed in 0.278s
2100 / 5000 processed in 0.275s
2200 / 5000 processed in 0.280s
2300 / 5000 processed in 0.274s
2400 / 5000 processed in 0.278s
2500 / 5000 processed in 0.278s
2600 / 5000 processed in 0.278s
2700 / 5000 processed in 0.278s
2800 / 5000 processed in 0.277s
2900 / 5000 processed in 0.278s
3000 / 5000 processed in 0.275s
3100 / 5000 processed in 0.276s
3200 / 5000 processed in 0.277s
3300 / 5000 processed in 0.278s
3400 / 5000 processed in 0.275s
3500 / 5000 processed in 0.277s
3600 / 5000 processed in 0.276s
3700 / 5000 processed in 0.280s
3800 / 5000 processed in 0.279s
3900 / 5000 processed in 0.276s
4000 / 5000 processed in 0.278s
4100 / 5000 processed in 0.276s
4200 / 5000 processed in 0.276s
4300 / 5000 processed in 0.276s
4400 / 5000 processed in 0.278s
4500 / 5000 processed in 0.274s
4600 / 5000 processed in 0.282s
4700 / 5000 processed in 0.279s
4800 / 5000 processed in 0.278s
4900 / 5000 processed in 0.278s
5000 / 5000 processed in 0.280s
5000 objects processed in 13.906 seconds
Annotation conversion for ms_coco_detection_80_class_without_background dataset has been finished
2023-09-27 11:21:24.750528: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-09-27 11:21:24.751787: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-09-27 11:21:24.770594: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-09-27 11:21:24.770769: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-09-27 11:21:25.162189: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
INFO:openvino.tools.pot.pipeline.pipeline:Inference Engine version: 2023.1.0-12185-9e6b00e51cd-releases/2023/1
INFO:openvino.tools.pot.pipeline.pipeline:Model Optimizer version: 2023.1.0-12185-9e6b00e51cd-releases/2023/1
INFO:openvino.tools.pot.pipeline.pipeline:Post-Training Optimization Tool version: 2023.1.0-12185-9e6b00e51cd-releases/2023/1
INFO:openvino.tools.pot.statistics.collector:Start computing statistics for algorithms : DefaultQuantization
INFO:openvino.tools.pot.statistics.collector:Computing statistics finished
INFO:openvino.tools.pot.pipeline.pipeline:Start algorithm: DefaultQuantization
INFO:openvino.tools.pot.algorithms.quantization.default.algorithm:Start computing statistics for algorithm : ActivationChannelAlignment
INFO:openvino.tools.pot.algorithms.quantization.default.algorithm:Computing statistics finished
INFO:openvino.tools.pot.algorithms.quantization.default.algorithm:Start computing statistics for algorithms : MinMaxQuantization,FastBiasCorrection
INFO:openvino.tools.pot.algorithms.quantization.default.algorithm:Computing statistics finished
11:22:05 accuracy_checker WARNING: /usr/local/lib/python3.10/dist-packages/openvino/tools/pot/algorithms/quantization/utils.py:318: FutureWarning: Unlike other reduction functions (e.g. `skew`, `kurtosis`), the default behavior of `mode` typically preserves the axis it acts along. In SciPy 1.11.0, this behavior will change: the default value of `keepdims` will become False, the `axis` over which the statistic is taken will be eliminated, and the value None will no longer be accepted. Set `keepdims` to True or False to avoid this warning.
input_shape = mode(activations_statistics[input_node_name]['shape'])[0][0]
INFO:openvino.tools.pot.pipeline.pipeline:Finished: DefaultQuantization
===========================================================================
INFO:openvino.tools.pot.pipeline.pipeline:Evaluation of generated model
INFO:openvino.tools.pot.engines.ac_engine:Start inference on the whole dataset
Total dataset size: 5000
1000 / 5000 processed in 33.289s
2000 / 5000 processed in 33.959s
3000 / 5000 processed in 33.307s
4000 / 5000 processed in 33.453s
5000 / 5000 processed in 31.832s
5000 objects processed in 165.842 seconds
INFO:openvino.tools.pot.engines.ac_engine:Inference finished
INFO:openvino.tools.pot.app.run:map : 0.3981603174540046
INFO:openvino.tools.pot.app.run:AP@0.5 : 0.45923282432168566
INFO:openvino.tools.pot.app.run:AP@0.5:0.05:95 : 0.2208863403266216
```
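For context, the INT8 metrics reported by POT above can be compared against the FP16 reference values recorded in yolo-v4-tiny-tf-int8.yml (the POT evaluation ran over the full 5000-image set, unlike the 300-image accuracy checker run earlier):
```
# Accuracy drop of the quantized model vs. the FP16 references in the yml.
fp16_ref = {"map": 0.4037, "AP@0.5": 0.4636, "AP@0.5:0.05:95": 0.2266}
int8     = {"map": 0.3982, "AP@0.5": 0.4592, "AP@0.5:0.05:95": 0.2209}

for metric, ref in fp16_ref.items():
    drop = ref - int8[metric]
    print(f"{metric}: drop {drop:.4f} ({100 * drop / ref:.2f}% relative)")
```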