# Use POT to Quantize yolo-v4-tf Public Model
###### tags: `POT`
## Use the OpenVINO Docker Hub image
```=bash
docker run -it -v ~/Downloads:/mnt -u root --rm openvino/ubuntu20_data_dev:latest
```
## Run Accuracy Checker and POT
Run the following steps inside the `ubuntu20_data_dev` container.
#### 0. Download the COCO 2017 validation images and trainval annotations
```=bash
cd /home/openvino
apt update
apt install unzip
mkdir coco_dataset
cd coco_dataset
curl http://images.cocodataset.org/zips/val2017.zip -o val2017.zip
unzip val2017.zip
curl http://images.cocodataset.org/annotations/annotations_trainval2017.zip -o trainval_2017.zip
unzip trainval_2017.zip
```
##### coco_dataset content
```
coco_dataset/
|-- annotations/
|   |-- captions_train2017.json
|   |-- captions_val2017.json
|   |-- instances_train2017.json
|   |-- instances_val2017.json
|   |-- person_keypoints_train2017.json
|   `-- person_keypoints_val2017.json
`-- val2017/
    |-- 000000042102.jpg
    |-- 000000060102.jpg
    |-- 000000245102.jpg
    ...
    `-- 000000364102.jpg
```
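Before running the checker, it can save time to confirm the extracted layout matches what the `.yml` config expects. A minimal sketch; the `check_coco_layout` helper is hypothetical, not part of any OpenVINO tooling:

```python
from pathlib import Path

def check_coco_layout(root):
    """Return a list of entries missing from an extracted COCO 2017 val layout."""
    root = Path(root)
    required = [
        root / "annotations" / "instances_val2017.json",  # used by mscoco_detection converter
        root / "val2017",                                 # data_source directory
    ]
    return [str(p) for p in required if not p.exists()]

missing = check_coco_layout("/home/openvino/coco_dataset")
print("missing:", missing or "none - dataset layout OK")
```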
#### 1. Download yolo-v4-tf
```=bash
python3 /opt/intel/openvino_2021.3.394/deployment_tools/tools/model_downloader/downloader.py --name yolo-v4-tf -o /home/openvino/openvino_models
```
#### 2. Convert yolo-v4-tf to IR
```=bash
python3 /opt/intel/openvino_2021.3.394/deployment_tools/tools/model_downloader/converter.py --name yolo-v4-tf -d /home/openvino/openvino_models -o /home/openvino/openvino_models
```
#### 3. Run Accuracy Checker on yolo-v4-tf
```=bash
accuracy_check -c yolo-v4-tf-int8.yml -m /home/openvino/openvino_models/public/yolo-v4-tf/FP32 -d /opt/intel/openvino_2021.3.394/deployment_tools/open_model_zoo/tools/accuracy_checker/dataset_definitions.yml -ss 300
```
#### 4. Run POT on yolo-v4-tf
```=bash
pot -c yolo-v4-tf-int8.json -e
```
#### 5. Copy yolo-v4-tf FP16-INT8 IR
```=bash
mkdir /home/openvino/openvino_models/public/yolo-v4-tf/FP16-INT8/
cp -ar results/yolo-v4-tf_DefaultQuantization/2021-04-13_05-44-20/optimized/* /home/openvino/openvino_models/public/yolo-v4-tf/FP16-INT8/
```
#### Note: Copy the openvino_models folder to /mnt. It will then be accessible as ~/Downloads on the host and /mnt in the container.
## Reference
```
drwxr-xr-x 4 openvino openvino 4096 Apr 13 03:27 coco_dataset
drwxr-xr-x 3 root root 4096 Apr 13 03:36 openvino_models
drwxr-xr-x 4 root root 4096 Apr 13 05:44 results
results/yolo-v4-tf_DefaultQuantization/2021-04-13_05-44-20/log.txt
results/yolo-v4-tf_DefaultQuantization/2021-04-13_05-44-20/optimized
results/yolo-v4-tf_DefaultQuantization/2021-04-13_05-44-20/optimized/yolo-v4-tf.xml
results/yolo-v4-tf_DefaultQuantization/2021-04-13_05-44-20/optimized/yolo-v4-tf.bin
results/yolo-v4-tf_DefaultQuantization/2021-04-13_05-44-20/optimized/yolo-v4-tf.mapping
-rw-r--r-- 1 root root 591 Apr 13 05:44 yolo-v4-tf-int8.json
-rw-r--r-- 1 root root 1571 Apr 13 05:37 yolo-v4-tf-int8.yml
openvino_models/
`-- public
`-- yolo-v4-tf
|-- FP16
| |-- yolo-v4-tf.bin
| |-- yolo-v4-tf.mapping
| `-- yolo-v4-tf.xml
|-- FP16-INT8
| |-- yolo-v4-tf.bin
| |-- yolo-v4-tf.mapping
| `-- yolo-v4-tf.xml
|-- FP32
| |-- yolo-v4-tf.bin
| |-- yolo-v4-tf.mapping
| `-- yolo-v4-tf.xml
|-- keras-YOLOv3-model-set
| |-- cfg
| | `-- yolov4.cfg
| |-- common
| | |-- __pycache__
| | | `-- utils.cpython-38.pyc
| | |-- utils.py
| | `-- utils.py.orig
| |-- tools
| | `-- model_converter
| | |-- convert.py
| | |-- keras_to_tensorflow.py
| | `-- keras_to_tensorflow.py.orig
| `-- yolo4
| `-- models
| |-- __pycache__
| | `-- layers.cpython-38.pyc
| |-- layers.py
| `-- layers.py.orig
|-- yolo-v4.h5
|-- yolo-v4.pb
`-- yolov4.weights
```
### yolo-v4-tf-int8.yml
```
models:
- name: yolo_v4_tf
launchers:
- framework: dlsdk
device: CPU
adapter:
type: yolo_v3
anchors: 12,16,19,36,40,28,36,75,76,55,72,146,142,110,192,243,459,401
num: 3
coords: 4
classes: 80
threshold: 0.001
anchor_masks: [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
raw_output: True
outputs:
- conv2d_93/BiasAdd/Add
- conv2d_101/BiasAdd/Add
- conv2d_109/BiasAdd/Add
datasets:
- name: ms_coco_detection_80_class_without_background
annotation_conversion:
converter: mscoco_detection
annotation_file: /home/openvino/coco_dataset/annotations/instances_val2017.json
data_source: /home/openvino/coco_dataset/val2017
preprocessing:
- type: resize
size: 608
postprocessing:
- type: resize_prediction_boxes
- type: filter
apply_to: prediction
min_confidence: 0.001
remove_filtered: true
- type: diou_nms
overlap: 0.5
- type: clip_boxes
apply_to: prediction
metrics:
- type: map
integral: 11point
ignore_difficult: true
presenter: print_scalar
- name: AP@0.5
type: coco_precision
max_detections: 100
threshold: 0.5
- name: AP@0.5:0.05:95
type: coco_precision
max_detections: 100
threshold: '0.5:0.05:0.95'
```
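The adapter settings above line up with the IR output shapes reported later by accuracy_checker. A quick sanity check, assuming the standard YOLOv4 head strides of 8/16/32 (which the logged grid sizes 76/38/19 imply for a 608×608 input):

```python
# Sanity-check the yolo_v3 adapter settings against the IR output shapes.
anchors = [12,16, 19,36, 40,28, 36,75, 76,55, 72,146, 142,110, 192,243, 459,401]
anchor_masks = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
classes, coords = 80, 4
input_size = 608

# 18 numbers = 9 (w, h) anchor pairs, split 3 per scale by the masks.
num_anchor_pairs = len(anchors) // 2
print("anchor pairs:", num_anchor_pairs)              # 9

# Each head predicts 3 anchors x (4 box coords + 1 objectness + 80 classes).
channels = len(anchor_masks[0]) * (coords + 1 + classes)
print("channels per head:", channels)                 # 255, matching [1, 255, H, W]

# Head strides 8/16/32 give the three grid sizes seen in the output shapes.
for stride in (8, 16, 32):
    g = input_size // stride
    print(f"stride {stride}: grid {g}x{g}")           # 76x76, 38x38, 19x19
```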
### yolo-v4-tf-int8.json
```
{
"model": {
"model_name": "yolo-v4-tf",
"model": "/home/openvino/openvino_models/public/yolo-v4-tf/FP16/yolo-v4-tf.xml",
"weights": "/home/openvino/openvino_models/public/yolo-v4-tf/FP16/yolo-v4-tf.bin"
},
"engine": {
"config": "/home/openvino/yolo-v4-tf-int8.yml"
},
"compression": {
"algorithms": [
{
"name": "DefaultQuantization",
"params": {
"preset": "performance",
"stat_subset_size": 300
}
}
]
}
}
```
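A broken POT config fails late, after annotation conversion. The JSON above can be checked up front; a small sketch that inlines the config and verifies the structure `pot` expects:

```python
import json

# The POT config shown above, inlined here to confirm it parses and carries
# the expected top-level sections before handing it to `pot`.
cfg_text = """
{
    "model": {
        "model_name": "yolo-v4-tf",
        "model": "/home/openvino/openvino_models/public/yolo-v4-tf/FP16/yolo-v4-tf.xml",
        "weights": "/home/openvino/openvino_models/public/yolo-v4-tf/FP16/yolo-v4-tf.bin"
    },
    "engine": {
        "config": "/home/openvino/yolo-v4-tf-int8.yml"
    },
    "compression": {
        "algorithms": [
            {
                "name": "DefaultQuantization",
                "params": {
                    "preset": "performance",
                    "stat_subset_size": 300
                }
            }
        ]
    }
}
"""
cfg = json.loads(cfg_text)
assert {"model", "engine", "compression"} <= cfg.keys()
algo = cfg["compression"]["algorithms"][0]
print(algo["name"], algo["params"]["preset"], algo["params"]["stat_subset_size"])
```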
### accuracy_checker log
```
accuracy_check -c yolo-v4-tf-int8.yml -m /home/openvino/openvino_models/public/yolo-v4-tf/FP32 -d /opt/intel/openvino_2021.3.394/deployment_tools/open_model_zoo/tools/accuracy_checker/dataset_definitions.yml -ss 300
Processing info:
model: yolo_v4_tf
launcher: dlsdk
device: CPU
dataset: ms_coco_detection_80_class_without_background
OpenCV version: 4.5.2-openvino
Annotation conversion for ms_coco_detection_80_class_without_background dataset has been started
Parameters to be used for conversion:
converter: mscoco_detection
annotation_file: /home/openvino/coco_dataset/annotations/instances_val2017.json
Total annotations size: 5000
100 / 5000 processed in 0.508s
200 / 5000 processed in 0.465s
300 / 5000 processed in 0.453s
400 / 5000 processed in 0.445s
500 / 5000 processed in 0.440s
600 / 5000 processed in 0.437s
700 / 5000 processed in 0.435s
800 / 5000 processed in 0.433s
900 / 5000 processed in 0.432s
1000 / 5000 processed in 0.434s
1100 / 5000 processed in 0.433s
1200 / 5000 processed in 0.436s
1300 / 5000 processed in 0.435s
1400 / 5000 processed in 0.522s
1500 / 5000 processed in 0.509s
1600 / 5000 processed in 0.484s
1700 / 5000 processed in 0.469s
1800 / 5000 processed in 0.463s
1900 / 5000 processed in 0.457s
2000 / 5000 processed in 0.454s
2100 / 5000 processed in 0.453s
2200 / 5000 processed in 0.449s
2300 / 5000 processed in 0.450s
2400 / 5000 processed in 0.449s
2500 / 5000 processed in 0.448s
2600 / 5000 processed in 0.444s
2700 / 5000 processed in 0.445s
2800 / 5000 processed in 0.444s
2900 / 5000 processed in 0.441s
3000 / 5000 processed in 0.442s
3100 / 5000 processed in 0.439s
3200 / 5000 processed in 0.439s
3300 / 5000 processed in 0.436s
3400 / 5000 processed in 0.435s
3500 / 5000 processed in 0.435s
3600 / 5000 processed in 0.433s
3700 / 5000 processed in 0.431s
3800 / 5000 processed in 0.428s
3900 / 5000 processed in 0.428s
4000 / 5000 processed in 0.424s
4100 / 5000 processed in 0.423s
4200 / 5000 processed in 0.422s
4300 / 5000 processed in 0.419s
4400 / 5000 processed in 0.420s
4500 / 5000 processed in 0.416s
4600 / 5000 processed in 0.417s
4700 / 5000 processed in 0.413s
4800 / 5000 processed in 0.417s
4900 / 5000 processed in 0.418s
5000 / 5000 processed in 0.414s
5000 objects processed in 22.117 seconds
Annotation conversion for ms_coco_detection_80_class_without_background dataset has been finished
ms_coco_detection_80_class_without_background dataset metadata will be saved to mscoco_det_80.json
Converted annotation for ms_coco_detection_80_class_without_background dataset will be saved to mscoco_det_80.pickle
IE version: 2.1.2021.3.0-2787-60059f2c755-releases/2021/3
Loaded CPU plugin version:
CPU - MKLDNNPlugin: 2.1.2021.3.0-2787-60059f2c755-releases/2021/3
Found model /home/openvino/openvino_models/public/yolo-v4-tf/FP32/yolo-v4-tf.xml
Found weights /home/openvino/openvino_models/public/yolo-v4-tf/FP32/yolo-v4-tf.bin
Input info:
Layer name: image_input
precision: FP32
shape [1, 3, 608, 608]
Output info
Layer name: conv2d_101/BiasAdd/Add
precision: FP32
shape: [1, 255, 38, 38]
Layer name: conv2d_109/BiasAdd/Add
precision: FP32
shape: [1, 255, 19, 19]
Layer name: conv2d_93/BiasAdd/Add
precision: FP32
shape: [1, 255, 76, 76]
0%|▍ | 1/300 [00:00<04:54]05:37:51 accuracy_checker WARNING: /opt/intel/openvino/deployment_tools/open_model_zoo/tools/accuracy_checker/accuracy_checker/adapters/yolo.py:373: RuntimeWarning: overflow encountered in exp
prob_correct=lambda x: 1.0 / (1.0 + np.exp(-x)))
300 objects processed in 317.375 seconds
map: 71.78%
AP@0.5: 78.38%
AP@0.5:0.05:95: 53.79%
```
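The `overflow encountered in exp` RuntimeWarning in the log comes from the adapter applying a plain sigmoid `1.0 / (1.0 + np.exp(-x))` to large-magnitude logits; it is benign, since the overflowed `exp` saturates the result to the correct 0.0. For reference, a numerically stable sigmoid (a sketch, not the adapter's actual code) avoids the warning entirely:

```python
import numpy as np

def sigmoid_stable(x):
    """Sigmoid that never passes a large positive argument to exp."""
    out = np.empty_like(x, dtype=np.float64)
    pos = x >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-x[pos]))   # -x <= 0: exp cannot overflow
    ex = np.exp(x[~pos])                        # x < 0: exp cannot overflow
    out[~pos] = ex / (1.0 + ex)
    return out

x = np.array([-1000.0, 0.0, 1000.0])
with np.errstate(over="raise"):                 # the naive form would overflow here
    print(sigmoid_stable(x))                    # values: 0.0, 0.5, 1.0
```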
### pot log
```
pot -c yolo-v4-tf-int8.json -e
05:44:19 accuracy_checker WARNING: /usr/local/lib/python3.8/dist-packages/networkx/classes/graph.py:23: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
from collections import Mapping
05:44:19 accuracy_checker WARNING: /usr/local/lib/python3.8/dist-packages/networkx/classes/reportviews.py:95: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
from collections import Mapping, Set, Iterable
05:44:20 accuracy_checker WARNING: /opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/compression/algorithms/quantization/optimization/algorithm.py:41: UserWarning: Nevergrad package could not be imported. If you are planning to useany hyperparameter optimization algo, consider installing itusing pip. This implies advanced usage of the tool.Note that nevergrad is compatible only with Python 3.6+
warnings.warn(
05:44:20 accuracy_checker WARNING: /usr/local/lib/python3.8/dist-packages/past/builtins/misc.py:45: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
from imp import reload
INFO:app.run:Output log dir: ./results/yolo-v4-tf_DefaultQuantization/2021-04-13_05-44-20
INFO:app.run:Creating pipeline:
Algorithm: DefaultQuantization
Parameters:
preset : performance
stat_subset_size : 300
target_device : ANY
model_type : None
dump_intermediate_model : False
exec_log_dir : ./results/yolo-v4-tf_DefaultQuantization/2021-04-13_05-44-20
===========================================================================
IE version: 2.1.2021.3.0-2787-60059f2c755-releases/2021/3
Loaded CPU plugin version:
CPU - MKLDNNPlugin: 2.1.2021.3.0-2787-60059f2c755-releases/2021/3
Annotation conversion for ms_coco_detection_80_class_without_background dataset has been started
Parameters to be used for conversion:
converter: mscoco_detection
annotation_file: /home/openvino/coco_dataset/annotations/instances_val2017.json
Total annotations size: 5000
100 / 5000 processed in 0.534s
200 / 5000 processed in 0.484s
300 / 5000 processed in 0.464s
400 / 5000 processed in 0.455s
500 / 5000 processed in 0.449s
600 / 5000 processed in 0.450s
700 / 5000 processed in 0.449s
800 / 5000 processed in 0.467s
900 / 5000 processed in 0.490s
1000 / 5000 processed in 0.468s
1100 / 5000 processed in 0.464s
1200 / 5000 processed in 0.453s
1300 / 5000 processed in 0.452s
1400 / 5000 processed in 0.449s
1500 / 5000 processed in 0.451s
1600 / 5000 processed in 0.448s
1700 / 5000 processed in 0.450s
1800 / 5000 processed in 0.448s
1900 / 5000 processed in 0.450s
2000 / 5000 processed in 0.450s
2100 / 5000 processed in 0.448s
2200 / 5000 processed in 0.453s
2300 / 5000 processed in 0.448s
2400 / 5000 processed in 0.455s
2500 / 5000 processed in 0.446s
2600 / 5000 processed in 0.447s
2700 / 5000 processed in 0.458s
2800 / 5000 processed in 0.450s
2900 / 5000 processed in 0.448s
3000 / 5000 processed in 0.447s
3100 / 5000 processed in 0.444s
3200 / 5000 processed in 0.442s
3300 / 5000 processed in 0.442s
3400 / 5000 processed in 0.438s
3500 / 5000 processed in 0.439s
3600 / 5000 processed in 0.437s
3700 / 5000 processed in 0.436s
3800 / 5000 processed in 0.442s
3900 / 5000 processed in 0.437s
4000 / 5000 processed in 0.432s
4100 / 5000 processed in 0.429s
4200 / 5000 processed in 0.432s
4300 / 5000 processed in 0.426s
4400 / 5000 processed in 0.424s
4500 / 5000 processed in 0.422s
4600 / 5000 processed in 0.420s
4700 / 5000 processed in 0.420s
4800 / 5000 processed in 0.417s
4900 / 5000 processed in 0.418s
5000 / 5000 processed in 0.418s
5000 objects processed in 22.341 seconds
Annotation conversion for ms_coco_detection_80_class_without_background dataset has been finished
INFO:compression.statistics.collector:Start computing statistics for algorithms : DefaultQuantization
INFO:compression.statistics.collector:Computing statistics finished
INFO:compression.pipeline.pipeline:Start algorithm: DefaultQuantization
INFO:compression.algorithms.quantization.default.algorithm:Start computing statistics for algorithm : ActivationChannelAlignment
INFO:compression.algorithms.quantization.default.algorithm:Computing statistics finished
INFO:compression.algorithms.quantization.default.algorithm:Start computing statistics for algorithms : MinMaxQuantization,FastBiasCorrection
05:45:06 accuracy_checker WARNING: /opt/intel/openvino/deployment_tools/model_optimizer/mo/back/ie_ir_ver_2/emitter.py:243: DeprecationWarning: This method will be removed in future versions. Use 'list(elem)' or iteration over elem instead.
if len(element.attrib) == 0 and len(element.getchildren()) == 0:
05:45:37 accuracy_checker WARNING: /opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/libs/open_model_zoo/tools/accuracy_checker/accuracy_checker/adapters/yolo.py:373: RuntimeWarning: overflow encountered in exp
prob_correct=lambda x: 1.0 / (1.0 + np.exp(-x)))
INFO:compression.algorithms.quantization.default.algorithm:Computing statistics finished
INFO:compression.pipeline.pipeline:Finished: DefaultQuantization
===========================================================================
INFO:compression.pipeline.pipeline:Evaluation of generated model
INFO:compression.engines.ac_engine:Start inference on the whole dataset
Total dataset size: 5000
1000 / 5000 processed in 1028.772s
2000 / 5000 processed in 1026.357s
3000 / 5000 processed in 1026.659s
4000 / 5000 processed in 1030.753s
5000 / 5000 processed in 1026.593s
5000 objects processed in 5139.134 seconds
INFO:compression.engines.ac_engine:Inference finished
INFO:app.run:map : 0.7059990099769229
INFO:app.run:AP@0.5 : 0.7724924313572912
INFO:app.run:AP@0.5:0.05:95 : 0.4788317444262568
```
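Comparing the two logs: the FP32 accuracy_checker run evaluated only 300 samples (`-ss 300`) while the POT evaluation ran the full 5000-image set, so the numbers are indicative rather than strictly comparable. A small sketch of the deltas, with the metrics copied from the logs above:

```python
# FP32 metrics from the accuracy_checker log (300-sample subset) and INT8
# metrics from the POT evaluation log (full 5000-image set).
fp32 = {"map": 0.7178, "AP@0.5": 0.7838, "AP@0.5:0.05:95": 0.5379}
int8 = {"map": 0.7060, "AP@0.5": 0.7725, "AP@0.5:0.05:95": 0.4788}

for k in fp32:
    drop = (fp32[k] - int8[k]) * 100            # absolute drop in percentage points
    print(f"{k}: {fp32[k]:.2%} -> {int8[k]:.2%} (drop {drop:.2f} pts)")
```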
#### Replace weights and cfg
The converter freezes the Keras model to `yolo-v4.pb` and then invokes Model Optimizer. To rebuild the IR from custom weights, replace `yolov4.weights` and the cfg file below, then re-run step 2.
```
I0415 13:47:35.862991 140451038899072 keras_to_tensorflow.py:183] Saved the freezed graph at /home/openvino/openvino_models/public/yolo-v4-tf/yolo-v4.pb
```
```
ls /home/openvino/openvino_models/public/yolo-v4-tf/
FP16  FP32  keras-YOLOv3-model-set  yolo-v4.h5  yolo-v4.pb  yolov4.weights
```
cfg file:
```
/home/openvino/openvino_models/public/yolo-v4-tf/keras-YOLOv3-model-set/cfg/yolov4.cfg
```
Model Optimizer command issued by the converter:
```
/usr/bin/python3 -- /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
    --framework=tf --data_type=FP16 \
    --output_dir=/home/openvino/openvino_models/public/yolo-v4-tf/FP16 --model_name=yolo-v4-tf \
    '--input_shape=[1,608,608,3]' \
    --input=image_input '--scale_values=image_input[255]' \
    --reverse_input_channels --input_model=/home/openvino/openvino_models/public/yolo-v4-tf/yolo-v4.pb
```
`model_optimizer_args` in /opt/intel/openvino_2021.3.394/deployment_tools/open_model_zoo/models/public/yolo-v4-tf/model.yml:
```
model_optimizer_args:
- --input_shape=[1,608,608,3]
- --input=image_input
- --scale_values=image_input[255]
- --reverse_input_channels
- --input_model=$conv_dir/yolo-v4.pb
```