# Use POT to Quantize yolo-v3-tiny-tf Public Model
###### tags: `POT`
## Use the OpenVINO Docker Hub image
```
docker run -it -v ~/Downloads:/mnt -u root --rm openvino/ubuntu20_data_dev:latest
```
## Run Accuracy Checker and POT
Run the following steps inside the `ubuntu20_data_dev` container.
#### 0. Download the COCO 2017 validation images and trainval annotations
```
cd /home/openvino
apt update
apt install -y unzip
mkdir coco_dataset
cd coco_dataset
curl http://images.cocodataset.org/zips/val2017.zip -o val2017.zip
unzip val2017.zip
curl http://images.cocodataset.org/annotations/annotations_trainval2017.zip -o trainval_2017.zip
unzip trainval_2017.zip
```
##### coco_dataset content
```
coco_dataset/
|-- annotations/
|   |-- captions_train2017.json
|   |-- captions_val2017.json
|   |-- instances_train2017.json
|   |-- instances_val2017.json
|   |-- person_keypoints_train2017.json
|   `-- person_keypoints_val2017.json
`-- val2017/
    |-- 000000042102.jpg
    |-- 000000060102.jpg
    |-- 000000245102.jpg
    ...
    `-- 000000364102.jpg
```
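`val2017.zip` unpacks to exactly 5000 images, so a quick file count catches a truncated download before it wastes an evaluation run. A minimal sketch of the check, demonstrated on a throwaway directory; point `IMG_DIR` at `/home/openvino/coco_dataset/val2017` in the container:

```shell
# Count .jpg files under IMG_DIR; the real val2017 folder should report 5000.
# Demonstrated here on a temporary directory holding two placeholder files.
IMG_DIR=$(mktemp -d)
touch "$IMG_DIR/000000000001.jpg" "$IMG_DIR/000000000002.jpg"
count=$(find "$IMG_DIR" -name '*.jpg' | wc -l | tr -d ' ')
echo "images: $count"
```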
#### 1. Download yolo-v3-tiny-tf
```
python3 /opt/intel/openvino_2021.3.394/deployment_tools/tools/model_downloader/downloader.py --name yolo-v3-tiny-tf -o /home/openvino/openvino_models
```
#### 2. Convert yolo-v3-tiny-tf to IR
```
python3 /opt/intel/openvino_2021.3.394/deployment_tools/tools/model_downloader/converter.py --name yolo-v3-tiny-tf -d /home/openvino/openvino_models -o /home/openvino/openvino_models
```
#### 3. Run Accuracy Checker on yolo-v3-tiny-tf
```
accuracy_check -c yolo-v3-tf-tiny-int8.yml -m /home/openvino/openvino_models/public/yolo-v3-tiny-tf/FP16 -d /opt/intel/openvino_2021.3.394/deployment_tools/open_model_zoo/tools/accuracy_checker/dataset_definitions.yml -ss 300
```
`-ss 300` evaluates on a 300-image subsample instead of the full 5000-image validation set.
#### 4. Run POT on yolo-v3-tiny-tf
```
pot -c yolo-v3-tf-tiny-int8.json -e
```
The `-e` flag evaluates the optimized model on the dataset after quantization.
#### 5. Copy yolo-v3-tiny-tf FP16-INT8 IR
```
mkdir /home/openvino/openvino_models/public/yolo-v3-tiny-tf/FP16-INT8/
cp -ar results/yolo-v3-tiny-tf_DefaultQuantization/2021-04-13_04-01-17/optimized/* /home/openvino/openvino_models/public/yolo-v3-tiny-tf/FP16-INT8/
```
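POT writes each run into a new timestamp-named directory, so the `2021-04-13_04-01-17` path above only matches that particular run. A sketch of resolving the newest run automatically, assuming POT's `YYYY-MM-DD_HH-MM-SS` directory naming (which sorts correctly as plain strings); demonstrated on throwaway directories standing in for `./results/`:

```shell
# Pick the lexicographically largest run directory under the results root.
# The demo layout below stands in for results/yolo-v3-tiny-tf_DefaultQuantization/.
results=$(mktemp -d)
mkdir -p "$results/2021-04-13_04-01-17/optimized" "$results/2021-04-14_10-22-05/optimized"
latest=$(ls -d "$results"/*/ | sort | tail -n 1)
echo "newest run: $latest"
# In the container: cp -ar "${latest}optimized/"* .../FP16-INT8/
```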
## Reference
```
drwxr-xr-x 4 openvino openvino 4096 Apr 13 03:27 coco_dataset
drwxr-xr-x 3 root     root     4096 Apr 13 03:36 openvino_models
drwxr-xr-x 3 root     root     4096 Apr 13 04:01 results
-rw-r--r-- 1 root     root      591 Apr 13 03:54 yolo-v3-tf-tiny-int8.json
-rw-r--r-- 1 root     root     1386 Apr 13 03:53 yolo-v3-tf-tiny-int8.yml

./results/yolo-v3-tiny-tf_DefaultQuantization/2021-04-13_04-01-17/log.txt
./results/yolo-v3-tiny-tf_DefaultQuantization/2021-04-13_04-01-17/optimized
./results/yolo-v3-tiny-tf_DefaultQuantization/2021-04-13_04-01-17/optimized/yolo-v3-tiny-tf.mapping
./results/yolo-v3-tiny-tf_DefaultQuantization/2021-04-13_04-01-17/optimized/yolo-v3-tiny-tf.bin
./results/yolo-v3-tiny-tf_DefaultQuantization/2021-04-13_04-01-17/optimized/yolo-v3-tiny-tf.xml

openvino_models/
`-- public
    `-- yolo-v3-tiny-tf
        |-- FP16
        |   |-- yolo-v3-tiny-tf.bin
        |   |-- yolo-v3-tiny-tf.mapping
        |   `-- yolo-v3-tiny-tf.xml
        |-- FP16-INT8
        |   |-- yolo-v3-tiny-tf.bin
        |   |-- yolo-v3-tiny-tf.mapping
        |   `-- yolo-v3-tiny-tf.xml
        |-- FP32
        |   |-- yolo-v3-tiny-tf.bin
        |   |-- yolo-v3-tiny-tf.mapping
        |   `-- yolo-v3-tiny-tf.xml
        |-- yolo-v3-tiny-tf.json
        `-- yolo-v3-tiny-tf.pb
```
Note: Copy the `openvino_models` folder to `/mnt`. Because of the `-v ~/Downloads:/mnt` mount, it will then be accessible as `~/Downloads` on the host and `/mnt` inside the container.
### yolo-v3-tf-tiny-int8.yml
```
models:
  - name: yolo_v3_tiny_tf
    launchers:
      - framework: dlsdk
        device: CPU
        adapter:
          type: yolo_v3
          anchors: tiny_yolo_v3
          num: 3
          coords: 4
          classes: 80
          anchor_masks: [[3, 4, 5], [1, 2, 3]]
          outputs:
            - conv2d_9/Conv2D/YoloRegion
            - conv2d_12/Conv2D/YoloRegion
    datasets:
      - name: ms_coco_detection_80_class_without_background
        annotation_conversion:
          converter: mscoco_detection
          annotation_file: /home/openvino/coco_dataset/annotations/instances_val2017.json
        data_source: /home/openvino/coco_dataset/val2017
        preprocessing:
          - type: resize
            size: 416
        postprocessing:
          - type: resize_prediction_boxes
          - type: filter
            apply_to: prediction
            min_confidence: 0.001
            remove_filtered: True
          - type: nms
            overlap: 0.5
          - type: clip_boxes
            apply_to: prediction
        metrics:
          - type: map
            integral: 11point
            ignore_difficult: true
            presenter: print_scalar
          - type: coco_precision
            max_detections: 100
            threshold: 0.5
```
### yolo-v3-tf-tiny-int8.json
```
{
    "model": {
        "model_name": "yolo-v3-tiny-tf",
        "model": "/home/openvino/openvino_models/public/yolo-v3-tiny-tf/FP16/yolo-v3-tiny-tf.xml",
        "weights": "/home/openvino/openvino_models/public/yolo-v3-tiny-tf/FP16/yolo-v3-tiny-tf.bin"
    },
    "engine": {
        "config": "/home/openvino/yolo-v3-tf-tiny-int8.yml"
    },
    "compression": {
        "algorithms": [
            {
                "name": "DefaultQuantization",
                "params": {
                    "preset": "performance",
                    "stat_subset_size": 300
                }
            }
        ]
    }
}
```
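A malformed config only surfaces after POT has already started, so a cheap pre-flight syntax check saves time; the stdlib `python3 -m json.tool` exits non-zero on broken JSON. A sketch on a throwaway stand-in file; in the container, run it against `yolo-v3-tf-tiny-int8.json` instead:

```shell
# Write a minimal stand-in config and syntax-check it; json.tool prints the
# parsed document on success and exits non-zero on a syntax error.
cfg=$(mktemp)
printf '{"compression": {"algorithms": [{"name": "DefaultQuantization"}]}}' > "$cfg"
python3 -m json.tool "$cfg" > /dev/null && echo "config OK"
```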
### accuracy_checker log
```
accuracy_check -c yolo-v3-tf-tiny-int8.yml -m /home/openvino/openvino_models/public/yolo-v3-tiny-tf/FP16 -d /opt/intel/openvino_2021.3.394/deployment_tools/open_model_zoo/tools/accuracy_checker/dataset_definitions.yml -ss 300
Processing info:
model: yolo_v3_tiny_tf
launcher: dlsdk
device: CPU
dataset: ms_coco_detection_80_class_without_background
OpenCV version: 4.5.2-openvino
Annotation conversion for ms_coco_detection_80_class_without_background dataset has been started
Parameters to be used for conversion:
converter: mscoco_detection
annotation_file: /home/openvino/coco_dataset/annotations/instances_val2017.json
Total annotations size: 5000
100 / 5000 processed in 0.523s
200 / 5000 processed in 0.475s
300 / 5000 processed in 0.458s
400 / 5000 processed in 0.450s
500 / 5000 processed in 0.441s
600 / 5000 processed in 0.443s
700 / 5000 processed in 0.438s
800 / 5000 processed in 0.437s
900 / 5000 processed in 0.437s
1000 / 5000 processed in 0.440s
1100 / 5000 processed in 0.444s
1200 / 5000 processed in 0.473s
1300 / 5000 processed in 0.472s
1400 / 5000 processed in 0.541s
1500 / 5000 processed in 0.514s
1600 / 5000 processed in 0.485s
1700 / 5000 processed in 0.475s
1800 / 5000 processed in 0.464s
1900 / 5000 processed in 0.462s
2000 / 5000 processed in 0.461s
2100 / 5000 processed in 0.457s
2200 / 5000 processed in 0.453s
2300 / 5000 processed in 0.452s
2400 / 5000 processed in 0.450s
2500 / 5000 processed in 0.448s
2600 / 5000 processed in 0.455s
2700 / 5000 processed in 0.449s
2800 / 5000 processed in 0.451s
2900 / 5000 processed in 0.445s
3000 / 5000 processed in 0.447s
3100 / 5000 processed in 0.442s
3200 / 5000 processed in 0.444s
3300 / 5000 processed in 0.443s
3400 / 5000 processed in 0.438s
3500 / 5000 processed in 0.443s
3600 / 5000 processed in 0.436s
3700 / 5000 processed in 0.437s
3800 / 5000 processed in 0.433s
3900 / 5000 processed in 0.433s
4000 / 5000 processed in 0.428s
4100 / 5000 processed in 0.427s
4200 / 5000 processed in 0.425s
4300 / 5000 processed in 0.422s
4400 / 5000 processed in 0.426s
4500 / 5000 processed in 0.419s
4600 / 5000 processed in 0.420s
4700 / 5000 processed in 0.415s
4800 / 5000 processed in 0.415s
4900 / 5000 processed in 0.413s
5000 / 5000 processed in 0.410s
5000 objects processed in 22.412 seconds
Annotation conversion for ms_coco_detection_80_class_without_background dataset has been finished
ms_coco_detection_80_class_without_background dataset metadata will be saved to mscoco_det_80.json
Converted annotation for ms_coco_detection_80_class_without_background dataset will be saved to mscoco_det_80.pickle
IE version: 2.1.2021.3.0-2787-60059f2c755-releases/2021/3
Loaded CPU plugin version:
CPU - MKLDNNPlugin: 2.1.2021.3.0-2787-60059f2c755-releases/2021/3
Found model /home/openvino/openvino_models/public/yolo-v3-tiny-tf/FP16/yolo-v3-tiny-tf.xml
Found weights /home/openvino/openvino_models/public/yolo-v3-tiny-tf/FP16/yolo-v3-tiny-tf.bin
Input info:
Layer name: image_input
precision: FP32
shape [1, 3, 416, 416]
Output info
Layer name: conv2d_12/Conv2D/YoloRegion
precision: FP32
shape: [1, 255, 26, 26]
Layer name: conv2d_9/Conv2D/YoloRegion
precision: FP32
shape: [1, 255, 13, 13]
04:20:46 accuracy_checker WARNING: /opt/intel/openvino/deployment_tools/open_model_zoo/tools/accuracy_checker/accuracy_checker/adapters/yolo.py:361: UserWarning: Number of output layers (2) not equal to detection grid size (3). Must be equal with each other, if output tensor resize is required
warnings.warn('Number of output layers ({}) not equal to detection grid size ({}). '
300 objects processed in 65.597 seconds
map: 42.17%
```
### pot log
```
pot -c yolo-v3-tf-tiny-int8.json -e
04:25:28 accuracy_checker WARNING: /usr/local/lib/python3.8/dist-packages/networkx/classes/graph.py:23: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
from collections import Mapping
04:25:28 accuracy_checker WARNING: /usr/local/lib/python3.8/dist-packages/networkx/classes/reportviews.py:95: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.9 it will stop working
from collections import Mapping, Set, Iterable
04:25:28 accuracy_checker WARNING: /opt/intel/openvino/deployment_tools/model_optimizer/extensions/back/ReverseInputChannels.py:112: DeprecationWarning: invalid escape sequence \
"""
04:25:28 accuracy_checker WARNING: /opt/intel/openvino/deployment_tools/model_optimizer/extensions/back/ReverseInputChannels.py:182: DeprecationWarning: invalid escape sequence \
"""
04:25:28 accuracy_checker WARNING: /opt/intel/openvino/deployment_tools/model_optimizer/extensions/back/ReverseInputChannels.py:283: DeprecationWarning: invalid escape sequence \
"""
04:25:28 accuracy_checker WARNING: /opt/intel/openvino/deployment_tools/model_optimizer/mo/front/tf/graph_utils.py:159: DeprecationWarning: invalid escape sequence \*
"""
04:25:28 accuracy_checker WARNING: /opt/intel/openvino/deployment_tools/model_optimizer/extensions/back/compress_quantized_weights.py:30: DeprecationWarning: invalid escape sequence \
"""
04:25:28 accuracy_checker WARNING: /opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/compression/algorithms/quantization/optimization/algorithm.py:41: UserWarning: Nevergrad package could not be imported. If you are planning to useany hyperparameter optimization algo, consider installing itusing pip. This implies advanced usage of the tool.Note that nevergrad is compatible only with Python 3.6+
warnings.warn(
04:25:28 accuracy_checker WARNING: /usr/local/lib/python3.8/dist-packages/past/builtins/misc.py:45: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
from imp import reload
INFO:app.run:Output log dir: ./results/yolo-v3-tiny-tf_DefaultQuantization/2021-05-24_04-25-29
INFO:app.run:Creating pipeline:
Algorithm: DefaultQuantization
Parameters:
preset : performance
stat_subset_size : 300
target_device : ANY
model_type : None
dump_intermediate_model : False
exec_log_dir : ./results/yolo-v3-tiny-tf_DefaultQuantization/2021-05-24_04-25-29
===========================================================================
IE version: 2.1.2021.3.0-2787-60059f2c755-releases/2021/3
Loaded CPU plugin version:
CPU - MKLDNNPlugin: 2.1.2021.3.0-2787-60059f2c755-releases/2021/3
04:25:29 accuracy_checker WARNING: /opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/libs/open_model_zoo/tools/accuracy_checker/accuracy_checker/adapters/yolo.py:361: UserWarning: Number of output layers (2) not equal to detection grid size (3). Must be equal with each other, if output tensor resize is required
warnings.warn('Number of output layers ({}) not equal to detection grid size ({}). '
Annotation conversion for ms_coco_detection_80_class_without_background dataset has been started
Parameters to be used for conversion:
converter: mscoco_detection
annotation_file: /home/openvino/coco_dataset/annotations/instances_val2017.json
Total annotations size: 5000
100 / 5000 processed in 0.524s
200 / 5000 processed in 0.480s
300 / 5000 processed in 0.463s
400 / 5000 processed in 0.455s
500 / 5000 processed in 0.465s
600 / 5000 processed in 0.457s
700 / 5000 processed in 0.446s
800 / 5000 processed in 0.446s
900 / 5000 processed in 0.443s
1000 / 5000 processed in 0.443s
1100 / 5000 processed in 0.443s
1200 / 5000 processed in 0.441s
1300 / 5000 processed in 0.447s
1400 / 5000 processed in 0.443s
1500 / 5000 processed in 0.445s
1600 / 5000 processed in 0.446s
1700 / 5000 processed in 0.445s
1800 / 5000 processed in 0.506s
1900 / 5000 processed in 0.533s
2000 / 5000 processed in 0.510s
2100 / 5000 processed in 0.489s
2200 / 5000 processed in 0.478s
2300 / 5000 processed in 0.472s
2400 / 5000 processed in 0.466s
2500 / 5000 processed in 0.468s
2600 / 5000 processed in 0.463s
2700 / 5000 processed in 0.462s
2800 / 5000 processed in 0.460s
2900 / 5000 processed in 0.456s
3000 / 5000 processed in 0.456s
3100 / 5000 processed in 0.455s
3200 / 5000 processed in 0.486s
3300 / 5000 processed in 0.470s
3400 / 5000 processed in 0.463s
3500 / 5000 processed in 0.454s
3600 / 5000 processed in 0.449s
3700 / 5000 processed in 0.445s
3800 / 5000 processed in 0.443s
3900 / 5000 processed in 0.440s
4000 / 5000 processed in 0.486s
4100 / 5000 processed in 0.488s
4200 / 5000 processed in 0.465s
4300 / 5000 processed in 0.458s
4400 / 5000 processed in 0.447s
4500 / 5000 processed in 0.442s
4600 / 5000 processed in 0.436s
4700 / 5000 processed in 0.433s
4800 / 5000 processed in 0.432s
4900 / 5000 processed in 0.429s
5000 / 5000 processed in 0.427s
5000 objects processed in 22.998 seconds
Annotation conversion for ms_coco_detection_80_class_without_background dataset has been finished
INFO:compression.statistics.collector:Start computing statistics for algorithms : DefaultQuantization
INFO:compression.statistics.collector:Computing statistics finished
INFO:compression.pipeline.pipeline:Start algorithm: DefaultQuantization
INFO:compression.algorithms.quantization.default.algorithm:Start computing statistics for algorithm : ActivationChannelAlignment
INFO:compression.algorithms.quantization.default.algorithm:Computing statistics finished
INFO:compression.algorithms.quantization.default.algorithm:Start computing statistics for algorithms : MinMaxQuantization,FastBiasCorrection
04:25:55 accuracy_checker WARNING: /opt/intel/openvino/deployment_tools/model_optimizer/mo/back/ie_ir_ver_2/emitter.py:243: DeprecationWarning: This method will be removed in future versions. Use 'list(elem)' or iteration over elem instead.
if len(element.attrib) == 0 and len(element.getchildren()) == 0:
INFO:compression.algorithms.quantization.default.algorithm:Computing statistics finished
INFO:compression.pipeline.pipeline:Finished: DefaultQuantization
===========================================================================
INFO:compression.pipeline.pipeline:Evaluation of generated model
INFO:compression.engines.ac_engine:Start inference on the whole dataset
Total dataset size: 5000
1000 / 5000 processed in 249.644s
2000 / 5000 processed in 254.044s
3000 / 5000 processed in 249.483s
4000 / 5000 processed in 250.975s
5000 / 5000 processed in 253.835s
5000 objects processed in 1257.981 seconds
INFO:compression.engines.ac_engine:Inference finished
INFO:app.run:map : 0.35264698005362616
INFO:app.run:coco_precision : 0.39186818146951863
```
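For a rough before/after picture: the FP16 model reported 42.17% mAP on a 300-sample subset (step 3), while the quantized model reports 35.26% mAP on the full 5000-image set, so the two figures are not measured under identical conditions; re-running `accuracy_check` without `-ss` on both IRs would give a fair comparison. Quick arithmetic on the nominal drop:

```shell
# Nominal mAP difference between the two logs above; the subset sizes
# differ (300 vs 5000), so treat this as indicative only.
fp16=42.17
int8=35.26
drop=$(awk -v a="$fp16" -v b="$int8" 'BEGIN { printf "%.2f", a - b }')
echo "mAP drop: $drop points"
```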