# Use POT to Quantize yolo-v5l-v5 Public Model
###### tags: `POT`
## Use the OpenVINO Docker Hub image
```bash=
docker run -it -v ~/Downloads:/mnt -u root --rm openvino/ubuntu20_data_dev:latest
```
## Run Accuracy Checker and POT
Run the following steps inside the `ubuntu20_data_dev` container.
#### 0. Download COCO 2017 trainval dataset and annotation
```bash=
cd /home/openvino
apt update
apt install -y git wget unzip
mkdir /home/openvino/coco_dataset
cd /home/openvino/coco_dataset
curl http://images.cocodataset.org/zips/val2017.zip -o val2017.zip
unzip val2017.zip
curl http://images.cocodataset.org/annotations/annotations_trainval2017.zip -o trainval_2017.zip
unzip trainval_2017.zip
```
##### /home/openvino/coco_dataset content
```
/home/openvino/coco_dataset/
|-- annotations/
|   |-- captions_train2017.json
|   |-- captions_val2017.json
|   |-- instances_train2017.json
|   |-- instances_val2017.json
|   |-- person_keypoints_train2017.json
|   `-- person_keypoints_val2017.json
`-- val2017/
    |-- 000000042102.jpg
    |-- 000000060102.jpg
    |-- 000000245102.jpg
    ...
    `-- 000000364102.jpg
```
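Before moving on, it is worth sanity-checking the download: the COCO 2017 validation split should contain 5000 images, and the instance annotations must be present. A minimal sketch, assuming the paths created above (`DATASET_DIR` is a hypothetical variable introduced here for convenience):

```shell
# Verify the COCO validation set downloaded and unpacked correctly.
DATASET_DIR=${DATASET_DIR:-/home/openvino/coco_dataset}

# Count validation images; a complete val2017 split has 5000 .jpg files.
echo "images: $(find "$DATASET_DIR/val2017" -name '*.jpg' 2>/dev/null | wc -l)"

# The Accuracy Checker config below points at this annotation file.
[ -f "$DATASET_DIR/annotations/instances_val2017.json" ] \
  && echo "annotations: OK" \
  || echo "annotations: missing (download incomplete?)"
```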
#### 1. Download and Convert yolo-v5 to IR
```bash=
# Download
cd /home/openvino
git clone https://github.com/ultralytics/yolov5 -b v5.0
cd yolov5/
pip install -r requirements.txt
pip install onnx
mkdir -p /home/openvino/openvino_models/public/yolo-v5l-v5-pt
wget https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5l.pt -O /home/openvino/openvino_models/public/yolo-v5l-v5-pt/yolov5l.pt
python3 models/export.py --weights /home/openvino/openvino_models/public/yolo-v5l-v5-pt/yolov5l.pt --img 640 --batch 1
# Convert to IR
mkdir -p /home/openvino/openvino_models/public/yolo-v5l-v5-pt/FP32
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model /home/openvino/openvino_models/public/yolo-v5l-v5-pt/yolov5l.onnx --model_name yolo-v5l-v5-pt -s 255 --reverse_input_channels --output Conv_403,Conv_419,Conv_435 --data_type FP32 --output_dir /home/openvino/openvino_models/public/yolo-v5l-v5-pt/FP32
mkdir /home/openvino/openvino_models/public/yolo-v5l-v5-pt/FP16
python3 /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model /home/openvino/openvino_models/public/yolo-v5l-v5-pt/yolov5l.onnx --model_name yolo-v5l-v5-pt -s 255 --reverse_input_channels --output Conv_403,Conv_419,Conv_435 --data_type FP16 --output_dir /home/openvino/openvino_models/public/yolo-v5l-v5-pt/FP16
```
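The output node names passed to `--output` (`Conv_403`, `Conv_419`, `Conv_435`) are specific to the v5.0 export; a different yolov5 tag or ONNX opset can shift the numbering. One way to confirm them is to grep the generated IR `.xml` for `Convolution` layer names. The sketch below runs against a stub file for illustration; on a real run you would grep the `yolo-v5l-v5-pt.xml` produced above instead:

```shell
# Stub IR fragment standing in for the generated yolo-v5l-v5-pt.xml.
cat > /tmp/stub_ir.xml <<'EOF'
<layer id="500" name="Conv_403" type="Convolution"/>
<layer id="520" name="Conv_419" type="Convolution"/>
<layer id="540" name="Conv_435" type="Convolution"/>
EOF

# List Convolution layer names; on the real IR the last three are the detection heads.
grep -o 'name="Conv_[0-9]*"' /tmp/stub_ir.xml
```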
#### 2. Run Accuracy Checker on yolo-v5
```bash=
cd /home/openvino
accuracy_check -c /home/openvino/openvino_models/public/yolo-v5l-v5-pt/yolo-v5l-v5-int8.yml -m /home/openvino/openvino_models/public/yolo-v5l-v5-pt/FP16/yolo-v5l-v5-pt.xml -d /opt/intel/openvino_2021/deployment_tools/open_model_zoo/data/dataset_definitions.yml -ss 300
```
#### 3. Run POT on yolo-v5
```bash=
cd /home/openvino/
pot -c /home/openvino/openvino_models/public/yolo-v5l-v5-pt/yolo-v5l-v5-int8.json -e
```
#### 4. Copy yolo-v5 FP16-INT8 IR
```bash=
mkdir /home/openvino/openvino_models/public/yolo-v5l-v5-pt/FP16-INT8
# The timestamped directory name comes from your own POT run; adjust it accordingly.
cp -ar results/yolo_v5s-v5_DefaultQuantization/2021-09-01_13-06-53/optimized/* /home/openvino/openvino_models/public/yolo-v5l-v5-pt/FP16-INT8/
```
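Since the timestamped results directory changes on every run, hard-coding it is fragile. A hedged alternative is to pick the newest subdirectory with `ls -td`. The demo below runs against throwaway directories under `/tmp/pot_demo` (hypothetical paths, not POT's real output) to show the selection logic:

```shell
# Demonstrate selecting the most recent run directory.
mkdir -p /tmp/pot_demo/results/run_a/optimized
sleep 1   # ensure run_b gets a strictly newer modification time
mkdir -p /tmp/pot_demo/results/run_b/optimized

# ls -td sorts directories by modification time, newest first.
latest=$(ls -td /tmp/pot_demo/results/*/ | head -n 1)
echo "$latest"   # copy the quantized IR from "$latest/optimized/"
```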
#### Note: Copy the openvino_models folder to /mnt. It will then be accessible in ~/Downloads on the host and /mnt in the container.
## Reference
```
drwxr-xr-x 4 root root 4096 Sep 1 12:46 coco_dataset
drwxr-xr-x 4 root root 4096 Sep 1 12:52 openvino_models
drwxr-xr-x 3 root root 4096 Sep 1 13:06 results
drwxr-xr-x 8 root root 4096 Sep 1 12:47 yolov5
openvino_models/
|-- public
| `-- yolo-v5l-v5-pt
| |-- FP16
| | |-- yolo-v5l-v5-pt.bin
| | |-- yolo-v5l-v5-pt.mapping
| | `-- yolo-v5l-v5-pt.xml
| |-- FP16-INT8
| | |-- yolo_v5s-v5.bin
| | |-- yolo_v5s-v5.mapping
| | `-- yolo_v5s-v5.xml
| |-- FP32
| | |-- yolo-v5l-v5-pt.bin
| | |-- yolo-v5l-v5-pt.mapping
| | `-- yolo-v5l-v5-pt.xml
| |-- yolo-v5l-v5-int8.json
| |-- yolo-v5l-v5-int8.yml
| |-- yolov5l.onnx
| |-- yolov5l.pt
| `-- yolov5l.torchscript.pt
```
### yolo-v5l-v5-int8.yml
```
models:
  - name: yolo_v5
    launchers:
      - framework: dlsdk        # OpenVINO launcher
        device: CPU             # Device used for inference
        adapter:                # Converts raw YOLO v5 output to DetectionPrediction for postprocessing and metrics
          type: yolo_v5         # Adapter name
          anchors: 10,13,16,30,33,23,
                   30,61,62,45,59,119,
                   116,90,156,198,373,326   # Anchor values as a comma-separated list
          num: 3                # num parameter from the DarkNet configuration file
          coords: 4             # Number of bbox coordinates
          classes: 80           # Number of detection classes
          anchor_masks: [[0, 1, 2],
                         [3, 4, 5],
                         [6, 7, 8]]   # Anchor mask for each output layer
          raw_output: True      # Enables extra preprocessing for the raw YOLO output format
          cells: [80, 40, 20]   # Grid size for each layer, in the same order as the outputs field
          outputs:              # Output layer names; the order must be consistent with cells
            - Conv_403
            - Conv_419
            - Conv_435
    datasets:
      - name: ms_coco_detection_80_class_without_background  # Unique dataset identifier in the global dataset definitions
        annotation_conversion:
          converter: mscoco_detection
          annotation_file: /home/openvino/coco_dataset/annotations/instances_val2017.json
        data_source: /home/openvino/coco_dataset/val2017
        subsample_size: 300
        preprocessing:          # Preprocessing steps applied to input data before inference
          - type: resize        # Resize the image to a new width and height
            size: 640
            aspect_ratio_scale: fit_to_window
          - type: padding       # Pad the image
            dst_height: 640
            dst_width: 640
            pad_value: 114,114,114
            pad_type: right_bottom
        postprocessing:         # Postprocessing steps applied to output data after inference
          - type: faster_rcnn_postprocessing_resize  # Rescale normalized detection boxes to the original image size
            dst_height: 640
            dst_width: 640
          - type: filter        # Filter detections
            apply_to: prediction
            min_confidence: 0.001
            remove_filtered: true
          - type: nms           # Non-maximum suppression
            overlap: 0.65
          - type: clip_boxes    # Clip detection bounding box sizes
            apply_to: prediction
        metrics:                # Compare predictions against annotations
          - name: AP@0.5        # MS COCO metric: average precision at a single IoU of 0.50
            type: coco_precision
            max_detections: 100
            threshold: 0.5
          - name: AP@0.5:0.05:95  # MS COCO metric: average precision over 10 IoU thresholds (0.50, 0.55, ..., 0.95)
            type: coco_precision
            max_detections: 100
            threshold: '0.5:0.05:0.95'
```
### yolo-v5l-v5-int8.json
```
{
    "model": {
        "model_name": "yolo_v5s-v5",
        "model": "/home/openvino/openvino_models/public/yolo-v5l-v5-pt/FP16/yolo-v5l-v5-pt.xml",
        "weights": "/home/openvino/openvino_models/public/yolo-v5l-v5-pt/FP16/yolo-v5l-v5-pt.bin"
    },
    "engine": {
        "config": "/home/openvino/openvino_models/public/yolo-v5l-v5-pt/yolo-v5l-v5-int8.yml"
    },
    "compression": {
        "target_device": "CPU",
        "algorithms": [
            {
                "name": "DefaultQuantization",
                "params": {
                    "preset": "mixed",
                    "stat_subset_size": 300
                }
            }
        ]
    }
}
```
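Before invoking `pot`, it is cheap to confirm the config file exists at the path you pass with `-c` and parses as JSON; a bad path or a stray comma otherwise only surfaces as a traceback at startup. A minimal sketch using `python3 -m json.tool` against an inline copy (`/tmp/pot_config_check.json` is a stand-in; the real file is the one referenced in step 3):

```shell
# Write a minimal POT-style config and validate it with Python's JSON parser.
cat > /tmp/pot_config_check.json <<'EOF'
{
  "model": {"model_name": "yolo_v5s-v5"},
  "compression": {"target_device": "CPU"}
}
EOF

# json.tool exits non-zero on a parse error, so this echoes only for valid JSON.
python3 -m json.tool /tmp/pot_config_check.json > /dev/null && echo "valid JSON"
```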
### accuracy_checker log
```
accuracy_check -c /home/openvino/openvino_models/public/yolo-v5l-v5-pt/yolo-v5l-v5-int8.yml -m /home/openvino/openvino_models/public/yolo-v5l-v5-pt/FP16/yolo-v5l-v5-pt.xml -d /opt/intel/openvino_2021/deployment_tools/open_model_zoo/data/dataset_definitions.yml -ss 300
Processing info:
model: yolo_v5
launcher: dlsdk
device: CPU
dataset: ms_coco_detection_80_class_without_background
OpenCV version: 4.5.3-openvino
Annotation conversion for ms_coco_detection_80_class_without_background dataset has been started
Parameters to be used for conversion:
converter: mscoco_detection
annotation_file: /home/openvino/coco_dataset/annotations/instances_val2017.json
Total annotations size: 5000
100 / 5000 processed in 0.516s
200 / 5000 processed in 0.476s
300 / 5000 processed in 0.461s
...
5000 / 5000 processed in 0.420s
5000 objects processed in 22.708 seconds
Annotation conversion for ms_coco_detection_80_class_without_background dataset has been finished
ms_coco_detection_80_class_without_background dataset metadata will be saved to mscoco_det_80.json
Converted annotation for ms_coco_detection_80_class_without_background dataset will be saved to mscoco_det_80.pickle
IE version: 2021.4.0-3839-cd81789d294-releases/2021/4
Loaded CPU plugin version:
CPU - MKLDNNPlugin: 2.1.2021.4.0-3839-cd81789d294-releases/2021/4
Found model /home/openvino/openvino_models/public/yolo-v5l-v5-pt/FP16/yolo-v5l-v5-pt.xml
Found weights /home/openvino/openvino_models/public/yolo-v5l-v5-pt/FP16/yolo-v5l-v5-pt.bin
Input info:
Layer name: images
precision: FP32
shape [1, 3, 640, 640]
Output info
Layer name: Conv_403
precision: FP32
shape: [1, 255, 80, 80]
Layer name: Conv_419
precision: FP32
shape: [1, 255, 40, 40]
Layer name: Conv_435
precision: FP32
shape: [1, 255, 20, 20]
12:57:28 accuracy_checker WARNING: /opt/intel/openvino/deployment_tools/open_model_zoo/tools/accuracy_checker/accuracy_checker/metrics/metric_executor.py:151: DeprecationWarning: threshold option is deprecated. Please use abs_threshold instead
warnings.warn('threshold option is deprecated. Please use abs_threshold instead', DeprecationWarning)
300 objects processed in 379.103 seconds
AP@0.5: 71.58%
AP@0.5:0.05:95: 52.53%
```
### pot log
```
pot -c /home/openvino/openvino_models/public/yolo-v5l-v5-pt/yolo-v5l-v5-int8.json -e
13:06:53 accuracy_checker WARNING: /usr/local/lib/python3.8/dist-packages/defusedxml/__init__.py:30: DeprecationWarning: defusedxml.cElementTree is deprecated, import from defusedxml.ElementTree instead.
from . import cElementTree
13:06:53 accuracy_checker WARNING: /opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/compression/algorithms/quantization/optimization/algorithm.py:38: UserWarning: Nevergrad package could not be imported. If you are planning to useany hyperparameter optimization algo, consider installing itusing pip. This implies advanced usage of the tool.Note that nevergrad is compatible only with Python 3.6+
warnings.warn(
13:06:53 accuracy_checker WARNING: /usr/local/lib/python3.8/dist-packages/past/builtins/misc.py:45: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
from imp import reload
INFO:app.run:Output log dir: ./results/yolo_v5s-v5_DefaultQuantization/2021-09-01_13-06-53
INFO:app.run:Creating pipeline:
Algorithm: DefaultQuantization
Parameters:
preset : mixed
stat_subset_size : 300
target_device : CPU
model_type : None
dump_intermediate_model : False
exec_log_dir : ./results/yolo_v5s-v5_DefaultQuantization/2021-09-01_13-06-53
===========================================================================
IE version: 2021.4.0-3839-cd81789d294-releases/2021/4
Loaded CPU plugin version:
CPU - MKLDNNPlugin: 2.1.2021.4.0-3839-cd81789d294-releases/2021/4
Annotation conversion for ms_coco_detection_80_class_without_background dataset has been started
Parameters to be used for conversion:
converter: mscoco_detection
annotation_file: /home/openvino/coco_dataset/annotations/instances_val2017.json
Total annotations size: 5000
100 / 5000 processed in 0.526s
200 / 5000 processed in 0.477s
300 / 5000 processed in 0.461s
...
5000 / 5000 processed in 0.426s
5000 objects processed in 22.749 seconds
Annotation conversion for ms_coco_detection_80_class_without_background dataset has been finished
13:07:18 accuracy_checker WARNING: /opt/intel/openvino/deployment_tools/tools/post_training_optimization_toolkit/libs/open_model_zoo/tools/accuracy_checker/accuracy_checker/metrics/metric_executor.py:151: DeprecationWarning: threshold option is deprecated. Please use abs_threshold instead
warnings.warn('threshold option is deprecated. Please use abs_threshold instead', DeprecationWarning)
INFO:compression.statistics.collector:Start computing statistics for algorithms : DefaultQuantization
INFO:compression.statistics.collector:Computing statistics finished
INFO:compression.pipeline.pipeline:Start algorithm: DefaultQuantization
INFO:compression.algorithms.quantization.default.algorithm:Start computing statistics for algorithm : ActivationChannelAlignment
INFO:compression.algorithms.quantization.default.algorithm:Computing statistics finished
INFO:compression.algorithms.quantization.default.algorithm:Start computing statistics for algorithms : MinMaxQuantization,FastBiasCorrection
INFO:compression.algorithms.quantization.default.algorithm:Computing statistics finished
INFO:compression.pipeline.pipeline:Finished: DefaultQuantization
===========================================================================
INFO:compression.pipeline.pipeline:Evaluation of generated model
INFO:compression.engines.ac_engine:Start inference on the whole dataset
Total dataset size: 300
300 objects processed in 336.181 seconds
INFO:compression.engines.ac_engine:Inference finished
INFO:app.run:AP@0.5 : 0.7210738061649291
INFO:app.run:AP@0.5:0.05:95 : 0.5204118142999177
```