---
title: How to use ssd_mobilenet_v2_coco for OpenVINO
tags: ROS
description: How to use ssd_mobilenet_v2_coco for OpenVINO
---
:::success
<font size=6>
How to use ssd_mobilenet_v2_coco for OpenVINO
</font>
:::
[TOC]
<br/>
<br/>
#### object_detection (ssd_mobilenet_v2_coco)
* Set up the ROS 2 and OpenVINO environment
Command:
```
source /opt/ros/foxy/setup.bash
source /opt/intel/openvino_2021/bin/setupvars.sh
cd ~/my_ros2_ws
source ./install/local_setup.bash
```
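A quick way to confirm both environments are active is to inspect the variables the two setup scripts export; a minimal check, assuming the install paths above:
```
# ROS 2 setup.bash exports ROS_DISTRO; OpenVINO's setupvars.sh exports
# InferenceEngine_DIR (among others) -- both should be non-empty
echo $ROS_DISTRO              # expected: foxy
echo $InferenceEngine_DIR     # expected: a path under /opt/intel/openvino_2021...
```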
* Download the model with the Open Model Zoo downloader (execute once). For ssd_mobilenet_v2_coco this fetches the original TensorFlow model, which is converted to the Intermediate Representation (IR) in the next step:
Command:
```
cd /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader
sudo python3 downloader.py --name ssd_mobilenet_v2_coco --output_dir /opt/openvino_toolkit/models
```
Result:
```
################|| Downloading ssd_mobilenet_v2_coco ||################
========== Downloading /opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/ssd_mobilenet_v2_coco.tar.gz
... 100%, 183521 KB, 4719 KB/s, 38 seconds passed
========== Unpacking /opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/ssd_mobilenet_v2_coco.tar.gz
```
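The archive unpacks into a versioned subdirectory; listing it should show the frozen graph and pipeline config that the converter consumes in the next step (paths as printed in the log above):
```
ls /opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/ssd_mobilenet_v2_coco_2018_03_29/
# expected contents include frozen_inference_graph.pb and pipeline.config
```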
* If the model comes from one of the frameworks supported by the Model Optimizer (TensorFlow, Caffe, MXNet, ONNX, Kaldi), it needs to be converted to the Intermediate Representation, as this object detection model does.
(Note: TensorFlow 1.15.5, Python <= 3.7)
Command:
```
sudo python3 /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader/converter.py --name=ssd_mobilenet_v2_coco --mo /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py
```
Result:
```
========== Converting ssd_mobilenet_v2_coco to IR (FP16)
Conversion command: /usr/bin/python3 -- /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py --framework=tf --data_type=FP16 --output_dir=/opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/FP16 --model_name=ssd_mobilenet_v2_coco --reverse_input_channels '--input_shape=[1,300,300,3]' --input=image_tensor --transformations_config=/opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config=/opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/ssd_mobilenet_v2_coco_2018_03_29/pipeline.config --output=detection_classes,detection_scores,detection_boxes,num_detections --input_model=/opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb
- Path for generated IR: /opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/FP16
- IR output name: ssd_mobilenet_v2_coco
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: image_tensor
- Output layers: detection_classes,detection_scores,detection_boxes,num_detections
- Input shapes: [1,300,300,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP16
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: True
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: /opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/ssd_mobilenet_v2_coco_2018_03_29/pipeline.config
- Use the config file: None
- Inference Engine found in: /opt/intel/openvino_2021.4.752/python/python3.8/openvino
Inference Engine version: 2021.4.2-3974-e2a469a3450-releases/2021/4
Model Optimizer version: 2021.4.2-3974-e2a469a3450-releases/2021/4
2023-01-31 17:19:42.205860: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/opt/intel/openvino_2021.4.752/deployment_tools/model_optimizer/mo/utils/../../../inference_engine/lib/intel64:/opt/intel/openvino_2021.4.752/deployment_tools/model_optimizer/mo/utils/../../../inference_engine/external/tbb/lib:/opt/intel/openvino_2021.4.752/deployment_tools/model_optimizer/mo/utils/../../../ngraph/lib
2023-01-31 17:19:42.206049: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/FP16/ssd_mobilenet_v2_coco.xml
[ SUCCESS ] BIN file: /opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/FP16/ssd_mobilenet_v2_coco.bin
[ SUCCESS ] Total execution time: 67.03 seconds.
[ SUCCESS ] Memory consumed: 670 MB.
It's been a while, check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&source=prod&campid=ww_2022_bu_IOTG_OpenVINO-2022-1&content=upg_all&medium=organic or on the GitHub*
========== Converting ssd_mobilenet_v2_coco to IR (FP32)
Conversion command: /usr/bin/python3 -- /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py --framework=tf --data_type=FP32 --output_dir=/opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/FP32 --model_name=ssd_mobilenet_v2_coco --reverse_input_channels '--input_shape=[1,300,300,3]' --input=image_tensor --transformations_config=/opt/intel/openvino_2021/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config=/opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/ssd_mobilenet_v2_coco_2018_03_29/pipeline.config --output=detection_classes,detection_scores,detection_boxes,num_detections --input_model=/opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb
- Path for generated IR: /opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/FP32
- IR output name: ssd_mobilenet_v2_coco
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: image_tensor
- Output layers: detection_classes,detection_scores,detection_boxes,num_detections
- Input shapes: [1,300,300,3]
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: None
- Reverse input channels: True
TensorFlow specific parameters:
- Input model in text protobuf format: False
- Path to model dump for TensorBoard: None
- List of shared libraries with TensorFlow custom layers implementation: None
- Update the configuration file with input/output node names: None
- Use configuration file used to generate the model with Object Detection API: /opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/ssd_mobilenet_v2_coco_2018_03_29/pipeline.config
- Use the config file: None
- Inference Engine found in: /opt/intel/openvino_2021.4.752/python/python3.8/openvino
Inference Engine version: 2021.4.2-3974-e2a469a3450-releases/2021/4
Model Optimizer version: 2021.4.2-3974-e2a469a3450-releases/2021/4
2023-01-31 17:20:50.099183: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: :/opt/intel/openvino_2021.4.752/deployment_tools/model_optimizer/mo/utils/../../../inference_engine/lib/intel64:/opt/intel/openvino_2021.4.752/deployment_tools/model_optimizer/mo/utils/../../../inference_engine/external/tbb/lib:/opt/intel/openvino_2021.4.752/deployment_tools/model_optimizer/mo/utils/../../../ngraph/lib
2023-01-31 17:20:50.099232: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
/usr/local/lib/python3.8/dist-packages/tensorflow/python/autograph/impl/api.py:22: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.
[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/FP32/ssd_mobilenet_v2_coco.xml
[ SUCCESS ] BIN file: /opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/FP32/ssd_mobilenet_v2_coco.bin
[ SUCCESS ] Total execution time: 65.57 seconds.
[ SUCCESS ] Memory consumed: 671 MB.
It's been a while, check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&source=prod&campid=ww_2022_bu_IOTG_OpenVINO-2022-1&content=upg_all&medium=organic or on the GitHub*
```
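Before wiring the model into the pipeline, it is worth verifying that both precisions were actually generated; the file names below are taken from the conversion log:
```
ls -lh /opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/FP16/ \
       /opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/FP32/
# each directory should hold ssd_mobilenet_v2_coco.xml (topology)
# and ssd_mobilenet_v2_coco.bin (weights)
```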
* Copy the label files (execute once)
Command:
```
sudo cp /opt/intel/openvino_2021/deployment_tools/open_model_zoo/data/dataset_classes/coco_91cl.txt /opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/FP16/
sudo cp /opt/intel/openvino_2021/deployment_tools/open_model_zoo/data/dataset_classes/coco_91cl.txt /opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/FP32/
```
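The label file maps the model's numeric class IDs to COCO class names; a quick sanity check that the copy worked:
```
head -n 5 /opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/FP16/coco_91cl.txt
# the first entries should be COCO classes (person, bicycle, car, ...)
```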
<br/>
* Before launching, check the parameter configuration in ros2_openvino_toolkit/sample/param/xxxx.yaml and make sure parameters such as the model path, label path, and inputs are correct.
Command:
```
cd ~/my_ros2_ws/src/ros2_openvino_toolkit/sample/param
cat pipeline_object.yaml
```
Result:
```
Pipelines:
- name: object
inputs: [StandardCamera]
infers:
- name: ObjectDetection
model: /opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/FP16/ssd_mobilenet_v2_coco.xml
engine: CPU
label: /opt/openvino_toolkit/models/public/ssd_mobilenet_v2_coco/FP16/coco_91cl.txt
batch: 1
confidence_threshold: 0.5
enable_roi_constraint: true # set enable_roi_constraint to false if you don't want to make the inferred ROI (region of interest) constrained into the camera frame
outputs: [ImageWindow, RosTopic]
connects:
- left: StandardCamera
right: [ObjectDetection]
- left: ObjectDetection
right: [ImageWindow]
- left: ObjectDetection
right: [RosTopic]
OpenvinoCommon:
```
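The yaml above points at the FP16 IR; to try the FP32 IR generated earlier instead, only the `model:` and `label:` paths need to change. A minimal sketch of an in-place edit, run from the same param directory as above (back up the file first if unsure):
```
# point the object pipeline at the FP32 IR instead of FP16
sed -i 's|/FP16/|/FP32/|g' pipeline_object.yaml
```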
* Run the object detection sample with input from StandardCamera.
Command:
```
cd ~/my_ros2_ws/src/ros2_openvino_toolkit/sample/launch/
ros2 launch dynamic_vino_sample pipeline_object.launch.py
```
Result:

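With the pipeline running, the detections are also published on ROS 2 topics; the exact topic names depend on the sample's output configuration, so list them first (the echoed topic name below is an assumption, confirm it against the list):
```
ros2 topic list
# then echo the detection topic reported by the list, for example:
ros2 topic echo /ros2_openvino_toolkit/detected_objects   # name is an assumption
```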
* Run RViz2 to check the ROS output
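RViz2 (the ROS 2 version of RViz) can display the annotated image stream; a minimal sketch, where the image topic name is an assumption to be confirmed with `ros2 topic list`:
```
ros2 run rviz2 rviz2
# in RViz2: Add -> Image, then set the Image Topic to the pipeline's
# published image (e.g. /ros2_openvino_toolkit/image_rviz -- verify the name)
```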
