# Set Up Neuron PI for OpenVINO iGPU and HDDL/VPU Inference
[TOC]
## GPU Installation Guide
### Details can be found in the online documentation
https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_linux.html#additional-GPU-steps
### Steps
```
cd /opt/intel/openvino_2021/install_dependencies/
sudo -E su
./install_NEO_OCL_driver.sh
```
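After the installer finishes, it is worth double-checking the group membership it warns about (the "render group does not exist" warning in the log below is harmless on older kernels). A minimal sketch, assuming the login user is the one that will run OpenCL host applications:

```shell
# Check whether the current user is in the groups the OpenCL runtime needs.
# The "render" group may not exist on older kernels, as the installer warns.
user=${USER:-$(id -un)}
report=""
for g in video render; do
  if id -nG "$user" | grep -qw "$g"; then
    line="$user is in group $g"
  else
    line="$user is NOT in group $g -- add with: sudo usermod -a -G $g $user"
  fi
  echo "$line"
  report="$report$line\n"
done
```

Note that group changes only take effect after logging out and back in.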
### Terminal logs
```
[setupvars.sh] OpenVINO environment initialized
ros@ros-LEC-ALAI:~$ cd /opt/intel/openvino_2021/install_dependencies/
ros@ros-LEC-ALAI:/opt/intel/openvino_2021/install_dependencies$ sudo -E su
[sudo] password for ros:
root@ros-LEC-ALAI:/opt/intel/openvino_2021/install_dependencies# ./install_NEO_OCL_driver.sh
Intel® Graphics Compute Runtime for OpenCL™ Driver installer
Checking current driver version...
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
E: No packages found
Checking processor generation...
Starting installation...
This script will download and install Intel(R) Graphics Compute Runtime 19.41.14441,
that was used to validate this OpenVINO™ package.
In case if you already have the driver - script will try to remove it.
Want to proceed? (y/n): y
...
Adding ros to the video group...
Adding ros to the render group...
usermod: group 'render' does not exist
WARNING: unable to add ros to the render group
Installation completed successfully.
Next steps:
Add OpenCL users to the video and render group: 'sudo usermod -a -G video,render USERNAME'
e.g. if the user running OpenCL host applications is foo, run: sudo usermod -a -G video,render foo
Current user has been already added to the video and render group
If you use 8th Generation Intel® Core™ processor, add:
i915.alpha_support=1
to the 4.14 kernel command line, in order to enable OpenCL functionality for this platform.
root@ros-LEC-ALAI:/opt/intel/openvino_2021/install_dependencies#
```
## HDDL Installation Guide
### Details can be found in the online documentation
https://docs.openvinotoolkit.org/latest/openvino_docs_install_guides_installing_openvino_linux_ivad_vpu.html
### Steps
```
${HDDL_INSTALL_DIR}/install_IVAD_VPU_dependencies.sh
```
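After the reboot the installer asks for, you can quickly confirm that the dependencies landed. This is a sketch; the udev rule names and the `myd_vsc` module are taken from the installer log below:

```shell
# Confirm the udev rules and the myd_vsc kernel module the installer set up.
status=""
for f in /etc/udev/rules.d/97-myriad-usbboot.rules \
         /etc/udev/rules.d/99-hddl-ion.rules \
         /etc/udev/rules.d/99-myriad-vsc.rules; do
  if [ -e "$f" ]; then line="OK: $f"; else line="MISSING: $f"; fi
  echo "$line"
  status="$status$line\n"
done
# /proc/modules lists loaded kernel modules (avoids depending on lsmod).
if [ -r /proc/modules ] && grep -qw myd_vsc /proc/modules; then
  echo "myd_vsc module loaded"
else
  echo "myd_vsc module not loaded (reboot, or try: sudo modprobe myd_vsc)"
fi
```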
### Terminal logs
```
[setupvars.sh] OpenVINO environment initialized
ros@ros-LEC-ALAI:~$ ${HDDL_INSTALL_DIR}/install_IVAD_VPU_dependencies.sh
Ubuntu18.04
[sudo] password for ros:
Reading package lists... Done
Building dependency tree
...
sudo depmod
filename: /lib/modules/5.4.0-66-generic/kernel/drivers/myd/myd_vsc.ko
license: GPL
srcversion: 43AE0E7C58AE1F604AA1D61
alias: usb:v03E7pF63Bd*dc*dsc*dp*ic*isc*ip*in*
depends:
retpoline: Y
name: myd_vsc
vermagic: 5.4.0-66-generic SMP mod_unload
mkdir -p /etc/modules-load.d/
/etc/modules-load.d/myd_vsc.conf is created for auto-load at boot time
'/opt/intel/openvino_2021/deployment_tools/inference_engine/external/hddl/../97-myriad-usbboot.rules' -> '/etc/udev/rules.d/97-myriad-usbboot.rules'
'/opt/intel/openvino_2021/deployment_tools/inference_engine/external/hddl/etc/udev/rules.d/97-myriad-usbboot.rules' -> '/etc/udev/rules.d/97-myriad-usbboot.rules'
'/opt/intel/openvino_2021/deployment_tools/inference_engine/external/hddl/etc/udev/rules.d/99-hddl-ion.rules' -> '/etc/udev/rules.d/99-hddl-ion.rules'
'/opt/intel/openvino_2021/deployment_tools/inference_engine/external/hddl/etc/udev/rules.d/99-myriad-vsc.rules' -> '/etc/udev/rules.d/99-myriad-vsc.rules'
=======================================
Install HDDL dependencies sucessful
Please reboot
```
## Download and Convert mobilenet-v2-1.0-224 from TensorFlow to IR Files
```
python3 /opt/intel/openvino_2021/deployment_tools/tools/model_downloader/downloader.py --name mobilenet-v2-1.0-224
python3 /opt/intel/openvino_2021/deployment_tools/tools/model_downloader/converter.py --name mobilenet-v2-1.0-224
```
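The converter writes the IR as an `.xml`/`.bin` pair. A quick sanity check that both halves exist (a sketch; the `public/...` path assumes you run it from the same directory where `downloader.py` was invoked, matching the benchmark commands below):

```shell
# Check that the converter produced both halves of the FP16 IR.
model=public/mobilenet-v2-1.0-224/FP16/mobilenet-v2-1.0-224
found=0
for ext in xml bin; do
  if [ -f "$model.$ext" ]; then
    echo "found: $model.$ext"
    found=$((found + 1))
  else
    echo "missing: $model.$ext"
  fi
done
echo "$found of 2 IR files present"
```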
## Run Inference on GPU
```
python3 /opt/intel/openvino_2021/deployment_tools/tools/benchmark_tool/benchmark_app.py -m public/mobilenet-v2-1.0-224/FP16/mobilenet-v2-1.0-224.xml -t 1 -d GPU
```
### Terminal logs
```
```
## Run Inference on HDDL
```
python3 /opt/intel/openvino_2021/deployment_tools/tools/benchmark_tool/benchmark_app.py -m public/mobilenet-v2-1.0-224/FP16/mobilenet-v2-1.0-224.xml -t 1 -d HDDL
```
### Terminal logs
```
ros@ros-LEC-ALAI:~$ python3 /opt/intel/openvino_2021/deployment_tools/tools/benchmark_tool/benchmark_app.py -m public/mobilenet-v2-1.0-224/FP16/mobilenet-v2-1.0-224.xml -t 1 -d HDDL
[Step 1/11] Parsing and validating input arguments
/opt/intel/openvino_2021/python/python3.6/openvino/tools/benchmark/main.py:29: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
logger.warn(" -nstreams default value is determined automatically for a device. "
[ WARNING ] -nstreams default value is determined automatically for a device. Although the automatic selection usually provides a reasonable performance, but it still may be non-optimal for some cases, for more information look at README.
[Step 2/11] Loading Inference Engine
[ INFO ] InferenceEngine:
API version............. 2.1.2021.2.0-1877-176bdf51370-releases/2021/2
[ INFO ] Device info
HDDL
HDDLPlugin.............. version 2.1
Build................... 2021.2.0-1877-176bdf51370-releases/2021/2
[Step 3/11] Setting device configuration
[Step 4/11] Reading network files
[ INFO ] Read network took 103.52 ms
[Step 5/11] Resizing network to match image sizes and given batch
[ INFO ] Network batch size: 1
[Step 6/11] Configuring input of the model
[Step 7/11] Loading the model to the device
[10:08:23.1284][3524]I[main.cpp:246] ## HDDL_INSTALL_DIR: /opt/intel/openvino_2021/deployment_tools/inference_engine/external/hddl
[10:08:23.1290][3524]I[main.cpp:248] Config file '/opt/intel/openvino_2021/deployment_tools/inference_engine/external/hddl/config/hddl_service.config' has been loaded
[10:08:23.1302][3524]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_service_alive.mutex owner: user-'no_change', group-'users', mode-'0660'
[10:08:23.1305][3524]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_service_ready.mutex owner: user-'no_change', group-'users', mode-'0660'
[10:08:23.1307][3524]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_service_failed.mutex owner: user-'no_change', group-'users', mode-'0660'
[10:08:23.1309][3524]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_start_exit.mutex owner: user-'no_change', group-'users', mode-'0660'
[10:08:23.1342][3524]I[AutobootStarter.cpp:156] Info: No running autoboot process. Start autoboot daemon...
[10:08:23.1616][3526]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_autoboot_alive.mutex owner: user-'no_change', group-'users', mode-'0660'
[10:08:23.1620][3526]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_autoboot_ready.mutex owner: user-'no_change', group-'users', mode-'0660'
[10:08:23.1622][3526]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_autoboot_start_exit.mutex owner: user-'no_change', group-'users', mode-'0660'
[10:08:23.1624][3526]I[FileHelper.cpp:272] Set file:/tmp/hddl_autoboot_device.map owner: user-'no_change', group-'users', mode-'0660'
[10:08:23.1630][3526]I[AutoBoot.cpp:308] [Firmware Config] deviceName=default deviceNum=0 firmwarePath=/opt/intel/openvino_2021/deployment_tools/inference_engine/external/hddl/lib/mvnc/usb-ma2x8x.mvcmd
[10:08:24.5007][3536]I[AutoBoot.cpp:197] Start boot device 1.8-ma2480
[10:08:24.7864][3536]I[AutoBoot.cpp:199] Device 1.8-ma2480 boot success, firmware=/opt/intel/openvino_2021/deployment_tools/inference_engine/external/hddl/lib/mvnc/usb-ma2x8x.mvcmd
[10:08:44.7894][3524]I[AutobootStarter.cpp:85] Info: Autoboot is running.
[10:08:44.8340][3524]W[ConfigParser.cpp:269] Warning: Cannot find key, path=scheduler_config.max_graph_per_device subclass=0, use default value: 1.
[10:08:44.8343][3524]W[ConfigParser.cpp:292] Warning: Cannot find key, path=scheduler_config.use_sgad_by_default subclass=0, use default value: false.
[10:08:44.8344][3524]I[DeviceSchedulerFactory.cpp:56] Info: ## DeviceSchedulerFacotry ## Created Squeeze Device-Scheduler2.
[10:08:44.8354][3524]I[DeviceManager.cpp:551] ## SqueezeScheduler created ##
[10:08:44.8355][3524]I[DeviceManager.cpp:649] times 0: try to create worker on device(2.6)
[10:08:46.8418][3524]I[DeviceManager.cpp:670] [SUCCESS] times 0: create worker on device(2.6)
[10:08:46.8426][3524]I[DeviceManager.cpp:719] worker(Wt2.6) created on device(2.6), type(0)
[10:08:46.8427][3524]I[DeviceManager.cpp:145] DEVICE FOUND : 1
[10:08:46.8427][3524]I[DeviceManager.cpp:146] DEVICE OPENED : 1
[10:08:46.8430][3524]I[DeviceManagerCreator.cpp:81] New device manager(DeviceManager0) created with subclass(0), deviceCount(1)
[10:08:46.8787][3524]I[TaskSchedulerFactory.cpp:45] Info: ## TaskSchedulerFactory ## Created Polling Task-Scheduler.
[10:08:46.8794][3524]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_snapshot.sock owner: user-'no_change', group-'users', mode-'0660'
[10:08:46.8798][3524]I[FileHelper.cpp:272] Set file:/var/tmp/hddl_service.sock owner: user-'no_change', group-'users', mode-'0660'
[10:08:46.8800][3524]I[MessageDispatcher.cpp:87] Message Dispatcher initialization finished
[10:08:46.8800][3524]I[main.cpp:106] SERVICE IS READY ...
[10:08:46.9210][3561]I[ClientManager.cpp:159] client(id:1) registered: clientName=HDDLPlugin socket=2
[10:08:50.0209][3562]I[GraphManager.cpp:491] Load graph success, graphId=1 graphName=mobilenet-v2-1.0-224
[ INFO ] Load network took 26994.88 ms
[Step 8/11] Setting optimal runtime parameters
[Step 9/11] Creating infer requests and filling input blobs with images
[ INFO ] Network input 'input' precision U8, dimensions (NCHW): 1 3 224 224
/opt/intel/openvino_2021/python/python3.6/openvino/tools/benchmark/utils/inputs_filling.py:71: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
logger.warn("No input files were given: all inputs will be filled with random values!")
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Infer Request 0 filling
[ INFO ] Fill input 'input' with random values (image is expected)
[ INFO ] Infer Request 1 filling
[ INFO ] Fill input 'input' with random values (image is expected)
[ INFO ] Infer Request 2 filling
[ INFO ] Fill input 'input' with random values (image is expected)
[ INFO ] Infer Request 3 filling
[ INFO ] Fill input 'input' with random values (image is expected)
[Step 10/11] Measuring performance (Start inference asyncronously, 4 inference requests, limits: 1000 ms duration)
[ INFO ] First inference took 81.11 ms
[Step 11/11] Dumping statistics report
Count: 88 iterations
Duration: 1089.53 ms
Latency: 49.08 ms
Throughput: 80.77 FPS
[10:08:51.2386][3561]I[ClientManager.cpp:189] client(id:1) unregistered: clientName=HDDLPlugin socket=2
[10:08:51.2511][3562]I[GraphManager.cpp:539] graph(1) destroyed
ros@ros-LEC-ALAI:~$
```
## FAQ
### benchmark_app stalls for a long time
1. Make sure the user (e.g. "ros") is in the `users` and `video` groups. Running the `groups` command lists the groups the current user belongs to.
2. GPU inference takes time to load and compile OpenCL kernels on the first run. Create a `cl_cache` folder in the working directory; compiled OpenCL kernels will be cached there, so subsequent GPU inference runs load much faster.
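The kernel-cache workaround can be applied like this (a sketch; the benchmark invocation is the one from the GPU section above, and the automatic use of a `cl_cache` directory in the working directory is the behavior documented for the OpenVINO 2021 GPU plugin):

```shell
# Create the kernel cache directory *before* launching the GPU benchmark;
# the GPU plugin caches compiled OpenCL kernels here automatically.
mkdir -p cl_cache
ls -d cl_cache
# First run compiles kernels (slow); later runs reuse cl_cache and load faster:
# python3 /opt/intel/openvino_2021/deployment_tools/tools/benchmark_tool/benchmark_app.py \
#   -m public/mobilenet-v2-1.0-224/FP16/mobilenet-v2-1.0-224.xml -t 1 -d GPU
```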