###### tags: `AIA Edge` `Jetson Nano` `Kneron`
# AIA_EDGE_Week4
# 11/06_Jetson Nano
- [Course materials](https://drive.google.com/drive/u/1/folders/1QVj_rOFNZ5pSh8k5_jhhXWF35eVcCu06)
- id, password: jetson, 12345678
### Installing packages on the Jetson Nano
```shell=
sudo apt-get update
sudo apt-get upgrade
# Allow VNC remote connections without authentication
gsettings set org.gnome.Vino prompt-enabled false
gsettings set org.gnome.Vino require-encryption false
# Test the camera connection
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3280, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e
# Check which resolutions and frame rates your camera supports
sudo apt-get install v4l-utils
v4l2-ctl --list-formats-ext
```
#### Checking CPU and GPU usage
```shell
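# jtop is provided by the jetson-stats package; if it is missing, install it first with: sudo -H pip3 install -U jetson-stats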
sudo jtop
```
---
## Using pre-trained models
### Hello World p.39
```shell=
# Download the project
git clone --recursive https://github.com/dusty-nv/jetson-inference
cd jetson-inference
# Create the build folder
mkdir build && cd build
# Configure the build with CMake (this also fetches the required dependencies)
cmake ../
# Launch the model downloader
cd ~/jetson-inference/tools
./download-models.sh
# Install PyTorch
cd ~/jetson-inference/build
./install-pytorch.sh
# Compile and install with make
make -j4
sudo make install
sudo ldconfig
# Build output lives here
cd ~/jetson-inference/build/aarch64
# The sample programs are here
cd ~/jetson-inference/build/aarch64/bin
# Default image path
cd ~/jetson-inference/data/images/
```
---
### Image recognition p.45
- Uses GoogleNet-12
[Static image recognition example](https://github.com/dusty-nv/jetson-inference/blob/master/docs/imagenet-console-2.md)
[Live camera image recognition example](https://github.com/dusty-nv/jetson-inference/blob/master/docs/imagenet-camera-2.md)
```shell=
# Static image recognition
cd ~/jetson-inference/build/aarch64/bin
./imagenet-console.py --network=googlenet-12 images/orange_0.jpg orange_0_output.jpg
# Live camera image recognition
cd ~/jetson-inference/build/aarch64/bin
./imagenet-camera.py --network=googlenet-12 --width=1280 --height=720
```
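The same classification can be scripted through the `jetson.inference` Python bindings installed by `sudo make install` above. A minimal sketch (run it from the `bin` directory so the relative image path resolves; older builds use `jetson.utils.loadImageRGBA` and `Classify(img, width, height)` instead):
```python
import jetson.inference
import jetson.utils

# load the pre-trained classification network (same name as --network above)
net = jetson.inference.imageNet("googlenet-12")

# load a test image from the default image path
img = jetson.utils.loadImage("images/orange_0.jpg")

# classify and print the top class with its confidence
class_idx, confidence = net.Classify(img)
print(net.GetClassDesc(class_idx), confidence)
```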
### Object detection
- Uses ssd-mobilenet-v2
[Static object detection example](https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-console-2.md)
[Live camera object detection example](https://github.com/dusty-nv/jetson-inference/blob/master/docs/detectnet-camera-2.md)
```shell=
# Static object detection
cd ~/jetson-inference/build/aarch64/bin
./detectnet-console.py --network=ssd-mobilenet-v2 images/cat_2.jpg cat_2_output.jpg
# Live camera object detection
cd ~/jetson-inference/build/aarch64/bin
./detectnet-camera.py --network=ssd-mobilenet-v2 --width=1280 --height=720
```
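Live detection can likewise be scripted in Python. A minimal sketch of what `detectnet-camera.py` does, using the legacy `gstCamera`/`glDisplay` API from this (2020-era) build:
```python
import jetson.inference
import jetson.utils

# load the SSD-MobileNet-v2 detector fetched by download-models.sh
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.gstCamera(1280, 720, "0")  # CSI camera 0
display = jetson.utils.glDisplay()

while display.IsOpen():
    img, width, height = camera.CaptureRGBA()
    detections = net.Detect(img, width, height)   # draws boxes in-place
    display.RenderOnce(img, width, height)
    display.SetTitle("{:d} objects".format(len(detections)))
```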
### Image segmentation
- Uses fcn-resnet18-mhp-512x320
[Static image segmentation example](https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-console-2.md)
[Live camera image segmentation example](https://github.com/dusty-nv/jetson-inference/blob/master/docs/segnet-camera-2.md)
```shell=
# Static image segmentation
cd ~/jetson-inference/build/aarch64/bin
./segnet-console.py --network=fcn-resnet18-mhp-512x320 images/humans_9.jpg humans_9_output.jpg
# Live camera image segmentation
cd ~/jetson-inference/build/aarch64/bin
./segnet-camera.py --network=fcn-resnet18-mhp-512x320 --width=1280 --height=720
```
### Pose estimation p.50
```shell=
# Download the project and enter it
git clone https://gitlab.aiacademy.tw/jacky.chang/pose_estimation.git
cd pose_estimation
# Install torch2trt, which converts PyTorch models to TensorRT
git clone https://github.com/NVIDIA-AI-IOT/torch2trt.git
cd torch2trt && sudo python3 setup.py install
# Install trt_pose
git clone https://github.com/NVIDIA-AI-IOT/trt_pose.git
cd trt_pose && sudo python3 setup.py install
# Install dependencies
sudo apt-get install python3-matplotlib
sudo -H pip3 install tqdm pycocotools
cd pose_estimation/
# Run pose estimation on a video file
python3 trt_pose_app.py --nodrop test.mp4
# Run pose estimation from the camera
python3 trt_pose_app.py --camera -1
```
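For reference, the general torch2trt workflow looks like the sketch below (a generic example following the torch2trt README, not the course's trt_pose_app.py):
```python
import torch
from torch2trt import torch2trt, TRTModule
from torchvision.models import resnet18

# convert: trace a PyTorch model into a TensorRT engine with a sample input
model = resnet18(pretrained=True).eval().cuda()
x = torch.ones((1, 3, 224, 224)).cuda()
model_trt = torch2trt(model, [x], fp16_mode=True)  # FP16 helps on the Nano
torch.save(model_trt.state_dict(), "resnet18_trt.pth")

# reload: restore the serialized engine later without re-converting
model_trt = TRTModule()
model_trt.load_state_dict(torch.load("resnet18_trt.pth"))
y = model_trt(x)
```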
---
## Using a custom model
![](https://i.imgur.com/gm9263B.jpg)
- TensorFlow -> ONNX -> TensorRT
- First run workflow.ipynb to convert the .h5 file to .onnx; after converting the Keras .h5 model to ONNX, manually change the batch-size dimension N to 1 (see the sketch after this list)
- Then run onnx2trt.py to convert the .onnx file to .trt
- Pros: uses less memory, and **inference is faster than TF-TRT**
- Cons: layers that TRT does not support must be implemented with plugins or custom code
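Forcing the batch dimension to 1 can be done with the `onnx` Python package. A minimal sketch (file names are placeholders; skip any entries under `graph.input` that are really weights, if your exporter lists them there):
```python
import onnx

model = onnx.load("model.onnx")
# set the batch dimension (N) of every graph input to a fixed 1
for inp in model.graph.input:
    inp.type.tensor_type.shape.dim[0].dim_value = 1
onnx.checker.check_model(model)
onnx.save(model, "model_batch1.onnx")
```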
```shell=
# Download the project
git clone https://gitlab.aiacademy.tw/jacky.chang/simple_examples.git
cd simple_examples/
# Convert the .onnx file to .trt
python3 onnx2trt.py
# Run inference
python3 inference.py
```
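What onnx2trt.py does boils down to parsing the ONNX graph and serializing a TensorRT engine. A minimal sketch against the TensorRT 7 Python API shipped with JetPack 4.x (the course script's actual options may differ):
```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, trt_path):
    # explicit-batch mode is required by the ONNX parser
    explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network(explicit_batch) as network, \
         trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 1 << 28      # 256 MiB fits the Nano
        if builder.platform_has_fast_fp16:
            builder.fp16_mode = True              # deprecated on TRT 7, still works
        with open(onnx_path, "rb") as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                raise RuntimeError("ONNX parse failed")
        engine = builder.build_cuda_engine(network)
        with open(trt_path, "wb") as f:
            f.write(engine.serialize())

build_engine("model.onnx", "model.trt")
```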
---
### Custom model example 1: the ***[Cifar-10](https://github.com/ming-hu427/AIoT/blob/aia_edge/AIA_EDGE/1106/code/simple_examples/cifar10/cifar10.ipynb)*** dataset (p.79)
- Uses a VGG16 pretrained model: the weights are kept up to the third block, with a custom network attached on top (sketched below).
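A hypothetical Keras sketch of that architecture (the notebook's exact head may differ; `block3_pool` is the layer that ends VGG16's third block):
```python
from tensorflow import keras
from tensorflow.keras import layers

# VGG16 backbone; Cifar-10 images are 32x32x3
base = keras.applications.VGG16(weights="imagenet", include_top=False,
                                input_shape=(32, 32, 3))
# keep the pretrained weights up to the end of the third block
x = base.get_layer("block3_pool").output
# custom head (illustrative only)
x = layers.Flatten()(x)
x = layers.Dense(256, activation="relu")(x)
outputs = layers.Dense(10, activation="softmax")(x)
model = keras.Model(base.input, outputs)
model.summary()
```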
#### Run on the Hub
```shell=
# Download the project
cd /myclass/2020-edge-ai/1106_JetsonNano/
git clone https://gitlab.aiacademy.tw/jacky.chang/simple_examples.git
```
- Run ***simple_examples/cifar10/cifar10.ipynb*** to convert the *.h5 file to *.onnx
#### Run on the Jetson Nano
```shell=
# Convert the .onnx file to .trt
cd simple_examples/
python3 build_engine.py -o vgg16_32.onnx -t vgg16_32.trt -m 16 -b 1
# Live camera recognition
python3 inference.py --csi -1 -m retinaface-640
```
---
### Custom model example 2: face detection (p.86)
- Uses the [WIDER FACE: A Face Detection Benchmark](http://shuoyang1213.me/WIDERFACE/) dataset
- Uses the [RetinaFace](https://github.com/biubug6/Pytorch_Retinaface) model
#### Run on the Hub
```shell=
# Download the project
cd /myclass/2020-edge-ai/1106_JetsonNano/
git clone https://gitlab.aiacademy.tw/jacky.chang/face_recognition.git
```
- Run ***face_recognition/retinaface/train.ipynb*** to convert the *.h5 file to *.onnx
- Edit 'image_size' in face_recognition/retinaface/utils/config.py
#### Run on the Jetson Nano
```shell=
# Download the project
git clone https://gitlab.aiacademy.tw/jacky.chang/face_recognition.git
# Convert the .onnx file to .trt
cd face_recognition/
python3 build_engine.py -o retinaface/retinaface-320.onnx -t retinaface/retinaface-320.trt
# Show the available options
python3 trt_face_detection.py --help
# Detect faces in an image
python3 trt_face_detection.py --image test/MaYingJeou_TsaiIngWen_SoongChuYu-test.png -m retinaface-320
# Detect faces in a video
python3 trt_face_detection.py --video test/test.mp4 --video_looping -m retinaface-320
# Detect faces in the live camera feed
python3 trt_face_detection.py --csi -1 -m retinaface-320
```
---
### Custom model example 3: face recognition (identity) (p.98, 2:14:50)
- Uses the [MobileFaceNet, ArcFace](https://www.dropbox.com/s/akxeqp99jvsd6z7/model-MobileFaceNet-arcface-ms1m-refine-v1.zip?dl=0) model from the [Model Zoo](https://github.com/deepinsight/insightface/wiki/Model-Zoo)
- The files must be patched as in [PR #13460](https://github.com/apache/incubator-mxnet/pull/13460/commits/f1a6df82a40d1d9e8be6f7c3f9f4dcfe75948bd6) before the ONNX conversion will succeed
#### Run on the Hub
- Run ***face_recognition/mobilefacenet/mobilefacenet.ipynb***
    - Converts the *.h5 file to *.onnx
- If you have people to identify, add their photos under the face_recognition/database/ folder
    - Each person's photos go in their own subfolder
- Run ***face_recognition/create_db.ipynb***
    - Builds the identity database file (***face_recognition/db.pikle***) from the contents of face_recognition/database/ for recognition
- Run ***face_recognition/face_recognition.ipynb***
    - Performs identity recognition against the face_recognition/db.pikle database (a matching sketch follows this list)
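The matching step is typically cosine similarity between a live face embedding and the stored entries. A sketch under the assumption that db.pikle maps each name to an embedding vector (the repo's actual format may differ):
```python
import pickle
import numpy as np

with open("db.pikle", "rb") as f:   # file name as used by the course repo
    db = pickle.load(f)             # assumed format: {name: embedding ndarray}

def identify(embedding, db, threshold=0.5):
    """Return the best-matching name, or 'unknown' if below the threshold."""
    best_name, best_score = "unknown", threshold
    for name, ref in db.items():
        score = float(np.dot(embedding, ref)
                      / (np.linalg.norm(embedding) * np.linalg.norm(ref)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score
```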
#### Run on the Jetson Nano
```shell=
# Download mobilefacenet.onnx and convert it to mobilefacenet.trt
cd face_recognition/
python3 build_engine.py -o mobilefacenet/mobilefacenet.onnx -t mobilefacenet/mobilefacenet.trt
# Show the available options
python3 trt_face_recognition.py --help
# Recognize faces in an image
python3 trt_face_recognition.py --image test/MaYingJeou_TsaiIngWen_SoongChuYu-test.png -m retinaface-320
# Recognize faces in a video
python3 trt_face_recognition.py --video test/test.mp4 --video_looping -m retinaface-320
# Recognize faces in the live camera feed
python3 trt_face_recognition.py --csi -1 -m retinaface-320
```
---
### References
- [jetson-inference](https://github.com/dusty-nv/jetson-inference)
- [JetPack SDK](https://developer.nvidia.com/embedded/jetpack)
- [Getting started with the Jetson Nano](https://yanwei-liu.medium.com/nvidia-jetson-nano%E5%AD%B8%E7%BF%92%E7%AD%86%E8%A8%98-%E4%B8%80-%E5%88%9D%E6%AC%A1%E4%BD%BF%E7%94%A8-4dce57a0b2b1)
- [Fixing the EDIMAX EW-7811Un Wi-Fi adapter on the Jetson Nano](https://github.com/pvaret/rtl8192cu-fixes)
```shell=
echo options rtl8xxxu ht40_2g=1 dma_aggregation=1 | sudo tee /etc/modprobe.d/rtl8xxxu.conf
sudo reboot
sudo apt-get update
sudo apt-get install git linux-headers-generic build-essential dkms
git clone https://github.com/pvaret/rtl8192cu-fixes.git
sudo dkms add ./rtl8192cu-fixes
sudo dkms install 8192cu/1.11
sudo depmod -a
sudo cp ./rtl8192cu-fixes/blacklist-native-rtl8192.conf /etc/modprobe.d/
sudo poweroff
```
- **[Jetson Zoo](https://elinux.org/Jetson_Zoo)**: packages and models available for the Jetson
- **[Official Python Examples](https://docs.nvidia.com/deeplearning/tensorrt/sample-support-guide/)**: official starter samples
- **[onnx-tensorrt](https://github.com/onnx/onnx-tensorrt)**: converts .onnx files to TensorRT engines
- **[ONNX Model Zoo](https://github.com/onnx/models)**: collection of models in ONNX format
- **[tensorrt_demos](https://github.com/jkjung-avt/tensorrt_demos)**: TensorRT demo projects, including YOLOv4
- [Face recognition walkthrough](https://medium.com/@ageitgey/build-a-hardware-based-face-recognition-system-for-150-with-the-nvidia-jetson-nano-and-python-a25cb8c891fd)
- [System image provided by AIA](https://drive.google.com/file/d/1Dqv6Ic7o5ZhbqFTXp4f5xSxSM89hiDrM/view?usp=sharing)
- [Face detection: RetinaFace in TF2](https://github.com/bubbliiiing/retinaface-tf2)
- [Jetson Community Projects](https://developer.nvidia.com/embedded/community/jetson-projects): projects built with the Jetson Nano
- [TensorRT official repository](https://github.com/NVIDIA/TensorRT)
- [NVIDIA AI IOT](https://github.com/NVIDIA-AI-IOT): NVIDIA's official models
- [tensorrtx](https://github.com/wang-xinyu/tensorrtx): models implemented directly with the TensorRT API
- [Jetson Nano Forum](https://forums.developer.nvidia.com/c/agx-autonomous-machines/jetson-embedded-systems/jetson-nano/76): NVIDIA developer Q&A board
- [Install NoMachine to fix remote-desktop lag](https://www.nomachine.com/download/download&id=115&s=ARM)
---
# 11/07_Kneron
### Inference with Kneron's pre-trained models
- [Course materials](https://drive.google.com/drive/u/1/folders/1Cd1-8BxTHim75c_fZmW-irIxnLil8o8c)
```shell=
# Activate the py383 virtual environment
conda activate py383
# host_lib (used for inference)
cd ~/Desktop/host_lib__v0.8/app_binaries
# Firmware update; unplug and re-plug the dongle when it finishes
cd ~/Desktop/host_lib__v0.8/python
python3 main.py -t update_app
# List the available test commands
python main.py -h
# Pick the command for your project, e.g. run the YOLO detection demo
python main.py -t cam_isi_async_parallel_yolo
# Run the SSD face-detection demo in DME mode
python main.py -t cam_dme_serial_ssd_fd
```
### Custom model inference
```shell=
# Obtain MobileNetV2.h5
# Create a shared folder
mkdir ~/docker_share
# Start Docker with the shared folder ~/docker_share mounted as /data1
sudo docker run -it --rm -v ~/docker_share/:/data1 kneron/toolchain:linux_command_toolchain_520
# From here on you are inside the Docker environment
cp -r /workspace/examples/* /data1/
cd /data1/ && mkdir my_proj && cd my_proj
# MobileNetV2 via keras2onnx
python /workspace/libs/ONNX_Convertor/keras-onnx/generate_onnx.py -o /data1/my_proj/k2o_mobilenetv2.onnx ./MobileNetV2.h5 -O --duplicate-shared-weight
# MobileNetV2 via onnx2onnx (graph optimization)
python /workspace/libs/ONNX_Convertor/optimizer_scripts/onnx2onnx.py /data1/my_proj/k2o_mobilenetv2.onnx -o /data1/my_proj/o2o_mobilenetv2.onnx -t --add-bn-on-skip
# Prepare the images used during compilation (quantization)
cd /data1 && mkdir images
cp /data1/yolo/imgs/* /data1/images
# Install the nano editor
apt-get install -y nano
# Edit the batch_compile_input_params.json file
nano docker_share/batch_compile_input_params.json
# Compile the model
cd /workspace/scripts && ./fpAnalyserBatchCompile.sh.x
# Check the result
cd /workspace/scripts && ./gen_ota_binary_for_linux -model /data1/batch_compile/compiler/fw_info.bin /data1/batch_compile/compiler/all_models.bin /data1/batch_compile/compiler/model_out.bin
cd /data1/batch_compile/compiler
# Exit the Docker environment
exit
# Copy all_models.bin and fw_info.bin to ~/Desktop/host_lib__v0.8/test_images/dme_mobilenet/
rm ~/Desktop/host_lib__v0.8/test_images/dme_mobilenet/*
cp ~/docker_share/batch_compile/compiler/all_models.bin ~/Desktop/host_lib__v0.8/test_images/dme_mobilenet/
cp ~/docker_share/batch_compile/compiler/fw_info.bin ~/Desktop/host_lib__v0.8/test_images/dme_mobilenet/
# Run model inference with dme_keras
conda activate py383
cd ~/Desktop/host_lib__v0.8/python
python main.py -t dme_keras
```
### Editing the batch_compile_input_params.json file
```json
{
"input_image_folder": ["/data1/images"],
"img_channel": ["RGB"],
"model_input_width": [224],
"model_input_height": [224],
"img_preprocess_method": ["tensorflow"],
"input_onnx_file": ["/data1/my_proj/o2o_mobilenetv2.onnx"],
"keep_aspect_ratio": ["True"],
"command_addr": "0x30000000",
"weight_addr": "0x40000000",
"sram_addr": "0x50000000",
"dram_addr": "0x60000000",
"whether_encryption": "No",
"encryption_key": "0x12345678",
"model_id_list": [1000],
"model_version_list": [1],
"add_norm_list": ["True"],
"dedicated_output_buffer": "False"
}
```
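Here `"img_preprocess_method": ["tensorflow"]` conventionally means pixels scaled to [-1, 1], as in tf.keras's MobileNet preprocessing, so whatever feeds the compiled model at runtime should match. A sketch under that assumption (confirm against the Kneron toolchain docs):
```python
import cv2
import numpy as np

def preprocess(img_path, size=(224, 224)):
    """Match the JSON above: RGB input, 224x224, 'tensorflow' scaling to [-1, 1]."""
    img = cv2.imread(img_path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # img_channel is ["RGB"]
    img = cv2.resize(img, size)                 # note: ignores keep_aspect_ratio
    return img.astype(np.float32) / 127.5 - 1.0
```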
### Inference on a Raspberry Pi
- Miniconda is not needed; make sure the Python version is 3.7 and the CMake version is 4.3 or later
```shell=
git clone https://gitlab.aiacademy.tw/junew/AIA_Kneron.git
cd AIA_Kneron
bash kneron_env_auto.sh
```
---
```shell=
# Install cv2
# pip3 install opencv-python
# pip3 install pkgs/kdp_host_api-1.1.2_Raspbian_-py3-none-any.whl
cd env_pkgs/auto_sh
bash cmake_auto.sh
bash opencv_auto.sh
bash pyenv_auto.sh
bash usb_auto.sh
cd ..
unzip host_lib__v0.8.zip
cd host_lib__v0.8/
sudo apt install libusb-1.0-0-dev
mkdir build && cd build
cmake ..
cmake -DBUILD_OPENCV_EX=on ..
make -j4
pip3 install pkgs/kdp_host_api-1.1.4_Raspbian_-py3-none-any.whl
cd python
python3 main.py -t update_app
# Pick the command for your project, e.g. run the YOLO detection demo
python3 main.py -t cam_isi_async_parallel_yolo
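# Run the SSD face-detection demo in DME mode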
python3 main.py -t cam_dme_serial_ssd_fd
```
---
### References
- [Kneron installation](http://www.kneron.com/tw/support/developers/)
- [Kneron Documentation](http://doc.kneron.com/docs/)
- [Kneron host_lib on GitHub](https://github.com/kneron/host_lib)
- [Kneron Docker](https://hub.docker.com/u/kneron)
- [Kneron camp: VM and dongle tutorial](https://hackmd.io/@Robert/BkLfB8jJv)
- [Kneron Python API Documentation](http://doc.kneron.com/pythondocs/#)
- [Commands from the slides](https://hackmd.io/@junewang/耐能lib_0_8)
---
- [VMware Fusion](https://www.vmware.com/tw/products/fusion/fusion-evaluation.html)
- [Miniconda](https://docs.conda.io/en/latest/miniconda.html)
- [Adjusting USB permissions](https://blog.csdn.net/qq_23670601/article/details/88756372)
- [Installing CMake](https://www.claudiokuenzler.com/blog/611/installing-cmake-3.4.1-ubuntu-14.04-trusty-using-alternatives)
- [Installing OpenCV](https://towardsdatascience.com/installing-opencv-3-4-3-on-raspberry-pi-3-model-b-e9af08a9f1d9)
- [Installing and using pyenv (optional)](https://learningsky.io/python-development-on-ubuntu-with-pyenv-virtualenv/)