# Nvidia Jetson Nano + trt_pose Pose Estimation
Source: https://github.com/NVIDIA-AI-IOT/trt_pose
## Prerequisites
### Nano installation and setup
* Very similar to the Raspberry Pi workflow: download the image, flash it to an SD card, and boot. You can also control the board over a serial console.
* OS image: https://developer.nvidia.com/embedded/dlc/jetson-nano-dev-kit-sd-card-image
* Win32 Disk Imager (writes the image to the SD card): https://sourceforge.net/projects/win32diskimager/
### Configure swap
* Set up at least 4 GB of swap space, otherwise trt_pose will run out of memory QQ
* (1) Turn off swap (the first time this prints an error because the file doesn't exist yet); (2) create an empty 4 GB file; (3) format it as swap; (4) enable it; (5) make it start automatically on boot. A quick check that it worked follows the commands below.
```
sudo swapoff /swapfile
sudo dd if=/dev/zero of=/swapfile bs=1M count=4096
sudo mkswap /swapfile
sudo swapon /swapfile
sudo cp /etc/fstab /etc/fstab.bak
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```
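To confirm the swap file is actually active (also after a reboot), `free -h` or `swapon --show` in a terminal is enough; here is a tiny Python sketch that reads the same information from /proc/swaps:
```python
# Quick check that the 4 GB swap file is active.
with open('/proc/swaps') as f:
    print(f.read())   # /swapfile should be listed (size is shown in kB, roughly 4 GB)
```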
## Dependencies
* The official README says: We assume you have already installed PyTorch, torchvision, and TensorRT
  - In practice it seems to take a bit more than that (!?)
  - Below is everything I installed; once these were in place, it ran
1. Install PyTorch
```
mkdir pytorch
cd pytorch
wget https://nvidia.box.com/shared/static/j2dn48btaxosqp0zremqqm8pjelriyvs.whl -O torch-1.1.0-cp36-cp36m-linux_aarch64.whl
sudo pip3 install torch-1.1.0-cp36-cp36m-linux_aarch64.whl
```
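A quick sanity check that the wheel installed correctly and that PyTorch can see the Nano's GPU (a minimal sketch; the exact version string and device name depend on your wheel and JetPack release):
```python
# Verify the PyTorch install and CUDA support.
import torch

print(torch.__version__)              # should be 1.1.0 for this wheel
print(torch.cuda.is_available())      # should be True on the Nano
print(torch.cuda.get_device_name(0))  # name of the Nano's GPU
```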
2. Install torchvision: https://github.com/pytorch/vision
```
sudo apt-get install libjpeg-dev zlib1g-dev
git clone -b v0.3.0 https://github.com/pytorch/vision torchvision
cd torchvision
sudo python3 setup.py install
```
If you hit this error message:
```
RuntimeError: module compiled against API version 0xc but this version of numpy is 0xb
```
the fix is to upgrade NumPy: `pip3 install --upgrade numpy`
3. TensorRT (it appears to be pre-installed on the Jetson Nano image, so nothing extra was needed)
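To confirm TensorRT really is already there, an import check is enough (a minimal sketch; the version number depends on your JetPack release):
```python
# Verify the TensorRT Python bindings bundled with the Jetson image.
import tensorrt

print(tensorrt.__version__)
```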
4. Jupyter Notebook
```
pip3 install --upgrade pip
pip3 install jupyter
```
To start it, just run `jupyter notebook` in a terminal.
5. Cython
```
sudo pip3 install cython
```
6. torch2trt: https://github.com/NVIDIA-AI-IOT/torch2trt
According to the project README there are two ways to install it; I went with the one with plugins.
```
# with plugins
sudo apt-get install libprotobuf* protobuf-compiler ninja-build
git clone https://github.com/NVIDIA-AI-IOT/torch2trt
cd torch2trt
sudo python3 setup.py install --plugins
```
```
# without plugins
git clone https://github.com/NVIDIA-AI-IOT/torch2trt
cd torch2trt
sudo python3 setup.py install
```
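For reference, this is roughly how torch2trt is used after installation (a minimal sketch adapted from the torch2trt README, using torchvision's alexnet as a stand-in model; the trt_pose demo performs a similar conversion on the pose model):
```python
# Convert a PyTorch model to a TensorRT-accelerated module with torch2trt.
import torch
from torch2trt import torch2trt
from torchvision.models.alexnet import alexnet

# An ordinary PyTorch model, in eval mode, on the GPU.
model = alexnet(pretrained=True).eval().cuda()

# Example input with the shape the model expects.
x = torch.ones((1, 3, 224, 224)).cuda()

# The converted module is called just like the original model.
model_trt = torch2trt(model, [x])
y_trt = model_trt(x)
```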
7. jetcam: https://github.com/NVIDIA-AI-IOT/jetcam
JetCam is an easy to use Python camera interface for NVIDIA Jetson.
```
git clone https://github.com/NVIDIA-AI-IOT/jetcam
cd jetcam
sudo python3 setup.py install
```
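A minimal sketch of how JetCam is used once installed (based on the jetcam README; the resolution and the capture_device index are assumptions that depend on your camera):
```python
# Grab one frame from a USB webcam with JetCam.
from jetcam.usb_camera import USBCamera

camera = USBCamera(width=224, height=224, capture_device=0)  # capture_device=0 -> /dev/video0
image = camera.read()   # returns the frame as a numpy array
print(image.shape)      # e.g. (224, 224, 3)
```
For a CSI camera the equivalent class is `jetcam.csi_camera.CSICamera`, which matters for the live demo below.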
## Install & run trt_pose
```
sudo pip3 install tqdm
sudo pip3 install cython
sudo pip3 install pycocotools
sudo apt-get install python3-matplotlib
git clone https://github.com/NVIDIA-AI-IOT/trt_pose
cd trt_pose
sudo python3 setup.py install
```
Download a model (either one is fine; it's selected later in the Python code, and the default is the first one; see the sketch after the list). Put the file in the tasks/human_pose folder:
* resnet18_baseline_att_224x224_A:
https://drive.google.com/open?id=1XYDdCUdiF2xxx4rznmLb62SdOUZuoNbd
* densenet121_baseline_att_256x256_B:
https://drive.google.com/open?id=13FkJkx7evQ1WwP54UmdiDXWyFMY1OxDU
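For reference, the model selection mentioned above ends up in code roughly like this (a sketch adapted from the trt_pose README; the exact weight filename, e.g. resnet18_baseline_att_224x224_A_epoch_249.pth, is an assumption based on the downloaded file, and the notebook may organize this slightly differently):
```python
# Sketch of how the downloaded weights are loaded in trt_pose.
import json
import torch
import trt_pose.coco
import trt_pose.models

# The human_pose.json topology ships in tasks/human_pose.
with open('human_pose.json', 'r') as f:
    human_pose = json.load(f)

topology = trt_pose.coco.coco_category_to_topology(human_pose)
num_parts = len(human_pose['keypoints'])
num_links = len(human_pose['skeleton'])

# Pick the architecture matching the weight file you downloaded.
model = trt_pose.models.resnet18_baseline_att(num_parts, 2 * num_links).cuda().eval()

# Assumed filename of the downloaded resnet18 checkpoint.
MODEL_WEIGHTS = 'resnet18_baseline_att_224x224_A_epoch_249.pth'
model.load_state_dict(torch.load(MODEL_WEIGHTS))
```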
Start jupyter notebook, go into the tasks/human_pose folder, and open live_demo.ipynb.
If you are using a CSI camera, remember to modify the Python code!!
If you are using a CSI camera, remember to modify the Python code!!
If you are using a CSI camera, remember to modify the Python code!! (important, so it's said three times)
The default is a USB camera: look at the notebook cell In [11] and change the USB camera to the CSI one.
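Concretely, the camera cell needs to switch from USBCamera to CSICamera, roughly like this (a sketch; the 224x224 size matches the resnet18 model, and the exact cell contents may differ between trt_pose versions):
```python
# Original cell (USB webcam):
#   from jetcam.usb_camera import USBCamera
#   camera = USBCamera(width=224, height=224, capture_fps=30)

# Replacement for a CSI camera (e.g. a Raspberry Pi camera module):
from jetcam.csi_camera import CSICamera

camera = CSICamera(width=224, height=224, capture_fps=30)
camera.running = True   # start background capture so new frames keep arriving
```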
Then press the fast-forward button at the top of Jupyter: restart the kernel, then re-run the whole notebook (with dialog).
After that, go make a cup of tea or take a bathroom break; it takes a while before anything shows up XD
The result is displayed directly on the Jupyter page. The default view is a bit small, but it runs fairly smoothly.