# Nvidia Jetson Nano Usage Notes
## Nano Installation and Setup
* The workflow is much like the Raspberry Pi's: download the image, write it to an SD card, and boot. The board can also be controlled over a serial console.
* OS image: https://developer.nvidia.com/embedded/dlc/jetson-nano-dev-kit-sd-card-image
* Win32 Disk Imager (writes the image to the SD card): https://sourceforge.net/projects/win32diskimager/
* On first boot, the setup is just like installing Ubuntu: set the username, password, and so on.
* The TTL pin-out is printed on the bottom of the board (use this if you want to control the board from a plain terminal over TTL):
* Jetson Nano J44 Pin 2 (TXD) → Cable RXD (White Wire)
* Jetson Nano J44 Pin 3 (RXD) → Cable TXD (Green Wire)
* Jetson Nano J44 Pin 6 (GND) → Cable GND (Black Wire)
* Connection speed is 115200, with 8 bits, no parity, and 1 stop bit (115200 8N1).
* ref: https://www.jetsonhacks.com/2019/04/19/jetson-nano-serial-console/
* After installation, remember to update the system first:
```
sudo apt-get update
sudo apt-get full-upgrade
```
## Configuring Swap
* Set up 4 GB of swap space; programs should run more smoothly afterwards.
* The steps below: (1) turn swap off (the first time this prints an error, because the file does not exist yet); (2) create an empty 4 GB file; (3) format that file as swap; (4) enable it; (5) make it mount automatically at boot.
```
sudo swapoff /swapfile                             # errors on the first run; safe to ignore
sudo dd if=/dev/zero of=/swapfile bs=1M count=4096 # create an empty 4 GB file
sudo chmod 600 /swapfile                           # mkswap/swapon warn about world-readable swap files
sudo mkswap /swapfile                              # format the file as swap
sudo swapon /swapfile                              # enable it
sudo cp /etc/fstab /etc/fstab.bak                  # back up fstab before appending
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```
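* To confirm the swap file is active, `swapon` and `free` (standard util-linux/procps tools) should now report it:
```
swapon --show    # should list /swapfile with SIZE 4G
free -h          # the Swap row should show about 4.0G total
```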
## Checking CUDA
* The Jetson Nano image already ships with CUDA 10.0; edit the environment variables so the CUDA tools can be found:
```
nano ~/.bashrc
```
* Add the following lines, then save and exit:
```
export CUDA_HOME=/usr/local/cuda-10.0/
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:$LD_LIBRARY_PATH
export PATH=${CUDA_HOME}bin:$PATH
```
* Reload the environment variables, then try running nvcc -V:
```
source ~/.bashrc
wufish@Jetson:~$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sun_Sep_30_21:09:22_CDT_2018
Cuda compilation tools, release 10.0, V10.0.166
```
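* For a fuller check, you can build and run the bundled deviceQuery sample in place (assuming JetPack put the CUDA samples at their default path; built with sudo since the directory is root-owned, same as the cuDNN sample below):
```
cd /usr/local/cuda-10.0/samples/1_Utilities/deviceQuery
sudo make
./deviceQuery    # the last line should read "Result = PASS"
```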
## Checking OpenCV
* The Jetson Nano image supposedly ships with OpenCV 3.3, which you can check with:
```
pkg-config opencv --modversion
```
* However, I couldn't find OpenCV on my system, so I ended up installing it separately:
```
# echo "** Remove OpenCV3.3 first"
sudo sudo apt-get purge *libopencv*
# echo "** Install requirement"
sudo apt-get update
sudo apt-get install -y build-essential cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install -y libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev
sudo apt-get install -y python2.7-dev python3.6-dev python-dev python-numpy python3-numpy
sudo apt-get install -y libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
sudo apt-get install -y libv4l-dev v4l-utils qv4l2 v4l2ucp
sudo apt-get install -y curl
sudo apt-get update
# echo "** Download opencv-4.0.0"
cd $folder
curl -L https://github.com/opencv/opencv/archive/4.0.0.zip -o opencv-4.0.0.zip
curl -L https://github.com/opencv/opencv_contrib/archive/4.0.0.zip -o opencv_contrib-4.0.0.zip
unzip opencv-4.0.0.zip
unzip opencv_contrib-4.0.0.zip
cd opencv-4.0.0/
# echo "** Building..."
mkdir release
cd release/
cmake -D WITH_CUDA=ON -D CUDA_ARCH_BIN="5.3" -D CUDA_ARCH_PTX="" -D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib-4.0.0/modules -D WITH_GSTREAMER=ON -D WITH_LIBV4L=ON -D BUILD_opencv_python2=ON -D BUILD_opencv_python3=ON -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_EXAMPLES=OFF -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local ..
make -j3
sudo make install
```
* I installed OpenCV with the script above, and so far it appears to work.
* Source: https://github.com/AastaNV/JEP/blob/master/script/install_opencv4.0.0_Nano.sh
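* A quick sanity check that Python picks up the new build:
```
python3 -c "import cv2; print(cv2.__version__)"    # should print 4.0.0
```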
## Checking cuDNN
* The Nano ships with cuDNN, and a sample program is included for testing it:
```
cd /usr/src/cudnn_samples_v7/mnistCUDNN
sudo make
sudo chmod a+x mnistCUDNN
./mnistCUDNN
# (output omitted; the last line should read "Test passed!")
```
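* You can also read the bundled cuDNN version straight from the header (the /usr/include path is an assumption based on JetPack's default cuDNN dev package):
```
grep -m1 -A2 CUDNN_MAJOR /usr/include/cudnn.h    # prints the MAJOR/MINOR/PATCHLEVEL defines
```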
## Installing TensorFlow GPU
* Install the Python-related packages:
```
sudo apt-get install python3-pip python3-dev
python3 -m pip install --upgrade pip
sudo apt-get install python3-scipy
sudo apt-get install python3-pandas
sudo apt-get install python3-sklearn
```
* Install TensorFlow
    * Other versions can be downloaded from https://developer.download.nvidia.com/compute/redist/jp/v42/tensorflow-gpu/
```
sudo apt-get install python3-pip libhdf5-serial-dev hdf5-tools
wget https://developer.download.nvidia.com/compute/redist/jp/v42/tensorflow-gpu/tensorflow_gpu-1.13.1+nv19.3-cp36-cp36m-linux_aarch64.whl
pip3 install tensorflow_gpu-1.13.1+nv19.3-cp36-cp36m-linux_aarch64.whl
```
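* To verify the GPU build loaded correctly (`tf.test.is_gpu_available()` is the TF 1.x API; it should print True):
```
python3 -c "import tensorflow as tf; print(tf.__version__); print(tf.test.is_gpu_available())"
```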
* Install Keras:
```
sudo pip3 install keras
```
* To test TensorFlow and Keras, this page is a useful reference: https://blog.csdn.net/beckhans/article/details/89146881
* Keras: run python3 in a terminal, then import keras; if you see "Using TensorFlow backend" it worked (one-liner version below).
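* The same Keras check as a one-liner (the banner is printed on import):
```
python3 -c "import keras"    # should print "Using TensorFlow backend."
```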
## Installing jetson-inference
* Install the required tools, then fetch the source code and build it:
```
sudo apt-get install git cmake
git clone https://github.com/dusty-nv/jetson-inference
cd jetson-inference
git submodule update --init
mkdir build
cd build
cmake ../
make
sudo make install
```
* If no error messages appeared, it should be installed and you can start testing (the binaries live under jetson-inference/build/aarch64/bin):
```
cd build/aarch64/bin
```
* Run the built-in object-detection sample (remember to attach a CSI camera module; a Raspberry Pi camera module works fine). This sample seems to detect people only:
```
./detectnet-camera.py
```
## Installing YOLO
* Besides the official recognition tools, you can also install YOLO for object detection. The Nano is probably only up to running tiny-YOLO, though. XD
```
git clone https://github.com/pjreddie/darknet.git
cd darknet
nano Makefile
# set these three Makefile flags to 1
GPU=1
CUDNN=1
OPENCV=1
make -j4
```
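* If you prefer not to edit the Makefile interactively, the same change works with sed (assuming the stock Makefile, where all three flags default to 0):
```
sed -i 's/^GPU=0/GPU=1/;s/^CUDNN=0/CUDNN=1/;s/^OPENCV=0/OPENCV=1/' Makefile
```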
* Once the build finishes, download the weights file:
```
wget https://pjreddie.com/media/files/yolov3-tiny.weights
```
* Run detection on the bundled sample image:
```
./darknet detect cfg/yolov3-tiny.cfg yolov3-tiny.weights data/dog.jpg
```
* The official site documents the detect shortcut: https://pjreddie.com/darknet/yolo/
```
# These two commands are equivalent:
./darknet detector test cfg/coco.data cfg/yolov3-tiny.cfg yolov3-tiny.weights data/dog.jpg
./darknet detect cfg/yolov3-tiny.cfg yolov3-tiny.weights data/dog.jpg
# in other words, detect = detector test cfg/coco.data
```
* Run detection on the camera feed:
```
# Example from the official site:
./darknet detector demo cfg/coco.data cfg/yolov3-tiny.cfg yolov3-tiny.weights
# Command I previously tested and confirmed working (CSI camera via a GStreamer pipeline):
./darknet detector demo cfg/coco.data cfg/yolov3-tiny.cfg yolov3-tiny.weights "'nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1920, height=1080, format=(string)NV12, framerate=(fraction)30/1 ! nvtee ! nvvidconv flip-method=2 ! video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! videoconvert ! appsink'"
```
## YOLO + Python
* Source: https://github.com/madhawav/YOLO3-4-Py
* Installing from source is recommended.
* First edit the environment variables, then reload them:
```
# Edit the environment variables
nano ~/.bashrc
# Add the three lines below. They partly duplicate the CUDA setup from the beginning,
# so merge them and skip anything you already have.
# Remember to change the darknet path to your own.
export DARKNET_HOME=/your_PATH_to_darknet/
export CUDA_HOME=/usr/local/cuda-10.0/
export PATH=${DARKNET_HOME}:${CUDA_HOME}bin:${PATH}
# Reload
source ~/.bashrc
```
* Download YOLO3-4-Py, configure it, build it, and run the sample program:
```
git clone https://github.com/madhawav/YOLO3-4-Py
cd YOLO3-4-Py
# Enable the GPU build flags: run these two commands in the terminal
export GPU=1
export OPENCV=1
# Build and install
python3 setup.py build_ext --inplace
pip3 install .
```
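* A quick check that the GPU-enabled build imports cleanly (YOLO3-4-Py installs a module named pydarknet):
```
python3 -c "from pydarknet import Detector, Image; print('pydarknet OK')"
```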
* Run the sample program:
```
python3 webcam_demo.py
```
* The webcam sample hard-codes a few config files: yolov3.cfg, yolov3.weights, coco.data
* Remember to change them to the tiny-YOLO files used above (cfg/yolov3-tiny.cfg and yolov3-tiny.weights), and double-check the file paths:
```
net = Detector(bytes("cfg/yolov3.cfg", encoding="utf-8"),
               bytes("weights/yolov3.weights", encoding="utf-8"), 0,
               bytes("cfg/coco.data", encoding="utf-8"))
```
* Test results:
    * The first time I ran this Python sample, the GPU flags weren't set correctly, so YOLO ran on the CPU alone at roughly 4 seconds per frame.
    * After setting the GPU flags properly, a frame took about 0.14 s; capped at 3 FPS it feels visually acceptable to me (purely from a surveillance standpoint XD).
## Testing the CSI Camera Feed
* Run this command in a terminal to view the camera feed:
```
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=3820, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e
```
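* If the full-resolution pipeline is sluggish, a lighter variant using the sensor's 1280x720@60 mode also works; it only swaps the source caps in the pipeline above (the exact caps are an assumption for the common IMX219/Pi v2 camera):
```
gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280, height=720, framerate=60/1, format=NV12' ! nvvidconv flip-method=0 ! 'video/x-raw,width=960, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e
```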
## References
* https://me.csdn.net/beckhans
* https://www.jetsonhacks.com/2019/04/19/jetson-nano-serial-console/
* https://pjreddie.com/darknet/yolo/
* https://denor.jp/jetson-nano%E3%81%A7gpu%E3%81%A8opencv%E3%81%8C%E6%9C%89%E5%8A%B9%E3%81%AAyolo%E3%82%92%E3%83%93%E3%83%AB%E3%83%89%E3%81%99%E3%82%8B%E3%81%AB%E3%81%AF
* https://chtseng.wordpress.com/2018/10/08/%E5%A6%82%E4%BD%95%E5%9C%A8python%E7%A8%8B%E5%BC%8F%E4%B8%AD%E4%BD%BF%E7%94%A8yolo/
* https://github.com/madhawav/YOLO3-4-Py