# [09/12] Jetson Nano
## 10:30 - 11:50
### Jetson Nano Installation and Testing
#### [Jetson-stats](https://github.com/rbonghi/jetson_stats)
```
$ sudo apt install python-pip python3-pip
$ sudo pip3 install jetson-stats
# Reboot to finish the installation
$ sudo reboot
# Open the jtop monitoring interface
$ jtop
# Show the status and all information about your NVIDIA Jetson Nano
$ jetson_release
# Show the Jetson-related environment variables set by jetson-stats
$ export | grep JETSON
# Ubuntu version
$ lsb_release -a
```
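jetson-stats also ships a Python API, so the same readings can be consumed from a script. A minimal sketch, assuming the package above is installed and its service is running:
```python
# Minimal sketch of the jetson-stats Python API
# (assumes `sudo pip3 install jetson-stats` and a reboot, as above).
from jtop import jtop

with jtop() as jetson:
    # jetson.ok() is True while the connection to the jtop service is alive
    if jetson.ok():
        # jetson.stats is a flat dict of current readings (CPU, GPU, temps, power, ...)
        for key, value in jetson.stats.items():
            print(f"{key}: {value}")
```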
#### CUDA
```
# Copy the CUDA samples to your home directory
cp -a /usr/local/cuda-10.2/samples/ ~/
```
```
# Test CUDA using deviceQuery test tool
cd ~/samples/1_Utilities/deviceQuery
make
./deviceQuery
# Run the oceanFFT CUDA sample
cd ~/samples/5_Simulations/oceanFFT
make
./oceanFFT
# Run the smokeParticles CUDA sample (takes about a minute to compile)
cd ~/samples/5_Simulations/smokeParticles/
make
./smokeParticles
# Run n-body simulation
cd ~/samples/5_Simulations/nbody/
make
./nbody
# Follow the same steps to build and run the other samples
```
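Besides the C++ samples, you can sanity-check CUDA from Python. A minimal sketch, assuming NVIDIA's PyTorch wheel for Jetson is installed (a separate download, not part of the stock JetPack image):
```python
# Quick CUDA sanity check from Python
# (assumes the NVIDIA PyTorch wheel for Jetson is installed).
import torch

print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    # Run a tiny matrix multiply on the GPU to confirm the device actually works
    a = torch.rand(256, 256, device="cuda")
    b = torch.rand(256, 256, device="cuda")
    print("Result checksum:", (a @ b).sum().item())
```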
#### VisionWorks
```
# For more practical GPU usage, here are some VisionWorks demonstrations.
# Ref: https://developer.nvidia.com/embedded/visionworks
# Copy samples
mkdir ~/visionworks
/usr/share/visionworks/sources/install-samples.sh ~/visionworks/
# Hough detection
cd ~/visionworks/VisionWorks-1.6-Samples/demos/hough_transform/
make
~/visionworks/VisionWorks-1.6-Samples/bin/aarch64/linux/release/nvx_demo_hough_transform
# Motion estimation
cd ~/visionworks/VisionWorks-1.6-Samples/demos/motion_estimation/
make
~/visionworks/VisionWorks-1.6-Samples/bin/aarch64/linux/release/nvx_demo_motion_estimation
# Follow the same steps to build and run the other demos
```
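The VisionWorks demos are C/C++, but OpenCV (bundled with JetPack) exposes the same Hough primitive from Python. A rough analogue of the Hough demo above; `road.jpg` is a hypothetical input path:
```python
# Rough Python analogue of the VisionWorks Hough demo, using OpenCV's
# probabilistic Hough transform. "road.jpg" is a placeholder input path.
import cv2
import numpy as np

img = cv2.imread("road.jpg")
assert img is not None, "road.jpg not found"
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
# Returns line segments as (x1, y1, x2, y2)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                        minLineLength=30, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imwrite("road_hough.jpg", img)
```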
---
## 15:20 - 16:00
### Transfer Learning --- Classification
#### 0. Environment
```
$ git clone --recursive https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ docker/run.sh
# To exit the container
$ exit
```
#### 1. Download the Cat/Dog dataset
```
$ cd jetson-inference/python/training/classification/data
# Download the dataset
$ wget https://nvidia.box.com/shared/static/o577zd8yp3lmxf5zhm38svrbrv45am3y.gz -O cat_dog.tar.gz
# Untar the dataset
$ tar xvzf cat_dog.tar.gz
```
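Before training, it is worth confirming the extracted layout. A minimal sketch that counts images per split, assuming the tarball unpacks to the tutorial's `cat_dog/{train,val,test}/{cat,dog}` structure:
```python
# Count images per split/class to sanity-check the extracted dataset.
# Assumes the layout: cat_dog/{train,val,test}/{cat,dog}/*.jpg
import os

root = "cat_dog"
for split in ("train", "val", "test"):
    for cls in ("cat", "dog"):
        path = os.path.join(root, split, cls)
        n = len(os.listdir(path)) if os.path.isdir(path) else 0
        print(f"{split}/{cls}: {n} images")
```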
#### 2. Train the model
```
$ cd jetson-inference/python/training/classification
$ python3 train.py --model-dir=models/cat_dog data/cat_dog
```
>* Note: Press Ctrl+C to stop training.
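Under the hood, `train.py` performs standard PyTorch transfer learning: load an ImageNet-pretrained backbone, swap the classification head, and retrain. A minimal illustrative sketch of that idea (not the script's actual code):
```python
# Minimal transfer-learning sketch (illustrative, not the actual train.py code).
import torch
import torchvision

# Load a ResNet-18 pretrained on ImageNet
model = torchvision.models.resnet18(pretrained=True)

# Optionally freeze the backbone so only the new head is trained
for param in model.parameters():
    param.requires_grad = False

# Replace the 1000-class ImageNet head with a 2-class (cat/dog) head
model.fc = torch.nn.Linear(model.fc.in_features, 2)

# Only the new head's parameters go to the optimizer
optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.01, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()
```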
#### 3. Load a model trained for 100 epochs
[**Download Link**](https://nvidia.app.box.com/s/zlvb4y43djygotpjn6azjhwu0r3j0yxc)
```
# On the host, copy the downloaded model into the mounted project directory
$ cd ~/Downloads
$ cp cat_dog_epoch_100.tar.gz ~/jetson-inference/python/training/classification/models/cat_dog
# Inside the container, untar the model
$ cd /jetson-inference/python/training/classification/models/cat_dog
$ tar xvzf cat_dog_epoch_100.tar.gz
# Evaluate the model
$ cd /jetson-inference/python/training/classification
$ python3 train.py --resume=models/cat_dog/cat_dog_epoch_100/model_best.pth.tar data/cat_dog --evaluate
```
#### 4. Converting the Model to ONNX
[Open Neural Network eXchange](https://onnx.ai/) (ONNX) is an open standard format for representing machine learning models.
```
$ python3 onnx_export.py --model-dir=models/cat_dog/cat_dog_epoch_100
```
This will create a model called ```resnet18.onnx``` under ```jetson-inference/python/training/classification/models/cat_dog/cat_dog_epoch_100/```.
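To verify the export, the `onnx` package can load and check the graph; the I/O names it reports are the `input_0`/`output_0` blobs used in the next step. A sketch, assuming `pip3 install onnx`:
```python
# Verify the exported model and print its I/O names
# (assumes the `onnx` package: pip3 install onnx).
import onnx

model = onnx.load("models/cat_dog/cat_dog_epoch_100/resnet18.onnx")
onnx.checker.check_model(model)  # raises if the graph is malformed

print("Inputs: ", [i.name for i in model.graph.input])    # expect ['input_0']
print("Outputs:", [o.name for o in model.graph.output])   # expect ['output_0']
```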
#### 5. Processing Images with TensorRT
[TensorRT](https://developer.nvidia.com/tensorrt) is an SDK for high-performance deep learning inference.
```
$ imagenet.py --model=models/cat_dog/cat_dog_epoch_100/resnet18.onnx --input_blob=input_0 --output_blob=output_0 --labels=data/cat_dog/labels.txt /jetson-inference/data/images/cat_1.jpg /jetson-inference/data/images/test/cat_1.jpg
```
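`imagenet.py` is a thin wrapper around the `jetson_inference` Python bindings, so the same classification can be done directly in a script. A sketch using the `imageNet` class:
```python
# Classify a single image with the jetson_inference Python bindings
# (the same API that imagenet.py wraps).
from jetson_inference import imageNet
from jetson_utils import loadImage

net = imageNet(model="models/cat_dog/cat_dog_epoch_100/resnet18.onnx",
               labels="data/cat_dog/labels.txt",
               input_blob="input_0", output_blob="output_0")

img = loadImage("/jetson-inference/data/images/cat_1.jpg")
class_id, confidence = net.Classify(img)
print(f"{net.GetClassDesc(class_id)} ({confidence * 100:.1f}%)")
```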
#### 6. Running the Live Camera with TensorRT
```
$ imagenet.py --model=models/cat_dog/cat_dog_epoch_100/resnet18.onnx --input_blob=input_0 --output_blob=output_0 --labels=data/cat_dog/labels.txt /dev/video0
```
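For the live camera, the bindings' `videoSource`/`videoOutput` streams give the same loop that `imagenet.py` runs internally. A sketch, assuming a V4L2 camera at `/dev/video0`:
```python
# Live camera classification loop using the jetson_utils video streams.
from jetson_inference import imageNet
from jetson_utils import videoSource, videoOutput

net = imageNet(model="models/cat_dog/cat_dog_epoch_100/resnet18.onnx",
               labels="data/cat_dog/labels.txt",
               input_blob="input_0", output_blob="output_0")
camera = videoSource("/dev/video0")    # V4L2 USB camera
display = videoOutput("display://0")   # on-screen window

while display.IsStreaming():
    img = camera.Capture()
    if img is None:                    # capture timeout
        continue
    class_id, confidence = net.Classify(img)
    display.SetStatus(f"{net.GetClassDesc(class_id)} {confidence * 100:.1f}%")
    display.Render(img)
```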
#### 7. Download the models
```
$ cd /jetson-inference/tools
$ ./download-models.sh
# The downloaded models are stored in /jetson-inference/data/networks
```
---