# Jetson Nano Day 1
Contact: lytsao@gapp.nthu.edu.tw
## 11:30 - 12:00
### Jetson Nano Installation and Testing [[Demo Video]](https://drive.google.com/file/d/1Oay505A_k2u9n_3QTP2JEemY5nWif1y8/view?usp=sharing)
* Connect all peripherals (mouse, keyboard, HDMI, etc.) to the Jetson Nano first, then plug in the power supply to boot
* To paste a command into the terminal, press Ctrl + **Shift** + V
[jetson-stats](https://github.com/rbonghi/jetson_stats)
```
$ sudo apt install python-pip python3-pip
$ sudo pip3 install jetson-stats # If this doesn't work, reboot first (run the reboot command in the terminal)
# Open the jtop prompt interface
$ jtop
# Show the status and all information about your NVIDIA Jetson NANO
$ jetson_release
# Another command for showing the status of Jetson NANO
$ export | grep JETSON
# Ubuntu version
$ lsb_release -a
```
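As an aside, if `lsb_release` is unavailable, the same version string can be read from `/etc/os-release`. A minimal Python sketch of the parsing; the `parse_os_release` helper is illustrative, not part of jetson-stats:

```python
# Parse /etc/os-release-style "KEY=value" text into a dict.
def parse_os_release(text):
    info = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")
        info[key] = value.strip('"')
    return info

# On a real system you would read open("/etc/os-release").read() instead.
sample = '''NAME="Ubuntu"
VERSION="18.04.6 LTS (Bionic Beaver)"
ID=ubuntu
PRETTY_NAME="Ubuntu 18.04.6 LTS"
'''
print(parse_os_release(sample)["PRETTY_NAME"])  # → Ubuntu 18.04.6 LTS
```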
### CUDA
```
# Copy cuda samples to root directory
cp -a /usr/local/cuda-10.2/samples/ ~/
```
```
# Test CUDA using deviceQuery test tool
cd ~/samples/1_Utilities/deviceQuery
make
./deviceQuery
# Run CUDA Sample, OceanFFT
cd ~/samples/5_Simulations/oceanFFT
make
./oceanFFT
# Run CUDA Sample, smokeParticles (compiling takes about 1 min)
cd ~/samples/5_Simulations/smokeParticles/
make
./smokeParticles
# Run n-body simulation
cd ~/samples/5_Simulations/nbody/
make
./nbody
# You can follow the steps above to build and run the other samples
```
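Each sample above follows the same cd / make / run pattern, so the loop can be scripted. A hedged sketch that only generates the commands; paths assume the samples were copied to `~/samples` as in the first step, and each binary name matches its sample directory name (true for the samples listed here):

```python
import os

# Build the "cd <dir> && make && ./<binary>" command line for each CUDA sample.
def sample_commands(sample_subdirs, root="~/samples"):
    cmds = []
    for sub in sample_subdirs:
        d = os.path.join(root, sub)
        cmds.append(f"cd {d} && make && ./{os.path.basename(sub)}")
    return cmds

for cmd in sample_commands(["1_Utilities/deviceQuery", "5_Simulations/nbody"]):
    print(cmd)
```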
### VisionWorks
```
# For more practical usage of the GPU, here are some VisionWorks demonstrations.
# Ref: https://developer.nvidia.com/embedded/visionworks
# Back to the home directory
cd ~
# Copy samples
mkdir ~/visionworks
/usr/share/visionworks/sources/install-samples.sh ~/visionworks/
# Hough detection
cd ~/visionworks/VisionWorks-1.6-Samples/demos/hough_transform/
make
~/visionworks/VisionWorks-1.6-Samples/bin/aarch64/linux/release/nvx_demo_hough_transform
# Motion estimation
cd ~/visionworks/VisionWorks-1.6-Samples/demos/motion_estimation/
make
~/visionworks/VisionWorks-1.6-Samples/bin/aarch64/linux/release/nvx_demo_motion_estimation
# You can follow the steps above to build and run the other demos
```
---
## 15:20 - 16:00
### Transfer Learning — Classification [[Demo Video](https://drive.google.com/file/d/1Mwd0tXlt1toAAWgJVdvVPGMSEuFrRqSw/view?usp=sharing)]
**0. Environment**
```
$ git clone --recursive https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ docker/run.sh
# exit docker
$ exit
```
**1. Download the Cat/Dog dataset**
```
$ cd ~/jetson-inference/python/training/classification/data
# Download the dataset
$ wget https://nvidia.box.com/shared/static/o577zd8yp3lmxf5zhm38svrbrv45am3y.gz -O cat_dog.tar.gz
# Untar the dataset
$ tar xvzf cat_dog.tar.gz
```
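The archive unpacks into train/val/test folders with one subfolder per class, the standard torchvision ImageFolder layout that the training script relies on. A quick sketch of that structure using an empty mock tree:

```python
import os, tempfile

# Recreate the cat_dog directory layout (train/val/test, one folder per class)
root = tempfile.mkdtemp()
for split in ("train", "val", "test"):
    for cls in ("cat", "dog"):
        os.makedirs(os.path.join(root, split, cls))

# The class list is inferred from the subfolder names under train/
classes = sorted(os.listdir(os.path.join(root, "train")))
print(classes)  # → ['cat', 'dog']
```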
**2. Train the model**
```
# Environment
$ cd ~/jetson-inference
$ docker/run.sh
$ cd python/training/classification
$ python3 train.py --model-dir=models/cat_dog data/cat_dog
```
> Note: press Ctrl+C to stop training
**3. Load the trained 100-epoch model**
[Download Link](https://nvidia.app.box.com/s/zlvb4y43djygotpjn6azjhwu0r3j0yxc)
```
# exit docker
$ exit
# Copy the model
$ sudo cp ~/Downloads/cat_dog_epoch_100.tar.gz ~/jetson-inference/python/training/classification/models/cat_dog
# Untar the model
$ cd ~/jetson-inference/python/training/classification/models/cat_dog
$ sudo tar xvzf cat_dog_epoch_100.tar.gz
# Environment
$ cd ~/jetson-inference
$ docker/run.sh
# Evaluate the model
$ cd /jetson-inference/python/training/classification
$ python3 train.py --resume=models/cat_dog/cat_dog_epoch_100/model_best.pth.tar data/cat_dog --evaluate
```
**4. Converting the Model to ONNX**
[Open Neural Network eXchange](https://onnx.ai/) (ONNX) is an open standard format for representing machine learning models.
```
$ python3 onnx_export.py --model-dir=models/cat_dog/cat_dog_epoch_100
```
This will create a model called `resnet18.onnx` under `jetson-inference/python/training/classification/models/cat_dog/cat_dog_epoch_100/`
**5. Processing Images with TensorRT**
[TensorRT](https://developer.nvidia.com/tensorrt) is an SDK for high-performance deep learning inference.
```
$ imagenet.py --model=models/cat_dog/cat_dog_epoch_100/resnet18.onnx --input_blob=input_0 --output_blob=output_0 --labels=data/cat_dog/labels.txt /jetson-inference/data/images/cat_1.jpg /jetson-inference/data/images/test/cat_1.jpg
```
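The `--labels` file holds one class name per line, and the network's highest-scoring output index selects the matching line. A small illustrative sketch; `index_to_label` is hypothetical, not a function from imagenet.py:

```python
# Map a classifier's highest-scoring output index to its label name.
def index_to_label(scores, labels):
    best = max(range(len(scores)), key=lambda i: scores[i])
    return labels[best]

labels = ["cat", "dog"]  # i.e. the lines of data/cat_dog/labels.txt
print(index_to_label([0.13, 0.87], labels))  # → dog
```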
**6. Running the Live Camera with TensorRT**
```
$ imagenet.py --model=models/cat_dog/cat_dog_epoch_100/resnet18.onnx --input_blob=input_0 --output_blob=output_0 --labels=data/cat_dog/labels.txt /dev/video0
```
**7. Download the models**
```
$ cd /jetson-inference/tools
$ ./download-models.sh
# Your models are stored in /jetson-inference/data/networks
```
---
## Jetson Nano Image Installation
**1. Enter the [Jetson Download Center](https://developer.nvidia.com/embedded/downloads)**

**2. Choose "Product" and set the filter to "Jetson Nano"**
<img src="https://i.imgur.com/BIchhTM.png" height="350px" width="200px" ><br />
**3. Download "Jetson Nano Developer Kit SD Card Image"**
