{%hackmd SybccZ6XD %}
# Jetson Nano setup
- Account and password
  - Email: ntustuser701@gmail.com / Q——y1——3!
  - System: user / 123
# SDK Manager installation (Ubuntu 18.04 or 20.04 required)
https://docs.nvidia.com/jetson/jetpack/install-jetpack/index.html
Reference: https://docs.nvidia.com/sdk-manager/install-with-sdkm-jetson/index.html

> Set up an Ubuntu 18.04 machine as the **HOST**.
> From the host, connect a USB cable to the Jetson and install through SDK Manager.
> **Notes:**
> 1. When flashing, the Jetson must be switched into recovery mode (the switch next to the micro-USB port).
> 2. After flashing, switch it back, or the Jetson will not power on/off normally.
# M.2 Format
https://www.forecr.io/blogs/bsp-development/change-root-file-system-to-m-2-ssd-directly
The drive was not readable by the system, so format it as ext4.
Linux formatting tool:
> gnome-disks
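The same thing can be done from the command line (a sketch, not from the original guide). On the Nano the SSD usually appears as `/dev/nvme0n1` — check the real device name with `lsblk` first, since `mkfs` erases the target. For safety, this demo formats a loopback image file instead of a real disk:

```shell
# CLI alternative to gnome-disks. Check the real device name with `lsblk`
# before ever pointing mkfs at actual hardware -- it destroys the data.
truncate -s 64M /tmp/ssd-demo.img      # stand-in for e.g. /dev/nvme0n1p1
mkfs.ext4 -F -q /tmp/ssd-demo.img      # -F allows a non-block-device target
file /tmp/ssd-demo.img                 # should report an ext4 filesystem
```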
# Reference (Hello AI World):
https://github.com/dusty-nv/jetson-inference#system-setup
# Remote access
Step 1: Install the required packages on the Jetson Nano:
> sudo apt-get update
> sudo apt-get install nano
> sudo apt-get install xrdp
> sudo apt-get install xfce4
> echo xfce4-session > ~/.xsession

Step 2:
> sudo nano /etc/xrdp/startwm.sh
> Add `startxfce4` as the last line, then Ctrl-X, Y, Enter (save and exit)
> sudo service xrdp restart
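Before connecting, you can sanity-check that xrdp is actually up by looking for a listener on its default TCP port 3389 (a quick check, not part of the original steps):

```shell
# xrdp listens on TCP port 3389 by default
XRDP_PORT=3389
if ss -ltn | grep -q ":$XRDP_PORT"; then
    echo "xrdp is listening on $XRDP_PORT"
else
    echo "no listener on $XRDP_PORT -- check 'sudo service xrdp status'"
fi
```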
Step 3: Check the Nano's IP and connect:
> IP(nano): 140.118.115.36
> Username: user
> Password: 123

# System info
> sudo pip3 install -U jetson-stats
> jtop
https://github.com/rbonghi/jetson_stats
Run the fan at full speed manually:
> sudo sh -c 'echo 255 > /sys/devices/pwm-fan/target_pwm'
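The sysfs path above can be wrapped in a small guarded script so it fails gracefully on machines without the fan node (the wrapper itself is just a convenience sketch; the PWM path is the one from the command above):

```shell
#!/bin/sh
# Set the Nano fan PWM duty cycle: 0 = off, 255 = full speed.
FAN=/sys/devices/pwm-fan/target_pwm
SPEED=${1:-255}
if [ -w "$FAN" ]; then
    echo "$SPEED" > "$FAN"
    echo "fan PWM set to $SPEED"
else
    echo "cannot write $FAN (run with sudo on a Jetson Nano)" >&2
fi
```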
# Camera check
IPCam:
```bash
video-viewer rtsp://admin:123@140.118.115.65:554
```
USB Cam:
```bash
video-viewer v4l2:///dev/video0
```
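If `video-viewer` cannot open the USB camera, first confirm the kernel actually sees it (`v4l2-ctl` comes from the `v4l-utils` package, an extra install not covered above):

```shell
# List the V4L2 device nodes; a USB cam usually enumerates as /dev/video0.
DEVS=$(ls /dev/video* 2>/dev/null)
if [ -n "$DEVS" ]; then
    echo "found: $DEVS"
    # with the v4l-utils package installed, dump the supported formats:
    command -v v4l2-ctl >/dev/null && v4l2-ctl -d /dev/video0 --list-formats-ext || true
else
    echo "no V4L2 devices found"
fi
```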
# Testing and problems
1A. Picture (prebuilt Hello AI World): go to ./jetson-inference/.... (the executable is there)

【C++】
```bash=
./imagenet --network=resnet-18 images/jellyfish.jpg images/test/output_jellyfish.jpg
```
1B. Picture (write your own .cpp)
> Reference: https://github.com/dusty-nv/jetson-inference/blob/master/docs/imagenet-example-2.md
> PASS FILE: https://drive.google.com/drive/folders/1Ysa01tHuDvW6usEV5yunuRpEnIpeKq_E?usp=share_link

my-recognition.cpp
```C++
#include <jetson-inference/imageNet.h>
#include <jetson-utils/loadImage.h>

int main( int argc, char** argv )
{
    // a command line argument containing the image filename is expected,
    // so make sure we have at least 2 args (the first arg is the program)
    if( argc < 2 )
    {
        printf("my-recognition: expected image filename as argument\n");
        printf("example usage:  ./my-recognition my_image.jpg\n");
        return 0;
    }

    // retrieve the image filename from the array of command line args
    const char* imgFilename = argv[1];

    // these variables will store the image data pointer and dimensions
    uchar3* imgPtr = NULL;   // shared CPU/GPU pointer to image
    int imgWidth   = 0;      // width of the image (in pixels)
    int imgHeight  = 0;      // height of the image (in pixels)

    // load the image from disk as uchar3 RGB (24 bits per pixel)
    if( !loadImage(imgFilename, &imgPtr, &imgWidth, &imgHeight) )
    {
        printf("failed to load image '%s'\n", imgFilename);
        return 0;
    }

    // load the GoogleNet image recognition network with TensorRT
    // you can use imageNet::RESNET_18 to load ResNet-18 model instead
    imageNet* net = imageNet::Create(argc, argv);

    // check to make sure that the network model loaded properly
    if( !net )
    {
        printf("failed to load image recognition network\n");
        return 0;
    }

    // this variable will store the confidence of the classification (between 0 and 1)
    float confidence = 0.0;

    // classify the image, return the object class index (or -1 on error)
    const int classIndex = net->Classify(imgPtr, imgWidth, imgHeight, &confidence);

    // make sure a valid classification result was returned
    if( classIndex >= 0 )
    {
        // retrieve the name/description of the object class index
        const char* classDescription = net->GetClassDesc(classIndex);

        // print out the classification results
        printf("image is recognized as '%s' (class #%i) with %f%% confidence\n",
               classDescription, classIndex, confidence * 100.0f);
    }
    else
    {
        // if Classify() returned < 0, an error occurred
        printf("failed to classify image\n");
    }

    // free the network's resources before shutting down
    delete net;
    return 0;
}
```
CMakeLists.txt
```cmake
# require CMake 2.8 or greater
cmake_minimum_required(VERSION 2.8)
# declare my-recognition project
project(my-recognition)
# import jetson-inference and jetson-utils packages.
# note that if you didn't do "sudo make install"
# while building jetson-inference, this will error.
find_package(jetson-utils)
find_package(jetson-inference)
# CUDA is required
find_package(CUDA)
# add directory for libnvbuf-utils to program
link_directories(/usr/lib/aarch64-linux-gnu/tegra)
# compile the my-recognition program
cuda_add_executable(my-recognition my-recognition.cpp)
# link my-recognition to jetson-inference library
target_link_libraries(my-recognition jetson-inference)
```
Result (build with CMake, then run):
```bash
cmake . && make
./my-recognition my_image.jpg
```

- Live Demo
Reference: https://github.com/dusty-nv/jetson-inference/blob/master/docs/imagenet-camera-2.md

2A. Camera detection (Hello AI World, USB)

【C++】
```bash
./imagenet /dev/video0
```
Result:

2B. Camera detection (your own model)
- Load ONNX (check later)
https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-ssd.md#processing-images-with-tensorrt
```bash
detectnet --model=models/fruit/ssd-mobilenet.onnx --labels=models/fruit/labels.txt \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
          csi://0
```
# TO DO
1. Run the live camera with detection (pretrained model)
2. Test images and video with our simple model (.pth -> .onnx)
    > How to load the model in C++:
    > https://forums.developer.nvidia.com/t/how-to-load-object-detection-custom-data-trained-model-in-c/226849
    > PyTorch to ONNX:
    > https://d246810g2000.medium.com/nvidia-jetson-nano-for-jetpack-4-4-03-%E8%BD%89%E6%8F%9B%E5%90%84%E7%A8%AE%E6%A8%A1%E5%9E%8B%E6%A1%86%E6%9E%B6%E5%88%B0-onnx-%E6%A8%A1%E5%9E%8B-17adcece9c34
3. Run the live camera with our model
4. Library reference: https://rawgit.com/dusty-nv/jetson-inference/master/docs/html/files.html
# Notes
pytorch-cpp:
https://github.com/prabhuomkar/pytorch-cpp
https://jumpml.com/howto-pytorch-c++/output/