# 【ML/DL】Jetson AGX Orin Development Notes

## Appearance

![image](https://hackmd.io/_uploads/SkO1GQCRkg.png =50%x)

## Technical Specifications

![image](https://hackmd.io/_uploads/SJoDtHjC1l.png)

## JetPack SDK Introduction

JetPack SDK is NVIDIA's software development kit for Jetson modules; it lets developers build AI-related software as quickly as possible. It consists of:

1. Jetson Linux
2. Jetson AI Stack: a collection of AI-accelerated packages covering multimedia, graphics, and computer vision
   * [Metropolis](https://www.nvidia.com/en-us/autonomous-machines/intelligent-video-analytics-platform/)
   * [Isaac](https://developer.nvidia.com/isaac)
   * [Holoscan](https://developer.nvidia.com/holoscan-sdk)
3. Jetson Platform Services

### Key Features

1. Jetson Linux: includes the Linux kernel, UEFI bootloader, NVIDIA drivers, and more
2. Jetson Platform Services: prebuilt, cloud-native software services for quickly building generative AI and other edge applications
3. TensorRT: a model inference engine, a software library built specifically for NVIDIA GPUs
4. CUDA: NVIDIA's hardware/software computing platform that lets developers run computation on the GPU
5. cuDNN: a deep neural network acceleration library used to speed up training and inference
6. Multimedia API: GStreamer
7. Computer Vision: VPI (Vision Programming Interface)
   - A software library providing computer vision algorithms that can be accelerated on the GPU
   - Dynamic Remap: CUDA-supported geometric transforms
8. Graphics
   - Vulkan
   - OpenGL
9. Nsight Developer Tools

## JetPack Install

Installing the NVIDIA JetPack SDK automatically installs CUDA, cuDNN, TensorRT, and OpenCV.

> The OpenCV that JetPack installs is built without CUDA support; to get CUDA support you must build and install it manually.

```bash!
sudo apt update
sudo apt dist-upgrade
sudo reboot
sudo apt install nvidia-jetpack
```

## Jetson CLI

### Jetson Stats

A tool for monitoring Jetson system information and utilization:

```bash!
sudo apt install python3-pip
sudo -H pip3 install -U jetson-stats
```

### Jtop

Shows CPU and GPU utilization plus memory usage:

```bash!
sudo jtop
```

![image](https://hackmd.io/_uploads/SJU804AR1x.png)

### Jetson Release

Shows information about the NVIDIA Jetson platform:

* `-v` show all variables
* `-s` show the serial number

```bash!
sudo jetson_release -v
```

![image](https://hackmd.io/_uploads/rJfncVRRJl.png)

### Power Model and Clock

Set the Jetson's power mode to mode 0, which usually means the **highest-performance** or **default** power mode (the exact meaning depends on the device), then lock the clocks to their maximum:

```bash
sudo nvpmodel -m 0
sudo jetson_clocks
```

## VNC Setting

Reference
- [nvidia-vnc-setup](https://developer.nvidia.com/embedded/learn/tutorials/vnc-setup)
- [Remote Control Jetson Xavier NX via VNC](/x7vkdHNkRCKJ5b-_4S9KhQ)

## Build OpenCV with CUDA

Clone the GitHub project below and run its script to build and install OpenCV with CUDA support:

- [OpenCV build script for Tegra](https://github.com/mdegans/nano_build_opencv/tree/master)

![opencv_cude_successful](https://hackmd.io/_uploads/SkZmKmsR1e.png)

## GStreamer

### Check devices

```bash
v4l2-ctl --list-devices
```

![image](https://hackmd.io/_uploads/rJShyHR0yx.png)

### List supported camera formats

```bash
v4l2-ctl --device=/dev/video0 --list-formats-ext
```

![image](https://hackmd.io/_uploads/ryIGgBCC1g.png)

### Open Camera and Show Image

```bash!
gst-launch-1.0 v4l2src device=/dev/video0 ! 'image/jpeg, width=640, height=480, framerate=30/1' ! jpegdec ! autovideosink
```

### Shared Memory Sink

Use GStreamer to push the camera frames into shared memory:

```bash!
gst-launch-1.0 v4l2src device=/dev/video0 ! image/jpeg, width=640, height=480 ! jpegdec ! videoconvert ! video/x-raw, format=BGR ! shmsink socket-path=/tmp/camfeed.shm sync=false wait-for-connection=false
```

Then open the shared memory segment with OpenCV and display the frames:

```cpp!
#include <iostream>
#include <opencv2/opencv.hpp>

int main(int argc, char** argv) {
    try {
        std::string pipeline =
            "shmsrc socket-path=/tmp/camfeed.shm do-timestamp=true ! "
            "video/x-raw, format=BGR, width=640, height=480, framerate=30/1 "
            "! videoconvert ! appsink";
        cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
        if (!cap.isOpened()) {
            std::cerr << "Cannot open shared memory stream!" << std::endl;
            return -1;
        }
        cv::Mat frame;
        while (true) {
            if (!cap.read(frame)) {
                std::cerr << "Failed to read frame!"
                          << std::endl;
                break;
            }
            cv::imshow("Shared Memory Camera Feed", frame);
            if (cv::waitKey(1) == 27) break; // esc
        }
    } catch (const cv::Exception& e) {
        std::cerr << e.what() << "\n";
    }
    return 0;
}
```

## NVIDIA Performance Primitives (NPP)

A set of highly optimized GPU-accelerated libraries from NVIDIA for image processing, computer vision, and other data-intensive workloads. Below is a simple example that converts a color image to grayscale; the pattern is the usual one: allocate device memory, copy the input to the device, run the operation, then copy the result back to host memory.

### RGB2Gray NPP Example

```cpp!
#include <cuda_runtime.h>
#include <npp.h>
#include <opencv2/opencv.hpp>

// rgbImage is an 8-bit, 3-channel cv::Mat loaded beforehand (e.g. with cv::imread)
const int width = rgbImage.cols;
const int height = rgbImage.rows;
const int srcStep = width * 3; // bytes per row, 3 channels
const int dstStep = width;     // bytes per row, 1 channel

Npp8u* gpuSrc = nullptr;
Npp8u* gpuDst = nullptr;
cudaMalloc(&gpuSrc, sizeof(Npp8u) * height * srcStep);
cudaMalloc(&gpuDst, sizeof(Npp8u) * height * dstStep);
cudaMemcpy(gpuSrc, rgbImage.data, sizeof(Npp8u) * height * srcStep, cudaMemcpyHostToDevice);

NppiSize roi = {width, height};
nppiRGBToGray_8u_C3C1R(gpuSrc, srcStep, gpuDst, dstStep, roi);

cv::Mat grayImage(height, width, CV_8UC1);
cudaMemcpy(grayImage.data, gpuDst, sizeof(Npp8u) * height * dstStep, cudaMemcpyDeviceToHost);
cudaFree(gpuSrc);
cudaFree(gpuDst);
```

## TensorRT

The `trtexec` tool converts a `.onnx` model into an `engine` file that runs on the Jetson; this is TensorRT's own serialized model format:

```bash!
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```

## YOLO Inference

### Setup CMakeLists.txt

```cmake!
cmake_minimum_required(VERSION 3.16.3)
project(object_detect LANGUAGES CXX CUDA)

# --------------------- Basic Configuration ---------------------
enable_language(CUDA)
.....

# --------------------- Package Discovery ---------------------
# CUDA
find_package(CUDAToolkit REQUIRED)
message(STATUS "CUDA Include Path: ${CUDAToolkit_INCLUDE_DIRS}")

# TensorRT (Jetson default location)
set(TENSORRT_INCLUDE_DIR /usr/include/aarch64-linux-gnu)
set(TENSORRT_LIB_DIR /usr/lib/aarch64-linux-gnu)
set(TENSORRT_LIBS nvinfer nvinfer_plugin nvparsers nvonnxparser)
message(STATUS "TensorRT Include Path: ${TENSORRT_INCLUDE_DIR}")
```

## Reference

1. [nvidia-vpi](https://developer.nvidia.com/embedded/vpi)