# Accelerate multi-streaming cameras with DeepStream and deploy custom (YOLO) models

###### tags: `Edge AI` `DeepStream` `Edge_AI` `deployment` `Nvidia` `Jetson`

![](https://i.imgur.com/S5QFcnE.png =800x)

## Notes on deployment for the NVIDIA Jetson platform

### Basic environment setup
- [Jetson AGX Xavier system setup 1: connecting and installing from Windows 10](https://hackmd.io/@YungHuiHsu/HJ2lcU4Rj)
- [Jetson AGX Xavier system setup 2: installing Docker or building from source](https://hackmd.io/k-lnDTxVQDWo_V13WEnfOg)
- [NVIDIA Container Toolkit installation notes](https://hackmd.io/wADvyemZRDOeEduJXA9X7g)
- [Checking system performance on Jetson edge devices with jtop](https://hackmd.io/VXXV3T5GRIKi6ap8SkR-tg)
- [Jetson Network Setup](https://hackmd.io/WiqAB7pLSpm2863N2ISGXQ)
- [OpenCV turns on CUDA acceleration on the NVIDIA Jetson platform](https://hackmd.io/6IloyiWMQ_qbIpIE_c_1GA)

### Model deployment and acceleration
- [[Object Detection_YOLO] YOLOv7 paper notes](https://hackmd.io/xhLeIsoSToW0jL61QRWDcQ)
- [Deploy YOLOv7 on Nvidia Jetson](https://hackmd.io/kZftj6AgQmWJsbXsswIwEQ)
- [Convert PyTorch model to TensorRT for 3-6x speedup](https://hackmd.io/_oaJhYNqTvyL_h01X1Fdmw?both)
- [Accelerate multi-streaming cameras with DeepStream and deploy custom (YOLO) models](https://hackmd.io/@YungHuiHsu/rJKx-tv4h)
- [Use Deepstream python API to extract the model output tensor and customize model post-processing (e.g., YOLO-Pose)](https://hackmd.io/@YungHuiHsu/rk41ISKY2)
- [Model Quantization Note](https://hackmd.io/riYLcrp1RuKHpVI22oEAXA)

---

### yolov7 with multiple cameras running on DeepStream

{%youtube 5BVrPbOyNHE%}

---

## GitHub: ready to use

- [github/deepstream-yolov7](https://github.com/YunghuiHsu/deepstream_python_apps/tree/master/apps/deepstream-rtsp-in-rtsp-out)
- [github/deepstream-yolo-pose](https://github.com/YunghuiHsu/deepstream-yolo-pose)

![](https://github.com/YunghuiHsu/deepstream-yolo-pose/blob/main/imgs/Multistream_4_YOLOv8s-pose-3.PNG?raw=true =600x)

---

## Introduction

> NVIDIA DeepStream is a GStreamer-based streaming analytics toolkit for multi-sensor processing and for video, audio, and image understanding. Its goal is to provide a complete stream-processing framework, built on AI and computer vision, for real-time processing and analysis of sensor data.
> - See the official overview: [DeepStream SDK | NVIDIA Developer](https://developer.nvidia.com/deepstream-sdk)

In plain terms, DeepStream is NVIDIA's acceleration toolkit for AI deployment. It speeds up the data transport, processing, and protocol plumbing around your models, and it is especially worth using for multi-stream media processing.

![](https://hackmd.io/_uploads/r19WxcuE2.png =800x)
![](https://hackmd.io/_uploads/B1dXg9_V3.png =800x)

### DeepStream application architecture

[DeepStream Reference Application - deepstream-app](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_ref_app_deepstream.html#application-architecture)

#### DeepStream components

![](https://hackmd.io/_uploads/r1BI7wuB2.png =800x)

NVIDIA DeepStream is an AI framework that helps tap the full potential of NVIDIA GPUs in Jetson and other GPU devices for computer vision. It powers edge devices such as the Jetson Nano and the rest of the Jetson family to process parallel video streams on the device in real time.

DeepStream uses a GStreamer pipeline (written in C) to take the input video into the GPU and process it at higher speed for further processing.

- Components of DeepStream

  DeepStream has a plugin-based architecture. The graph-based pipeline interface lets high-level components be interconnected, and it supports heterogeneous parallel processing on GPU and CPU using multithreading.

- The main DeepStream components and their high-level functions:

| Name | Description |
| ---- | ----------- |
| Metadata | Generated by the graph at every stage. From it we can obtain many important fields, such as the detected object type, ROI coordinates, object classification, source, and so on. |
| Decoder | Decodes the input video (H.264 and H.265). It supports decoding multiple streams simultaneously and takes bit depth and resolution as parameters. |
| Video Aggregator<br>(nvstreammux) | Accepts n input streams and converts them into sequential batched frames. It uses low-level APIs to access the GPU and CPU for this process. |
| Inferencing<br>(nvinfer) | Runs inference with the chosen model. All model-related work is done through nvinfer. It also supports primary and secondary modes and various clustering methods. |
| Format Conversion and Scaling<br>(nvvidconv) | Converts the format from YUV to RGBA/BGRA, scales the resolution, and handles image rotation. |
| Object Tracker<br>(nvtracker) | Uses CUDA and is based on the KLT reference implementation. The default tracker can be replaced with other trackers. |
| Screen Tiler<br>(nvstreamtiler) | Manages the output video, roughly equivalent to OpenCV's imshow. |
| On-Screen Display<br>(nvdsosd) | Manages everything drawn on screen, such as lines, bounding boxes, circles, ROIs, etc. |
| Message Converter and Broker<br>(nvmsgconv + nvmsgbroker) | Used together to send analytics data to a server in the cloud. |

(source: [Nvidia DeepStream – A Simplistic Guide](https://www.datatobiz.com/blog/nvidia-deepstream-guide/))

- Execution flow of NVIDIA DeepStream

  Decoder -> Muxer -> Inference -> Tracker (if any) -> Tiler -> Format Conversion -> On Screen Display -> Sink

A DeepStream application consists of two parts: a configuration file and a driver program (in C or Python). The options available in the config.txt file are documented under [`Configuration Groups`](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_ref_app_deepstream.html#deepstream-reference-application-deepstream-app).
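To make that element chain concrete, here is a minimal sketch of wiring the same order with the GStreamer Python bindings, using the current DeepStream element names. It is illustrative only and not the full sample app: the sources and their request pads, the bus loop, and error handling are omitted, and the config file name is just a placeholder for the YOLOv7 config built later in this note.

```python
# Minimal sketch of the Muxer -> Inference -> Tiler -> Converter -> OSD -> Sink chain.
# The official deepstream_python_apps samples show the complete version with sources and probes.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("deepstream-sketch")

# decodebin / nvurisrcbin would normally feed the muxer; omitted here for brevity
streammux = Gst.ElementFactory.make("nvstreammux", "muxer")          # batches N streams
pgie      = Gst.ElementFactory.make("nvinfer", "primary-inference")  # TensorRT inference
tiler     = Gst.ElementFactory.make("nvmultistreamtiler", "tiler")   # NxM grid of outputs
convert   = Gst.ElementFactory.make("nvvideoconvert", "convert")     # format conversion / scaling
osd       = Gst.ElementFactory.make("nvdsosd", "osd")                # draws boxes and labels
sink      = Gst.ElementFactory.make("nveglglessink", "sink")         # renders on screen

# Placeholder: the YOLOv7 model config described later in this note
pgie.set_property("config-file-path", "dstest1_pgie_yolov7_config.txt")

for elem in (streammux, pgie, tiler, convert, osd, sink):
    pipeline.add(elem)
streammux.link(pgie)
pgie.link(tiler)
tiler.link(convert)
convert.link(osd)
osd.link(sink)
```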
#### Inspecting component capabilities with the GStreamer command line

A basic understanding of GStreamer helps here :slightly_smiling_face:
- [GStreamer introduction and notes](https://hackmd.io/@YungHuiHsu/ryhRTZpt3)

After installing GStreamer, run `gst-inspect-1.0` to inspect a module's capabilities, using the `nvinfer` module as an example:

```
gst-inspect-1.0 nvinfer
```

##### Property definitions and capabilities of the `nvinfer` plugin

![](https://hackmd.io/_uploads/rJPdndRYh.png =800x)

###### `Pad Templates` and `Pads` (data transfer interfaces)
- There are `src` (data-producing) and `sink` (data-consuming) endpoints
- The supported video format is `video/x-raw(memory:NVMM)`, which uses the NVIDIA GPU by default
  - Available formats are `NV12` and `RGBA`
  - Here you can see the model input defaults to `RGBA` channels

![](https://hackmd.io/_uploads/SyRghORYn.png =400x)

###### `Element Properties`

Descriptions are given for many parameters; these are the same items you set later in the config file.

![](https://hackmd.io/_uploads/rJr96_0Fn.png =800x)

- They can also be read or set in code
  - read: `.get_property()`
  - set: `.set_property()`

```
# create "nvinfer" element
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")

# get "batch size"
pgie.get_property("batch-size")

# set "config file"
pgie.set_property("config-file-path", config_file_path)
```

## Getting started with the Python API (bindings)

- [deepstream-get-started-with-python](https://resources.nvidia.com/en-us-deepstream-get-started-with-python/ds-python-sample-app)

The following mainly draws on the official DeepStream Python Apps examples at [NVIDIA-AI-IOT/deepstream_python_apps](https://github.com/NVIDIA-AI-IOT/deepstream_python_apps), which include detailed run instructions.

- DeepStream Python binding diagram

![](https://hackmd.io/_uploads/ByPrf9OEn.png)

### Environment setup

Follow the instructions in [NVIDIA-AI-IOT/deepstream_python_apps/blob/master/bindings/README.md](https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/master/bindings/README.md).

:::spoiler
#### Note on step 1.3: installing the deepstream_python_apps repository

When you reach "1.3 Initialization of submodules", clone the "deepstream_python_apps" repository into the sources directory under the DeepStream root, `<DeepStream 6.2 ROOT>/sources`:

~~I installed DeepStream 6.2; you can locate the installation with `dpkg -L deepstream-6.2`. As expected for a Linux system it lives under /opt/; on my machine it is `/opt/nvidia/deepstream/deepstream-6.2/`.~~

```
cd /opt/nvidia/deepstream/deepstream/sources
git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps
```

#### Note on step 2: errors when compiling the bindings

Following the official instructions up to 2.1 Quick build (x86-ubuntu-20.04 | python 3.8 | Deepstream 6.2):

```
cd deepstream_python_apps/bindings
mkdir build
cd build
cmake ..
make
```

After running `cmake ..` and then `make` as in the default instructions, `dist/pyds-1.1.6-py3-none-linux_x86_64.whl` is produced under `/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/bindings/build`. Running the next step

```
pip3 install ./pyds-1.1.6-py3-none*.whl
```

fails with "ERROR: pyds-1.1.6-py3-none-linux_x86_64.whl is not a supported wheel on this platform."
- Cause of the error
  - The default target platform is wrong
    > x86_64 is for dGPU, please add -DPIP_PLATFORM=linux_aarch64 when executing cmake.
    > by the [NVIDIA developer forum](https://forums.developer.nvidia.com/t/error-pyds-1-1-0-py3-none-linux-x86-64-whl-is-not-a-supported-wheel-on-this-platform/248952)
- Solution
  - Specify the target hardware platform when running cmake; the command should be
    `cmake .. -DPIP_PLATFORM=linux_aarch64`
:::

---

### Sample applications

- [Sample application documentation](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Python_Sample_Apps.html)
  - The official docs provide .py files and configuration files for many scenarios, including chaining different models, multistream input, integration with Triton, and direct inference with local TensorRT.

![](https://hackmd.io/_uploads/SkGTJ0FVh.png =600x)

---

## Model conversion

### Conversion workflow and formats

- NVIDIA's official guide: [Using the TensorRT Runtime API](https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#runtime)
- Taking PyTorch > ONNX > TensorRT as an example
  - model.pt → model.onnx → ==model.engine==
- TensorRT uses two file extensions, `.engine` and `.trt`
  - `.engine`
    - A serialized representation of a TensorRT model; it stores the optimized, compiled model and its parameters
    - The format DeepStream must read
  - `.trt`
    - In practice the same as an .engine file, just a different suffix; the two are interchangeable
    - Cannot be used directly by DeepStream
    - In Python you can read the binary .trt file with modules such as `tensorrt` and `pycuda` and deserialize it for use
- Why the DeepStream config should point `model-engine-file` at the ==.engine== file

:::info
On the first run, the DeepStream app loads the .onnx file given in the config, builds a TensorRT engine, and writes out a model.engine file. Subsequent launches that specify `model-engine-file=model.engine` start much faster.

- In DeepStream, if "onnx-file" is already specified, setting "model-engine-file" is usually optional.
- The "onnx-file" parameter points to the ONNX model, and DeepStream builds the TensorRT engine from it at runtime. This conversion and optimization costs extra time and compute on every launch (20-30 min on an AGX Xavier).

When "model-engine-file" is specified, it points to a pre-built TensorRT engine file. Pre-building means the ONNX → TensorRT conversion has already been done and the engine is stored on disk, so at runtime DeepStream loads it directly without converting and optimizing again. This saves startup time and speeds up execution.

If "model-engine-file" is specified, DeepStream ignores the "onnx-file" setting and loads the pre-built engine directly.
:::

### Converting the YOLOv7 model (ONNX → TensorRT engine)

- See [NVIDIA-AI-IOT/yolo_deepstream/tensorrt_yolov7](https://github.com/NVIDIA-AI-IOT/yolo_deepstream/tree/main/tensorrt_yolov7)
  - NVIDIA provides custom build files and an environment; follow the official instructions to prepare the build environment

#### Getting the model

:::spoiler [NVIDIA-AI-IOT/yolo_deepstream/yolov7_qat](https://github.com/NVIDIA-AI-IOT/yolo_deepstream/tree/main/yolov7_qat)
- [NVIDIA-AI-IOT/yolo_deepstream/yolov7_qat](https://github.com/NVIDIA-AI-IOT/yolo_deepstream/tree/main/yolov7_qat) offers a quantized INT8 version for direct download
  - [x] A model trained with Quantization Aware Training (QAT-INT8) (`yolov7_qat_640.onnx`)
  - [Model Quantization Note](https://hackmd.io/riYLcrp1RuKHpVI22oEAXA)

After downloading, the next step is to convert it to a TensorRT engine on your own hardware platform.
:::

#### Preparing TensorRT engines

:::spoiler convert onnx model (.onnx) to TensorRT engine (.engine)
- The conversion follows [NVIDIA-AI-IOT/yolo_deepstream/tensorrt_yolov7](https://github.com/NVIDIA-AI-IOT/yolo_deepstream/tree/main/tensorrt_yolov7)
- Because multiple cameras will be chained in later, I converted with dynamic batch settings

```
# int8 QAT model, the onnx model with Q&DQ nodes
/usr/src/tensorrt/bin/trtexec --onnx=yolov7_qat_640.onnx \
    --saveEngine=yolov7QAT_640.engine --fp16 --int8

# if you want dynamic batch for batch inference
/usr/src/tensorrt/bin/trtexec --onnx=yolov7_qat_640.onnx \
    --minShapes=images:1x3x640x640 \
    --optShapes=images:12x3x640x640 \
    --maxShapes=images:16x3x640x640 \
    --saveEngine=yolov7QAT_640.engine --fp16 --int8
```

- If the number of downstream streams is fixed
  - Setting '--minShapes', '--optShapes', and '--maxShapes' to the same value reportedly allows NVIDIA's [DLA (Deep Learning Accelerator)](https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#dla_topic) to be enabled (to be confirmed)
:::
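Before handing the engine to DeepStream, it can be worth a quick sanity check that the .engine file deserializes and exposes the expected dynamic batch dimension. The following is a hedged sketch using the TensorRT 8.x Python API that ships with JetPack; the file name matches the `--saveEngine` argument above.

```python
# Sketch: deserialize the TensorRT engine built above and list its bindings.
# Assumes the `tensorrt` Python bindings installed with JetPack (TensorRT 8.x API).
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("yolov7QAT_640.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_bindings):
    name = engine.get_binding_name(i)
    shape = engine.get_binding_shape(i)       # -1 marks the dynamic batch axis
    kind = "input" if engine.binding_is_input(i) else "output"
    print(f"{kind:6s} {name:12s} {tuple(shape)}")
```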
---

## Deploying YOLO-family models with DeepStream

The main workflow follows the official NVIDIA repository [NVIDIA-AI-IOT/yolo_deepstream](https://github.com/NVIDIA-AI-IOT/yolo_deepstream/tree/main/deepstream_yolo).
This unofficial guide is also worth a look: [marcoslucianops/DeepStream-Yolo/customModels](https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/docs/customModels.md)

### Manually modifying the DeepStream Python binding sample and pointing it at a custom model (YOLOv7)

- File layout inside the `/opt/nvidia/deepstream/deepstream` directory

![](https://hackmd.io/_uploads/HJXX2V_B2.png =400x)

The explanation is split into two parts.

#### 1. DeepStream model file layout and custom model compilation

The approach here is to put the custom model under the dedicated models directory so that other projects can reuse it later:
`samples/models/tao_pretrained_models/yolov7/`

- See the instructions in [NVIDIA-AI-IOT/yolo_deepstream/deepstream_yolo](https://github.com/NVIDIA-AI-IOT/yolo_deepstream/tree/main/deepstream_yolo) for the build process

:::spoiler - Detailed build process and file locations
:::spoiler - Getting the build and configuration files

- For readability, `/opt/nvidia/deepstream/deepstream/samples/models/tao_pretrained_models/yolov7` is assigned to the `path_yolov7` variable for later use

```
cd ~/
sudo git clone https://github.com/NVIDIA-AI-IOT/yolo_deepstream.git

path_yolov7=/opt/nvidia/deepstream/deepstream/samples/models/tao_pretrained_models/yolov7
sudo mkdir -p $path_yolov7
sudo cp -vr ~/yolo_deepstream/deepstream_yolo/* $path_yolov7/
```

At this point `nvdsinfer_custom_impl_Yolo/` contains three files:

```
nvdsinfer_custom_impl_Yolo/
├── Makefile                        # Build rules and commands for compiling this custom inference implementation
├── nvdsparsebbox_Yolo.cpp          # C++ source that parses and processes the YOLO model's bounding boxes
├── nvdsparsebbox_Yolo_cuda.cu      # CUDA source executed on the GPU
                                    # Accelerates bounding-box parsing for the YOLO model
                                    # YOLO post-processing (decode yolo result, not including NMS)
```

Enter the `nvdsinfer_custom_impl_Yolo` directory and build:

```
cd $path_yolov7/nvdsinfer_custom_impl_Yolo
sudo make
cd ..
```

After a successful build you get one main .so file and two compiled object files (.o):

```
nvdsinfer_custom_impl_Yolo/
├── Makefile
├── nvdsparsebbox_Yolo.cpp
├── nvdsparsebbox_Yolo_cuda.cu
├── libnvdsinfer_custom_impl_Yolo.so    # Shared library, dynamically linked at runtime;
                                        # provides the custom inference implementation
├── nvdsparsebbox_Yolo_cuda.o           # Compiled CUDA object file, linked with the other
                                        # objects into the final executable or shared library
└── nvdsparsebbox_Yolo.o                # Compiled C++ object file, linked with the other
                                        # objects into the final executable or shared library
```
:::
:::
#### 2. DeepStream app (Python API) and configuration files

The Python binding sample files live in `/opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps`.

##### 2.1 Using [`deepstream_test1_rtsp_in_rtsp_out/`](https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/tree/master/apps/deepstream-rtsp-in-rtsp-out) as the example

- [Modified files can be downloaded here](https://github.com/YunghuiHsu/deepstream_python_apps/blob/master/apps/deepstream-rtsp-in-rtsp-out/README)
  - `deepstream_test1_rtsp_in_rtsp_out_getconf.py`
  - `dstest1_pgie_inferserver_config.txt`

:::spoiler
- Data input
  - This sample can read video files in several formats (.h264, .mp4, .mov, etc.) over RTSP (using `rtsp://` or `file:/`)
- Data output
  - Received over RTSP
    - Locally (viewed directly on the server/Jetson)
      - Open `rtsp://localhost:8554/ds-test` in a browser
    - Remote viewing
      - I use VLC Player for this
      - In "Media / Open Network Stream", enter the address `rtsp://<your_server_ip>:8554/ds-test`
      - See [dusty-nv/jetson-inference/Camera Streaming and Multimedia](https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-streaming.md) for related settings
      - ![](https://hackmd.io/_uploads/S1ZA3yqS2.png =400x)
- Modify [`deepstream_test1_rtsp_in_rtsp_out.py`](https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/master/apps/deepstream-rtsp-in-rtsp-out/deepstream_test1_rtsp_in_rtsp_out.py#305) so the config file path is read from the command line

:::spoiler
```python=305
pgie.set_property("config-file-path", config_file_path)
```
```python=397
def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("-config", "--config-file", default='dstest1_pgie_config.txt',
                        help="Set the config file path", type=str)
    args = parser.parse_args()

    global config_file_path
    config_file_path = args.config_file
```
:::
:::

##### 2.2 Modifying the model configuration file

- Create `dstest1_pgie_yolov7_config.txt` based on [NVIDIA-AI-IOT/yolo_deepstream/deepstream_yolo/config_infer_primary_yoloV7.txt](https://github.com/NVIDIA-AI-IOT/yolo_deepstream/blob/main/deepstream_yolo/config_infer_primary_yoloV7.txt)

:::spoiler pgie_config.txt custom model configuration notes
![](https://hackmd.io/_uploads/HyiRigcB2.png)
- pgie_config.txt custom model configuration notes

The changes described below mainly concern where the model files are stored.

- Custom model file paths
  For readability, your_model_path stands in for the full path `/opt/nvidia/deepstream/deepstream/samples/models/tao_pretrained_models/yolov7/`
    * `onnx-file=your_model_path/yolov7.onnx`
    * `model-engine-file=your_model_path/yolov7.onnx_b16_gpu0_fp16.engine`
      * After loading model.onnx for the first time, DeepStream converts the ONNX model into a TensorRT engine at runtime (20-30 min on an AGX Xavier) and then automatically writes out the named .engine file
    * `labelfile-path=your_model_path/labels.txt`
- Model settings
    * `batch-size`
      * Remember to enable dynamic batching when exporting the model to .onnx; see the earlier note [Enabling dynamic batch when converting to TensorRT](https://hackmd.io/_oaJhYNqTvyL_h01X1Fdmw?both)
      * ~~How large should it be in DeepStream? Based on others' experience, bigger is not always better and it needs real testing, but a good starting point is close to the number of cameras you feed in.~~
      * According to the official performance hints in [DeepStream best practices](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_ref_app_deepstream.html?highlight=batch%20processing#deepstream-best-practices), setting the batch size equal to the number of input sources (batch == sources) gives the best performance gain
    * `parse-bbox-func-name=NvDsInferParseCustomYoloV7_cuda`  # use CUDA for post-processing
    * `custom-lib-path=your_model_path/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so`
      * Point this at the .so built earlier with `make`
      * See the "DeepStream model file layout and custom model compilation" section for the detailed build process and file locations
:::

:::spoiler dstest1_pgie_yolov7_config.txt detailed configuration notes
```plaintext=
[property]
gpu-id=0
# GPU device ID to use; 0 here
net-scale-factor=0.0039215697906911373
# Scaling factor for image preprocessing; maps pixel values to floats between 0 and 1 (equals 1/255)

model-engine-file=../../../../samples/models/tao_pretrained_models/yolov7/yolov7.onnx_b16_gpu0_fp16.engine
# Path and file name of the pre-built TensorRT engine file
onnx-file=../../../../samples/models/tao_pretrained_models/yolov7/yolov7.onnx
# Path and file name of the original ONNX model file
labelfile-path=../../../../samples/models/tao_pretrained_models/yolov7/labels.txt
# Path and file name of the label file; one class label per line

force-implicit-batch-dim=1
# Force an implicit batch dimension of 1, for models without an explicit batch dimension
batch-size=1
#batch-size=12
# Number of images sent to the model per inference
# If the model was not exported with dynamic batch, set this to 1
# For best inference performance the batch size should equal the number of input streams

## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
# Numerical precision of the network; FP16 here
num-detected-classes=80
# Number of object classes the model can detect
gie-unique-id=1
# Unique ID of this DeepStream GIE (GPU Inference Engine)
network-type=0
# Network type; 0 means an object detection model
#is-classifier=0
# Flag for classifier models; commented out, so no classifier is used

## 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None(No clustering)
cluster-mode=2
# Object clustering mode; NMS here
maintain-aspect-ratio=1
# Whether to keep the image aspect ratio; enabled here
symmetric-padding=1
# Whether to pad the image symmetrically; enabled here
## Bilinear Interpolation
scaling-filter=1
# Interpolation method used when scaling images; bilinear here

#parse-bbox-func-name=NvDsInferParseCustomYoloV7
parse-bbox-func-name=NvDsInferParseCustomYoloV7_cuda
# Name of the function that parses detection results; the CUDA version here
#disable-output-host-copy=0
#disable-output-host-copy=1
# Whether to disable copying the output to the host; commented out, so the copy is not disabled
custom-lib-path=../../../../samples/models/tao_pretrained_models/yolov7/nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
# Path and file name of the custom detection implementation library
#scaling-compute-hw=0
# Hardware used for image scaling; commented out, so not set

## start from DS6.2
crop-objects-to-roi-boundary=1
# Whether to crop objects to the ROI boundary; enabled here

[class-attrs-all]
#nms-iou-threshold=0.3
#threshold=0.7
nms-iou-threshold=0.65
# IoU threshold for non-maximum suppression (NMS), used to remove overlapping boxes
pre-cluster-threshold=0.25
# Threshold applied before clustering, used to filter out low-confidence boxes
topk=300
# Maximum number of detection boxes kept per image
```
:::

---

## Case Study

[deepstream sdk-api](https://docs.nvidia.com/metropolis/deepstream/dev-guide/sdk-api/struct__NvDsUserMeta.html)

### Example: show confidence scores in the on-screen class labels

The complete code is at [YunghuiHsu/deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out](https://github.com/YunghuiHsu/deepstream_python_apps/blob/master/apps/deepstream-rtsp-in-rtsp-out).

As the diagrams show, the idea is to add a probe before the metadata enters the `nvdsosd` element (which handles on-screen display) and tell `nvdsosd` how to present the desired information; how the on-screen text is composed is defined in `def tiler_src_pad_buffer_probe()`.

![](https://hackmd.io/_uploads/HJqrgC6Hn.png =500x)
![](https://hackmd.io/_uploads/rJKax0pSh.png =400x)

#### Extracting and displaying the confidence score

##### Pulling the metadata out in `def tiler_src_pad_buffer_probe()`

:::spoiler
- `obj_meta.text_params.display_text`: the text displayed for this metadata object
- `obj_meta.confidence`: the predefined field of the object metadata that holds the confidence score
  - When the config file specifies a detection model, the metadata is parsed using the detector's object metadata format
- [`deepstream_test1_rtsp_in_rtsp_out_getconf.py`](https://github.com/YunghuiHsu/deepstream_python_apps/blob/master/apps/deepstream-rtsp-in-rtsp-out/deepstream_test1_rtsp_in_rtsp_out_getconf.py#L112)

```python=
def tiler_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()  # retrieve the Gst.Buffer carried by the probe info
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        try:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        except StopIteration:
            break
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            # Casting l_obj.data to pyds.NvDsObjectMeta
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            msg = f"{class_names[obj_meta.class_id]:s}"
            msg += f" {obj_meta.confidence:3.2f}"
            obj_meta.text_params.display_text = msg
            l_obj = l_obj.next              # advance to the next object
        l_frame = l_frame.next              # advance to the next frame
    return Gst.PadProbeReturn.OK
```

You still need to attach a probe for this information to be updated on screen.
:::

##### Adding a probe before the `nvdsosd` element to update the displayed information

:::spoiler
- [`deepstream_test1_rtsp_in_rtsp_out_getconf.py`](https://github.com/YunghuiHsu/deepstream_python_apps/blob/master/apps/deepstream-rtsp-in-rtsp-out/deepstream_test1_rtsp_in_rtsp_out_getconf.py#L411)

```python=
# Add probe to get informed of the meta data generated, we add probe to
# the sink pad of the osd element, since by that time, the buffer would have
# had got all the metadata.
# either nvosd.get_static_pad("sink") or pgie.get_static_pad("src") works
osdsinkpad = nvosd.get_static_pad("sink")
if not osdsinkpad:
    sys.stderr.write(" Unable to get sink pad of nvosd \n")
osdsinkpad.add_probe(Gst.PadProbeType.BUFFER, tiler_src_pad_buffer_probe, 0)
```

- Example run

```
python3 deepstream_test1_rtsp_in_rtsp_out_getconf.py \
    -i file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 \
       file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_qHD.mp4 \
    -config dstest1_pgie_yolov7_config.txt
```

- References for displaying confidence scores
  - [How to include YOLOv4 confidence score in the deepstream-app output?](https://forums.developer.nvidia.com/t/how-to-include-yolov4-confidence-score-in-the-deepstream-app-output/214613)
  - [https://forums.developer.nvidia.com/t/how-to-display-confidence-with-label-in-deepstream-like-person-0-81/199878/9](https://forums.developer.nvidia.com/t/how-to-display-confidence-with-label-in-deepstream-like-person-0-81/199878/9)
:::

### Passing custom message data

#### Using the NvDsEventMsgMeta object

- Can be used to pass messages to and from a server

<div style="text-align: center;">
<figure>
<img src="https://docs.nvidia.com/metropolis/deepstream/dev-guide/sdk-api/structNvDsEventMsgMeta__coll__graph.png" alt="structNvDsEventMsgMeta__coll__graph.png" width="800">
<figcaption>structNvDsEventMsgMeta</figcaption>
</figure>
</div>

:::spoiler The NvDsEventMsgMeta structure includes a pose field
- `/opt/nvidia/deepstream/deepstream-6.2/sources/includes/nvdsmeta_schema.h`

```c++=
/**
 * Holds event message meta data.
 *
 * You can attach various types of objects (vehicle, person, face, etc.)
 * to an event by setting a pointer to the object in @a extMsg.
 *
 * Similarly, you can attach a custom object to an event by setting a pointer to the object in @a extMsg.
 * A custom object must be handled by the metadata parsing module accordingly.
 */
typedef struct NvDsEventMsgMeta {
  /** Holds the event's type. */
  NvDsEventType type;
  /** Holds the object's type. */
  NvDsObjectType objType;
  /** Holds the object's bounding box. */
  NvDsRect bbox;
  /** Holds the object's geolocation. */
  NvDsGeoLocation location;
  /** Holds the object's coordinates. */
  NvDsCoordinate coordinate;
  /** Holds the object's signature. */
  NvDsObjectSignature objSignature;
  /** Holds the object's class ID. */
  gint objClassId;
  /** Holds the ID of the sensor that generated the event. */
  gint sensorId;
  /** Holds the ID of the analytics module that generated the event. */
  gint moduleId;
  /** Holds the ID of the place related to the object. */
  gint placeId;
  /** Holds the ID of the component (plugin) that generated this event. */
  gint componentId;
  /** Holds the video frame ID of this event. */
  gint frameId;
  /** Holds the confidence level of the inference. */
  gdouble confidence;
  /** Holds the object's tracking ID. */
  guint64 trackingId;
  /** Holds a pointer to the generated event's timestamp. */
  gchar *ts;
  /** Holds a pointer to the detected or inferred object's ID. */
  gchar *objectId;
  /** Holds a pointer to a string containing the sensor's identity. */
  gchar *sensorStr;
  /** Holds a pointer to a string containing other attributes associated with the object. */
  gchar *otherAttrs;
  /** Holds a pointer to the name of the video file. */
  gchar *videoPath;
  /** Holds a pointer to event message meta data. This can be used to hold data that
      can't be accommodated in the existing fields, or an associated object
      (representing a vehicle, person, face, etc.). */
  gpointer extMsg;
  /** Holds the size of the custom object at @a extMsg. */
  guint extMsgSize;
  /** Holds the object's pose information */
  NvDsJoints pose;
  /** Holds the object's embedding information */
  NvDsEmbedding embedding;
} NvDsEventMsgMeta;
```
:::

### Customizing, modifying, and retrieving metadata

On storing, modifying, and pulling out custom metadata.

#### Using a custom NvDsUserMeta object

<div style="text-align: center;">
<figure>
<img src="https://docs.nvidia.com/metropolis/deepstream/dev-guide/sdk-api/struct__NvDsUserMeta__coll__graph.png" alt="struct__NvDsUserMeta__coll__graph.png" width="500">
<figcaption>NvDsUserMeta</figcaption>
</figure>
</div>

#### NvDsInferTensorMeta

At the `Gst-nvinfer` stage, the raw (predicted) output tensors can be read directly from the TensorRT inference engine and converted into metadata, as in the sketch below.

<div style="text-align: center;">
<figure>
<img src="https://docs.nvidia.com/metropolis/deepstream/dev-guide/sdk-api/structNvDsInferTensorMeta__coll__graph.png" alt="structNvDsInferTensorMeta__coll__graph.png" width="300">
<figcaption>NvDsInferTensorMeta</figcaption>
</figure>
</div>
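The following is a hedged sketch of locating that tensor metadata from a Python pad probe, roughly following the pattern of the official deepstream-ssd-parser sample; it assumes `output-tensor-meta=1` is set in the nvinfer config and that `frame_meta` comes from a buffer probe like the one in the case study above.

```python
# Sketch: find the raw output tensors that Gst-nvinfer attaches as user metadata
# when output-tensor-meta=1 is enabled in its config file.
import pyds

def read_output_tensor_meta(frame_meta):
    l_user = frame_meta.frame_user_meta_list
    while l_user is not None:
        user_meta = pyds.NvDsUserMeta.cast(l_user.data)
        if user_meta.base_meta.meta_type == pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
            tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
            # One NvDsInferLayerInfo per output layer of the TensorRT engine
            for i in range(tensor_meta.num_output_layers):
                layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                print(layer.layerName)   # layer.buffer holds the raw tensor data
        l_user = l_user.next
```

For a full post-processing example built on this pattern, see the YOLO-Pose note linked below.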
### Pulling frames and metadata out of the DeepStream buffer for custom processing

* For Python binding examples see [apps/deepstream-imagedata-multistream](https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/tree/master/apps/deepstream-imagedata-multistream)

As the flow chart shows, the image is pulled from the FRAME BUFFER and the model's predicted tensors from the <INFERENCE> block.

<div style="text-align: center;">
<figure>
<img src="https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/master/apps/deepstream-imagedata-multistream-redaction/imagedata-app-block-diagram.png?raw=true" alt="imagedata-app-block-diagram.png" width="800">
<figcaption>imagedata diagram</figcaption>
</figure>
</div>

- For a more detailed walkthrough see [Use Deepstream python API to extract the model output tensor and customize model post-processing (e.g., YOLO-Pose)](https://hackmd.io/@YungHuiHsu/rk41ISKY2)

#### NvBufSurface

NvBufSurface is a structure in the NVIDIA DeepStream SDK that represents the image data of decoded and processed video frames.

<div style="text-align: center;">
<figure>
<img src="https://docs.nvidia.com/metropolis/deepstream/sdk-api/structNvBufSurface__coll__graph.png" alt="structNvBufSurface__coll__graph.png" width="500">
<figcaption>structNvBufSurface</figcaption>
</figure>
</div>

:::spoiler NvBufSurface provides:
* Metadata for the video frame, such as width, height, and pixel format.
* Interfaces for accessing and manipulating the frame data, such as reading and writing pixel values and setting or getting an ROI (Region of Interest).
* Support for hardware-accelerated processing of the frame data, such as GPU conversion and encoding.
:::

##### Code example (C++)

:::spoiler code example (C++)
```c++
#include <nvbufsurface.h>
#include <nvbufsurftransform.h>

// Illustrative sketch of reading and writing a mapped NvBufSurface; it assumes the
// surface has already been mapped for CPU access (e.g. with NvBufSurfaceMap).
void process_nvbufsurface(NvBufSurface *surface) {
    // Read the frame's metadata
    int width  = surface->surfaceList[0].width;
    int height = surface->surfaceList[0].height;
    int pitch  = surface->surfaceList[0].pitch;
    NvBufSurfaceColorFormat colorFormat = surface->surfaceList[0].colorFormat;

    // Set up an ROI
    NvBufSurfTransformRect src_rect;
    src_rect.top = 0;
    src_rect.left = 0;
    src_rect.width = width;
    src_rect.height = height;
    NvBufSurfTransformRect dst_rect = src_rect;

    // Read pixel values
    unsigned char *buffer = (unsigned char *)surface->surfaceList[0].mappedAddr.addr[0];
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            unsigned char pixel_value = buffer[row * pitch + col];
            // Process the pixel value
            // ...
        }
    }

    // Write pixel values
    for (int row = 0; row < height; row++) {
        for (int col = 0; col < width; col++) {
            buffer[row * pitch + col] = 255;   // set every pixel to white
        }
    }

    // Release the frame resources
    NvBufSurfaceUnMap(surface, -1, -1);
    NvBufSurfaceDestroy(surface);
}
```
:::

---

## Basic principles of DeepStream performance tuning

:::spoiler
### DeepStream best practices

A few basic settings are listed below; for more, see the performance hints in the official [DeepStream best practices](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_ref_app_deepstream.html?highlight=batch%20processing#deepstream-best-practices). A sketch of setting the corresponding nvstreammux properties from a Python app follows after this section.

- Set the batch size equal to the number of input sources (batch == sources)
- Set the streammux height and width to the input resolution
- When streaming from RTSP or USB, set live-source=1 in the [streammux] group of the config file to ensure correct timestamps
- Tiling and visual output consume GPU resources. When you don't need to render output on screen, the following three options can be disabled to maximize throughput:
  - Turn off the OSD (on-screen display)
    - Set enable=0 in the [osd] group of the config file
  - The tiler creates an NxM grid for the displayed output streams
    - Set enable=0 in the [tiled-display] group
  - Turn off the output sink
    - Choose fakesink in the [sink] group, i.e. type=1

### Jetson optimization

Make sure the Jetson clocks run at their maximum:

```
$ sudo nvpmodel -m <mode> --for MAX perf and power mode is 0
$ sudo jetson_clocks
```
:::
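As referenced above, a minimal sketch of applying the streammux-related hints from the Python app: the property names are those of Gst-nvstreammux, while `number_sources` and the input resolution are assumed placeholders for your own setup.

```python
# Sketch: tune nvstreammux according to the best-practice hints above.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")

number_sources = 4                       # assumed: one entry per camera/stream
INPUT_WIDTH, INPUT_HEIGHT = 1920, 1080   # assumed input resolution

streammux.set_property("batch-size", number_sources)   # batch == sources
streammux.set_property("width", INPUT_WIDTH)            # match the input resolution
streammux.set_property("height", INPUT_HEIGHT)
streammux.set_property("live-source", 1)                # RTSP/USB live input -> correct timestamps
streammux.set_property("batched-push-timeout", 40000)   # µs to wait before pushing a partial batch
```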
---

## References

### Official NVIDIA DeepStream documentation

- [DeepStream SDK | NVIDIA Developer](https://developer.nvidia.com/deepstream-sdk)
  - Official documentation and downloads
  - Docker: [DeepStream-l4t](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/deepstream-l4t)
- [Sample application documentation](https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Python_Sample_Apps.html)
  - The official docs provide .py files and configuration files for many scenarios, including chaining different models, multistream input, integration with Triton, and direct inference with local TensorRT.

![](https://hackmd.io/_uploads/SkGTJ0FVh.png =600x)

- Config layout for deploying YOLO models: [NVIDIA-AI-IOT/yolo_deepstream](https://github.com/NVIDIA-AI-IOT/yolo_deepstream/tree/main/deepstream_yolo)
- [Building a Real-time Redaction App Using NVIDIA DeepStream, Part 2: Deployment | NVIDIA Technical Blog](https://developer.nvidia.com/blog/real-time-redaction-app-nvidia-deepstream-part-2-deployment/)

#### Official NVIDIA sample programs

- [NVIDIA-AI-IOT/deepstream_python_apps: DeepStream SDK Python bindings and sample applications (github.com)](https://github.com/NVIDIA-AI-IOT/deepstream_python_apps)
  - Samples with a Python interface
- [NVIDIA Jetson Nano 2GB series (35): walkthrough of the Python version of test1 (in Chinese)](https://zhuanlan.zhihu.com/p/415054673)

#### Other good DeepStream introductions

- [2022. Galliot. NVIDIA DeepStream Python Bindings; Customize your Applications](https://galliot.us/blog/deepstream-python-bindings-customization/)

![](https://hackmd.io/_uploads/By_1wFhSh.png =600x)
Figure 2: A DeepStream Pipeline with two processing elements

- [2021. Kavika Roy. Nvidia DeepStream – A Simplistic Guide](https://www.datatobiz.com/blog/nvidia-deepstream-guide/)

#### NVIDIA tutorial videos

##### [2023/01. NVIDIA DeepStream Technical Deep Dive: DeepStream Inference Options with Triton & TensorRT](https://www.youtube.com/watch?v=eM4nKWy6anA)

:::spoiler
- Outline
  1. Using DeepStream's inference options to handle TensorFlow, PyTorch, and ONNX models.
  2. Working with TensorRT and DeepStream to optimize models.
  3. Using the Triton server to serve single or multiple DeepStream pipelines.
  4. Using DeepStream's pre/post-processing plugins.

###### deepstream-app setup with the DS-TensorRT (gst-nvinfer) plugin
- Configuration files for the application and the plugins
- DS-TRT converts ONNX/TAO/Caffe models online into a TensorRT engine file on first run

![](https://hackmd.io/_uploads/H1Fd4twE2.png =600x)

###### DeepStream-app with DS-Triton (gst-nvinferserver) Server CAPI
* Inference approach 1 (local):
  * ==Triton Server CAPI: loads the model repo directly inside a single process==

![](https://hackmd.io/_uploads/ryBH_tD42.png =400x)

###### DeepStream-app with DS-Triton (gst-nvinferserver) gRPC Inference
* Inference approach 2 (remote):
  * ==Triton gRPC Remote: sends INPUT over gRPC to a remote tritonserver app and waits for the response==

![](https://hackmd.io/_uploads/BJ-mdFPNn.png =400x)

|      | Inference Approach 1:<br>Triton Server CAPI | Inference Approach 2:<br>Triton gRPC Remote |
|:---- |:-------------------------------------------:|:------------------------------------------- |
| Pros | - Loads and runs the model locally, so performance is better<br>- No data needs to travel over the network | - The model can be deployed on a remote (cloud) server<br>- All Triton Server features are available |
| Cons | - Limited by the memory and compute of a single process<br>- Cannot run the model on a remote server | - Data must travel over the network, which can hurt inference performance<br>- The remote server must support the gRPC protocol |

- In the NVIDIA Triton Inference Server, "CAPI" and "gRPC" refer to two different inference modes.
  - CAPI
    - Stands for "C API". This is a local inference mode in which the application loads and runs models directly through Triton's C++ inference library. It is typically used to embed the inference service into the application for best performance and low latency. The model repository is loaded into the application process, so network transport and communication overhead are avoided.
  - gRPC
    - Stands for "general-purpose Remote Procedure Call". This is a remote inference mode in which the client connects over the network to a remote Triton inference server, sends input data via the gRPC protocol, and waits for the inference result. The application does not load the model locally; the model repository is loaded and managed by the Triton server. This mode is typically used for cross-network inference between client and server, for example an inference service running in the cloud.

###### DeepStream batching before inference
* `nvstreammux` batches all input streams together before inference
* The batching strategy of `nvstreammux` applies to both inference plugins
  * the DS-Triton (gst-nvinferserver) plugin
  * the DS-TensorRT (gst-nvinfer) plugin
* The batch size is set through the configuration file

![](https://hackmd.io/_uploads/Hy23jYvE2.png =600x)
![](https://hackmd.io/_uploads/ByZghKPVh.png =800x)

###### DeepStream sample applications that parse inference data

![](https://hackmd.io/_uploads/HybTntwN2.png =900x)
![](https://hackmd.io/_uploads/HyBBCYD42.png =500x)

###### DeepStream Triton inference data flow

![](https://hackmd.io/_uploads/r1gdEycw4n.png)
:::

##### [2022/06. NVIDIA DeepStream Technical Deep Dive: Multi-Object Tracker](https://www.youtube.com/watch?v=4nV-GtqggEw)

##### [2021/01. Implementing Real-time Vision AI Apps Using NVIDIA DeepStream SDK](https://www.youtube.com/watch?v=hSegX0P170s)

### C++ Python bindings

#### [python-bindings-overview](https://realpython.com/python-bindings-overview/)

- Chinese translation: [Python Bindings: calling C/C++ from Python](https://zhuanlan.zhihu.com/p/143356193)
- [Giving Python algorithms wings: pybind11 in practice (in Chinese)](https://zhuanlan.zhihu.com/p/444805518)
- [Walkthrough of the official DeepStream Python samples (in Chinese)](https://blog.csdn.net/zyctimes/article/details/122601921)

### YOLO + DeepStream

The repositories below provide C++ sample code; note that combining them with the Python binding samples requires a fair amount of manual modification.

#### GitHub

- [NVIDIA-AI-IOT/yolo_deepstream](https://github.com/NVIDIA-AI-IOT/yolo_deepstream)
  - Official samples
  - C++ sample code
- [marcoslucianops/DeepStream-Yolo](https://github.com/marcoslucianops/DeepStream-Yolo)
  - Unofficial samples with support for the YOLO family of architectures
  - C++ sample code
  - More complete documentation and support
- [visualcortex-official/yolov7-deepstream](https://github.com/visualcortex-official/yolov7-deepstream)
  - Provides integration with the `efficientNMS` plugin

#### Forum

- [Tutorial: How to run YOLOv7 on Deepstream](https://forums.developer.nvidia.com/t/tutorial-how-to-run-yolov7-on-deepstream/229045)
  - Discusses how to modify the multimedia streaming sample deepstream_python_apps/apps/deepstream-rtsp-in-rtsp-out at master · NVIDIA-AI-IOT/deepstream_python_apps
- [Deepstream python app yolov7 integration issue](https://forums.developer.nvidia.com/t/deepstream-python-app-yolov7-integration-issue/246034)
  - `python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream-6.2/samples/streams/sample_720p.h264`

:::spoiler
> - dstest1_pgie_config.txt → from config_infer_primary_yoloV7.txt under yolo_deepstream/deepstream_yolo at main · NVIDIA-AI-IOT/yolo_deepstream · GitHub
> - nvdsinfer_custom_impl_Yolo → from nvdsinfer_custom_impl_Yolo under yolo_deepstream/deepstream_yolo at main · NVIDIA-AI-IOT/yolo_deepstream · GitHub
> - labels.txt → from labels.txt under yolo_deepstream/deepstream_yolo at main · NVIDIA-AI-IOT/yolo_deepstream · GitHub
> - yolov7.onnx → from yolov7.onnx under yolo_deepstream/yolov7_qat at main · NVIDIA-AI-IOT/yolo_deepstream · GitHub
:::

#### GStreamer

- [GStreamer introduction and notes](https://hackmd.io/@YungHuiHsu/ryhRTZpt3)

DeepStream is built on top of GStreamer, so it helps to have some grasp of how GStreamer works before modifying anything: [gstreamer/documentation](https://gstreamer.freedesktop.org/documentation/)

#### [Deploy YOLOv8 on NVIDIA Jetson using TensorRT and DeepStream SDK](https://wiki.seeedstudio.com/YOLOv8-DeepStream-TRT-Jetson/)
