# How to use Deepstream

## What is Deepstream

DeepStream is an end-to-end streaming-analytics SDK that NVIDIA built on top of GStreamer, integrating many of NVIDIA's GPU-accelerated libraries. It lets you assemble your own streaming-analytics pipeline with little effort, and because it is GStreamer-based, any stage of the pipeline can be swapped out independently, which keeps the components loosely coupled. Its TensorRT and DLA integration makes it easy to accelerate your neural-network inference, and NVIDIA has also bundled many handy features such as object tracking. In addition, the SDK gives you easy access to the GPU's hardware units to accelerate stream encoding and decoding, and it manages how data moves between the CPU and the GPU, greatly reducing the performance lost to copies.

## Why to use Deepstream

1. Detailed and varied samples to reference and learn from
2. Works out of the box
3. A complete, loosely coupled streaming-analytics pipeline
4. Deep GPU integration that makes it easy to accelerate stream encoding/decoding and neural-network inference

## Deepstream Example

### Deepstream-app

#### Environment setup: DeepStream 5.1

```
xhost +
nvidia-docker run -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.1 nvcr.io/nvidia/deepstream:5.1-21.02-devel
```

#### Environment setup: DeepStream 6.0

```
xhost +
nvidia-docker run -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.0 nvcr.io/nvidia/deepstream:6.0-devel
```

When starting out with DeepStream you will mostly be running the samples it ships with, so let's first look at the DeepStream directory tree. We only need to care about two directories, `samples` and `sources`; each is introduced below.

> [name=Eric Huang] The content below is mainly based on deepstream-5.1

```
cd /opt/nvidia/deepstream/deepstream-5.1/
ls -al
.
|-- bin
|-- lib
|   |-- gst-plugins
|   `-- libv4l
|-- samples
|   |-- configs
|   |-- models
|   |-- streams
|   `-- trtis_model_repo
`-- sources
    |-- SONYCAudioClassifier
    |-- apps
    |-- gst-plugins
    |-- includes
    |-- libs
    |-- objectDetector_FasterRCNN
    |-- objectDetector_SSD
    |-- objectDetector_Yolo
    `-- tools

19 directories
```

Most of the ready-to-run samples live in `configs/deepstream-app`, and the `streams` directory holds the test video files. The samples here can mostly be run directly.

```
cd samples
tree -d -L 3
.
|-- configs
|   |-- deepstream-app
|   |   |-- config_infer_primary.txt
|   |   |-- config_infer_primary_endv.txt
|   |   |-- config_infer_primary_nano.txt
|   |   |-- config_infer_secondary_carcolor.txt
|   |   |-- config_infer_secondary_carmake.txt
|   |   |-- config_infer_secondary_vehicletypes.txt
|   |   |-- config_mux_source30.txt
|   |   |-- config_mux_source4.txt
|   |   |-- iou_config.txt
|   |   |-- source1_usb_dec_infer_resnet_int8.txt
|   |   |-- source30_1080p_dec_infer-resnet_tiled_display_int8.txt
|   |   |-- source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
|   |   |-- source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8_gpu1.txt
|   |   `-- tracker_config.yml
|   |-- deepstream-app-trtis
|   `-- tlt_pretrained_models
|-- models
|   |-- Primary_Detector
|   |-- Primary_Detector_Nano
|   |-- SONYC_Audio_Classifier
|   |-- Secondary_CarColor
|   |-- Secondary_CarMake
|   |-- Secondary_VehicleTypes
|   `-- Segmentation
|       |-- industrial
|       `-- semantic
|-- streams
`-- trtis_model_repo
    |-- Primary_Detector
    |-- Secondary_CarColor
    |-- Secondary_CarMake
    |-- Secondary_VehicleTypes
    |-- Segmentation_Industrial
    |-- Segmentation_Semantic
    |-- densenet_onnx
    |-- inception_graphdef
    |-- mobilenet_v1
    |-- ssd_inception_v2_coco_2018_01_28
    `-- ssd_mobilenet_v1_coco_2018_01_28

27 directories
```

The `sources` directory contains many more samples, but most of them need some extra preparation before they will run; those are covered in more detail later.

```
cd sources
tree -d -L 3
.
|-- SONYCAudioClassifier
|   `-- gstnvinferaudio_custom_parser
|-- apps
|   |-- apps-common
|   |   |-- includes
|   |   `-- src
|   `-- sample_apps
|       |-- deepstream-app
|       |-- deepstream-appsrc-test
|       |-- deepstream-audio
|       |-- deepstream-dewarper-test
|       |-- deepstream-gst-metadata-test
|       |-- deepstream-image-decode-test
|       |-- deepstream-image-meta-test
|       |-- deepstream-infer-tensor-meta-test
|       |-- deepstream-mrcnn-app
|       |-- deepstream-nvdsanalytics-test
|       |-- deepstream-nvof-test
|       |-- deepstream-opencv-test
|       |-- deepstream-perf-demo
|       |-- deepstream-segmentation-test
|       |-- deepstream-test1
|       |-- deepstream-test2
|       |-- deepstream-test3
|       |-- deepstream-test4
|       |-- deepstream-test5
|       |-- deepstream-testsr
|       |-- deepstream-transfer-learning-app
|       `-- deepstream-user-metadata-test
|-- gst-plugins
|   |-- gst-dsexample
|   |   `-- dsexample_lib
|   |-- gst-nvdsaudiotemplate
|   |   |-- customlib_impl
|   |   `-- includes
|   |-- gst-nvdsosd
|   |-- gst-nvdsvideotemplate
|   |   |-- customlib_impl
|   |   `-- includes
|   |-- gst-nvinfer
|   |-- gst-nvmsgbroker
|   `-- gst-nvmsgconv
|-- includes
|   `-- nvdsinferserver
|-- libs
|   |-- amqp_protocol_adaptor
|   |-- azure_protocol_adaptor
|   |   |-- device_client
|   |   `-- module_client
|   |-- kafka_protocol_adaptor
|   |-- nvdsinfer
|   |-- nvdsinfer_customparser
|   |-- nvmsgbroker
|   |-- nvmsgconv
|   |-- nvmsgconv_audio
|   `-- redis_protocol_adaptor
|-- objectDetector_FasterRCNN
|   `-- nvdsinfer_custom_impl_fasterRCNN
|-- objectDetector_SSD
|   `-- nvdsinfer_custom_impl_ssd
|-- objectDetector_Yolo
|   `-- nvdsinfer_custom_impl_Yolo
`-- tools
    `-- nvds_logger

64 directories
```

#### Running the samples

```
cd /opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app/
tree
.
|-- config_infer_primary.txt
|-- config_infer_primary_endv.txt
|-- config_infer_primary_nano.txt
|-- config_infer_secondary_carcolor.txt
|-- config_infer_secondary_carmake.txt
|-- config_infer_secondary_vehicletypes.txt
|-- config_mux_source30.txt
|-- config_mux_source4.txt
|-- iou_config.txt
|-- source1_usb_dec_infer_resnet_int8.txt
|-- source30_1080p_dec_infer-resnet_tiled_display_int8.txt
|-- source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
|-- source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8_gpu1.txt
`-- tracker_config.yml
```

There are four sample configs here. Below is a brief description of what each one does; a detailed look at each config is left for a separate article.

| Sample | What it does |
| -------- | -------- |
| source1_usb_dec_infer_resnet_int8.txt | Takes a USB camera as input and runs detection with a ResNet model converted to an INT8 TensorRT engine |
| source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt | Reads a single video, duplicates it into 4 streams, and runs detection and tracking with a ResNet model converted to an INT8 TensorRT engine |
| source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8_gpu1.txt | Same as above, but pinned to run on GPU 1 |
| source30_1080p_dec_infer-resnet_tiled_display_int8.txt | Reads a single video, duplicates it into 30 streams, and runs detection with a ResNet model converted to an INT8 TensorRT engine |

The commands to run each sample are:

1. source1_usb_dec_infer_resnet_int8

```
nvidia-docker run -it --rm --device /dev/video0 -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.1 nvcr.io/nvidia/deepstream:5.1-21.02-devel
cd /opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app
deepstream-app -c source1_usb_dec_infer_resnet_int8.txt
```

2. source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt

```
nvidia-docker run -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.1 nvcr.io/nvidia/deepstream:5.1-21.02-devel
cd /opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app
deepstream-app -c source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt
```

3. 
source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8_gpu1.txt

This sample is essentially the same as the previous one; the main difference is that every `gpu-id` in the config file is set to GPU 1 (the diff below also shows a few sink tweaks such as `codec` and `bitrate`). My machine only has a single GPU, so I could not test this one.

```
diff source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8_gpu1.txt
34c34
< gpu-id=0
---
> gpu-id=1
48,49c48
< #drop-frame-interval=2
< gpu-id=0
---
> gpu-id=1
66d64
< #Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
71c69
< codec=1
---
> codec=3
75d72
< #iframeinterval=10
81a79
> gpu-id=1
92,93c90
< #iframeinterval=10
< bitrate=400000
---
> bitrate=4000000
99a97
> gpu-id=1
103c101
< gpu-id=0
---
> gpu-id=1
117c115
< gpu-id=0
---
> gpu-id=1
140,141c138,139
< gpu-id=0
< model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b4_gpu0_int8.engine
---
> gpu-id=1
> model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b4_gpu1_int8.engine
143d140
< #Required by the app for OSD, not a plugin property
164c161
< gpu-id=0
---
> gpu-id=1
172,173c169,170
< model-engine-file=../../models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine
< gpu-id=0
---
> model-engine-file=../../models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu1_int8.engine
> gpu-id=1
182c179
< model-engine-file=../../models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine
---
> model-engine-file=../../models/Secondary_CarColor/resnet18.caffemodel_b16_gpu1_int8.engine
184c181
< gpu-id=0
---
> gpu-id=1
192c189
< model-engine-file=../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine
---
> model-engine-file=../../models/Secondary_CarMake/resnet18.caffemodel_b16_gpu1_int8.engine
194c191
< gpu-id=0
---
> gpu-id=1
```

4. 
source30_1080p_dec_infer-resnet_tiled_display_int8.txt

```
nvidia-docker run -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.1 nvcr.io/nvidia/deepstream:5.1-21.02-devel
cd /opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app
deepstream-app -c source30_1080p_dec_infer-resnet_tiled_display_int8.txt
```

Next, let's go through the samples under `sources`:

```
cd /opt/nvidia/deepstream/deepstream-5.1/sources
tree -d -L 2
.
|-- SONYCAudioClassifier
|   `-- gstnvinferaudio_custom_parser
|-- apps
|   |-- apps-common
|   `-- sample_apps
|-- gst-plugins
|   |-- gst-dsexample
|   |-- gst-nvdsaudiotemplate
|   |-- gst-nvdsosd
|   |-- gst-nvdsvideotemplate
|   |-- gst-nvinfer
|   |-- gst-nvmsgbroker
|   `-- gst-nvmsgconv
|-- includes
|   `-- nvdsinferserver
|-- libs
|   |-- amqp_protocol_adaptor
|   |-- azure_protocol_adaptor
|   |-- kafka_protocol_adaptor
|   |-- nvdsinfer
|   |-- nvdsinfer_customparser
|   |-- nvmsgbroker
|   |-- nvmsgconv
|   |-- nvmsgconv_audio
|   `-- redis_protocol_adaptor
|-- objectDetector_FasterRCNN
|   `-- nvdsinfer_custom_impl_fasterRCNN
|-- objectDetector_SSD
|   `-- nvdsinfer_custom_impl_ssd
|-- objectDetector_Yolo
|   `-- nvdsinfer_custom_impl_Yolo
`-- tools
    `-- nvds_logger

33 directories
```

Here we mainly care about a few of these directories, introduced one by one below:

| Sample | What it does |
| ----------------------------- | -------- |
| objectDetector_FasterRCNN | Reads a single video as input and runs detection and tracking with a Faster R-CNN model converted to a TensorRT engine |
| objectDetector_SSD | Reads a single video as input and runs detection and tracking with an SSD model converted to a TensorRT engine |
| objectDetector_Yolo | Reads a single video as input and runs detection and tracking with a YOLO model converted to a TensorRT engine |

1. objectDetector_FasterRCNN

```
nvidia-docker run -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.1 nvcr.io/nvidia/deepstream:5.1-21.02-devel
cd /opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_FasterRCNN
apt install wget
wget --no-check-certificate https://dl.dropboxusercontent.com/s/o6ii098bu51d139/faster_rcnn_models.tgz?dl=0 -O faster-rcnn.tgz
tar zxvf faster-rcnn.tgz -C . \
--strip-components=1 --exclude=ZF_*
```

Back on the host, go to https://developer.nvidia.com/nvidia-tensorrt-7x-download and download the item "TensorRT 7.2.3 for Ubuntu 18.04 and CUDA 11.1 & 11.2 TAR package". After extracting it, find `data/faster-rcnn/faster_rcnn_test_iplugin.prototxt` and copy it into the container as follows.

Look up the container's ID:

```
docker ps -a
```

Then copy the file into the target directory inside the container, substituting your own container ID:

```
docker cp ./faster_rcnn_test_iplugin.prototxt 47fc94c20315:/opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_FasterRCNN
```

Next, confirm the CUDA version:

```
deepstream-app --version-all
```

Then build the custom TensorRT layers needed to convert Faster R-CNN to a TensorRT engine:

```
export CUDA_VER=11.1
make -C nvdsinfer_custom_impl_fasterRCNN/
```

Finally, run the sample:

```
deepstream-app -c deepstream_app_config_fasterRCNN.txt
```

2. objectDetector_SSD

```
nvidia-docker run -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.1 nvcr.io/nvidia/deepstream:5.1-21.02-devel
cd /opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_SSD
apt update && apt install python3 python3-dev python3-pip wget -y
pip3 install tensorflow-gpu
wget http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2017_11_17.tar.gz
tar xvfz ssd_inception_v2_coco_2017_11_17.tar.gz
```

Back on the host, download the same "TensorRT 7.2.3 for Ubuntu 18.04 and CUDA 11.1 & 11.2 TAR package" from https://developer.nvidia.com/nvidia-tensorrt-7x-download, extract it, locate the `TensorRT-7.2.3.4` directory, and copy it into the container as follows.

Look up the container's ID:

```
docker ps -a
```

Then copy the directory into the container, substituting your own container ID:

```
docker cp ./TensorRT-7.2.3.4 47fc94c20315:/usr/src/
```

Back inside the container, convert the SSD model for TensorRT:

```
cd ssd_inception_v2_coco_2017_11_17
python3 /usr/lib/python2.7/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb -O NMS -p /usr/src/TensorRT-7.2.3.4/samples/sampleUffSSD/config.py -o sample_ssd_relu6.uff
cp sample_ssd_relu6.uff ..
cd ..
cp /usr/src/TensorRT-7.2.3.4/data/ssd/ssd_coco_labels.txt ./
```

Next, confirm the CUDA version:

```
deepstream-app --version-all
```

Then build the custom TensorRT layers needed to convert SSD to a TensorRT engine:

```
export CUDA_VER=11.1
make -C nvdsinfer_custom_impl_ssd/
```

Finally, run the sample:

```
deepstream-app -c deepstream_app_config_ssd.txt
```

3. objectDetector_Yolo

```
nvidia-docker run -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.1 nvcr.io/nvidia/deepstream:5.1-21.02-devel
cd /opt/nvidia/deepstream/deepstream-5.1/sources/objectDetector_Yolo
apt install wget -y
./prebuild.sh
export CUDA_VER=11.1
make -C nvdsinfer_custom_impl_Yolo
deepstream-app -c deepstream_app_config_yoloV2.txt
deepstream-app -c deepstream_app_config_yoloV2_tiny.txt
deepstream-app -c deepstream_app_config_yoloV3.txt
deepstream-app -c deepstream_app_config_yoloV3_tiny.txt
```

Or test it with a plain GStreamer command:

```
gst-launch-1.0 filesrc location=../../samples/streams/sample_1080p_h264.mp4 ! decodebin ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=config_infer_primary_yoloV3_tiny.txt ! nvvideoconvert ! nvdsosd ! \
nveglglessink
```

### Run Deepstream Python Example

#### Environment setup

##### Deepstream 5.1

```
xhost +
nvidia-docker run -it --rm --device /dev/video0 -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-5.1 nvcr.io/nvidia/deepstream:5.1-21.02-devel
git config --global http.sslVerify false
apt update && apt install python3-gi python3-dev python3-gst-1.0 python-gi-dev git python-dev python3 python3-pip cmake g++ build-essential libglib2.0-dev libglib2.0-dev-bin python-gi-dev libtool m4 autoconf automake -y
export GST_LIBS="-lgstreamer-1.0 -lgobject-2.0 -lglib-2.0"
export GST_CFLAGS="-pthread -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include"
git clone https://github.com/GStreamer/gst-python.git
cd gst-python
git checkout 1a8f48a
./autogen.sh PYTHON=python3
libtoolize --automake --copy --debug --force
./configure PYTHON=python3
make
make install
cd ../sources/
git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps.git
cd deepstream_python_apps/
git checkout v1.0.3
```

##### Deepstream 6.0

```
xhost +
nvidia-docker run -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /opt/nvidia/deepstream/deepstream-6.0 nvcr.io/nvidia/deepstream:6.0-devel
git config --global http.sslVerify false
apt update && apt install python3-gi python3-dev python3-gst-1.0 python-gi-dev git python-dev python3 python3-pip cmake g++ build-essential libglib2.0-dev libglib2.0-dev-bin python-gi-dev libtool m4 autoconf automake -y
export GST_LIBS="-lgstreamer-1.0 -lgobject-2.0 -lglib-2.0"
export GST_CFLAGS="-pthread -I/usr/include/gstreamer-1.0 -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include"
cd ../sources/
git clone https://github.com/NVIDIA-AI-IOT/deepstream_python_apps.git
cd deepstream_python_apps/
git submodule update --init
cd 3rdparty/gst-python/
./autogen.sh
make && make install
cd ../../bindings
mkdir build && cd build
cmake .. \
-DPYTHON_MAJOR_VERSION=3 -DPYTHON_MINOR_VERSION=8
make
pip3 install pyds-1.1.0-py3-none-linux_x86_64.whl
```

#### Running the samples

```
cd /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/

cd apps/deepstream-test1
python3 deepstream_test_1.py ../../../../samples/streams/sample_720p.h264

cd ../deepstream-test2
python3 deepstream_test_2.py ../../../../samples/streams/sample_720p.h264

cd ../deepstream-test1-usbcam
python3 deepstream_test_1_usb.py /dev/video0

cd ../deepstream-test3
python3 deepstream_test_3.py file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_720p.h264 file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4

cd ../deepstream-test4
python3 deepstream_test_4.py -i ../../../../samples/streams/sample_720p.h264 -p ../../../../lib/libnvds_kafka_proto.so --conn-str="192.168.1.176;9092;ObjDetect"
```

###### tags: `Deepstream`
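As the gpu1 diff earlier shows, a per-GPU variant of a deepstream-app config differs mostly in its `gpu-id` keys. Since these configs are plain INI-style files, such a variant can be generated programmatically. Below is a minimal sketch using Python's `configparser`, run against a simplified, hypothetical config fragment (the section and key names mirror the samples, but this snippet is illustrative, not part of the SDK):

```python
# Sketch: rewrite every gpu-id key in a deepstream-app-style config,
# mirroring how the *_gpu1.txt sample differs from its base config.
# The config text below is a simplified, hypothetical fragment.
import configparser
import io

def retarget_gpu(config_text: str, gpu_id: int) -> str:
    """Return a copy of an INI-style config with all gpu-id keys rewritten."""
    cp = configparser.ConfigParser(strict=False)
    cp.read_string(config_text)
    for section in cp.sections():
        if cp.has_option(section, "gpu-id"):
            cp.set(section, "gpu-id", str(gpu_id))
    buf = io.StringIO()
    cp.write(buf, space_around_delimiters=False)  # keep the key=value form
    return buf.getvalue()

base = """\
[tiled-display]
enable=1
gpu-id=0

[source0]
enable=1
gpu-id=0
"""

print(retarget_gpu(base, 1))
```

Note that the real gpu1 variant also renames the `model-engine-file` entries (`...gpu0_int8.engine` becomes `...gpu1_int8.engine`), because TensorRT engines are built per GPU; this sketch only covers the `gpu-id` keys.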