# Deploy YOLOv7 on Nvidia Jetson

###### tags: `Edge AI` `Deployment` `Nvidia` `Jetson` `YOLOv7` `AGX Xavier`

![](https://i.imgur.com/S5QFcnE.png =800x)

## Notes on NVIDIA Jetson platform deployment

### Basic environment setup

- [Jetson AGX Xavier system setup 1: connecting and installing from a Windows 10 environment](https://hackmd.io/@YungHuiHsu/HJ2lcU4Rj)
- [Jetson AGX Xavier system setup 2: installing Docker or building from source](https://hackmd.io/k-lnDTxVQDWo_V13WEnfOg)
- [NVIDIA Container Toolkit installation notes](https://hackmd.io/wADvyemZRDOeEduJXA9X7g)
- [jtop: checking system performance on Jetson edge devices](https://hackmd.io/VXXV3T5GRIKi6ap8SkR-tg)
- [Jetson Network Setup](https://hackmd.io/WiqAB7pLSpm2863N2ISGXQ)
- [OpenCV turns on CUDA acceleration on the Nvidia Jetson platform](https://hackmd.io/6IloyiWMQ_qbIpIE_c_1GA)

### Model deployment and acceleration

- [[Deployment] Introductory notes on AI model deployment](https://hackmd.io/G80HMJRmSwaaLD8W1PHUPg)
- [[Object Detection_YOLO] YOLOv7 paper notes](https://hackmd.io/xhLeIsoSToW0jL61QRWDcQ)
- [Deploy YOLOv7 on Nvidia Jetson](https://hackmd.io/kZftj6AgQmWJsbXsswIwEQ)
- [Convert PyTorch model to TensorRT for 3-8x speedup](https://hackmd.io/_oaJhYNqTvyL_h01X1Fdmw?both)
- [Accelerate multi-streaming cameras with DeepStream and deploy custom (YOLO) models](https://hackmd.io/@YungHuiHsu/rJKx-tv4h)
- [Use the DeepStream Python API to extract the model output tensor and customize model post-processing (e.g., YOLO-Pose)](https://hackmd.io/@YungHuiHsu/rk41ISKY2)
- [Model Quantization Notes](https://hackmd.io/riYLcrp1RuKHpVI22oEAXA)

---

## Choosing a build environment

### TensorRT

For the deployment and build environments recommended by the official documentation, see the [Quick Start Guide :: NVIDIA Deep Learning TensorRT Documentation](https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html).

According to the official [NVIDIA L4T TensorRT](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-tensorrt) page, NVIDIA TensorRT is a C++ library for high-performance inference on NVIDIA GPUs. TensorRT optimizes a trained network and produces a runtime engine. It offers both C++ and Python APIs for defining or importing network models, and applies a range of optimization techniques and efficient kernels to find the fastest implementation of the model.

- The Python API runs slightly slower than the C++ API, but it is far easier to tweak and test with, and it still outperforms TF-TRT (skip TensorFlow).

![](https://i.imgur.com/CZnN6i4.png =400x) ![](https://i.imgur.com/RAcyzjZ.png =400x)

:::warning
- Option 1: load the PyTorch model directly and run it in a Python environment
    - PyTorch version recommended by the YOLOv7 repo: `torch>=1.7.0,!=1.12.0`
    - Installing PyTorch and torchvision
        - You cannot use the versions pinned in the YOLOv7 `requirements.txt`; you need the builds NVIDIA provides for the ARM aarch64 architecture (`linux_aarch64.whl`), listed in [PyTorch for Jetson](https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048)
- Option 2: optimize and accelerate the model with TensorRT and run it in a C++ environment
    - Convert the model to .trt format and compile it in a C++ environment
    - As of 2023.03, the YOLOv7 repo already ships code that makes converting to a .trt file easy (see `export.py`)
:::

#### Using the Python API

For worked examples, see [Quick Start Guide / 6.1.2. Exporting to ONNX from PyTorch](https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#export-from-pytorch) and [Using PyTorch with TensorRT through ONNX](https://github.com/NVIDIA/TensorRT/blob/master/quickstart/IntroNotebooks/4.%20Using%20PyTorch%20through%20ONNX.ipynb). You can drive TensorRT entirely from Python, converting the model into a TensorRT engine in .trt format, which balances ease of use (Python API) with efficiency (C++ runtime) and lets the model's pre- and post-processing stay entirely in Python. A minimal sketch of this flow follows below.

P.S. The official YOLOv7 example notebook [YOLOv7TRT.ipynb](https://colab.research.google.com/gist/AlexeyAB/fcb47ae544cf284eb24d8ad8e880d45c/yolov7trtlinaom.ipynb#scrollTo=nIjoHQE2_V2g) is a helpful companion; it wraps everything in a `class BaseEngine()` that you can reuse wholesale.

![](https://i.imgur.com/3SfeNHA.png =300x)
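To make the ONNX-to-engine step concrete, here is a minimal sketch of building a .trt engine with the TensorRT Python API. It assumes the `tensorrt` package that ships with JetPack, and the file names `yolov7-tiny.onnx` / `yolov7-tiny.trt` are illustrative (the ONNX file would be produced beforehand by the repo's `export.py`); the notebooks linked above do the same thing with more scaffolding:

```python
import tensorrt as trt

ONNX_PATH = "yolov7-tiny.onnx"    # illustrative; produced by the repo's export.py
ENGINE_PATH = "yolov7-tiny.trt"

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# The YOLOv7 ONNX export uses an explicit batch dimension
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open(ONNX_PATH, "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse the ONNX file")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # FP16 is where most of the Jetson speedup comes from
# TensorRT 8.4+ sets the workspace size through a memory-pool limit (1 GiB here)
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)

# Returns a serialized engine (IHostMemory) that can be written straight to disk
serialized = builder.build_serialized_network(network, config)
with open(ENGINE_PATH, "wb") as f:
    f.write(serialized)
```

The same engine can also be built from the command line with `trtexec`, which ships with TensorRT: `trtexec --onnx=yolov7-tiny.onnx --saveEngine=yolov7-tiny.trt --fp16`.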
## Build docker environment (Setup Inference Server)

To be added...

---

# Option 1: Install PyTorch for arm64 following the Hello AI World guide and run directly in Python

Most of the approaches found online simply `git clone` the YOLOv7 repo and run the .pt checkpoint directly in a Python environment. This is probably the fastest, simplest way to get something working, but it forgoes the advantage of TensorRT acceleration, wasting the pile of acceleration packages JetPack already installs (and they are huge).

Simple as it sounds, the environment setup is still a minefield, and every package install is fairly involved. The easiest route is to follow [Hello AI World](https://github.com/dusty-nv/jetson-inference) to build a working inference environment, including the PyTorch arm64 install. Alternatively, use NVIDIA's prebuilt Docker environment (see [Running the Docker Container](https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-docker.md), which gets a reasonably simple Jetson inference environment up quickly).

## Build environment

Build flow for the pytorch_arm64 environment (rightmost path):

![](https://i.imgur.com/wVgOueO.png =400x)

:::warning
Inspecting the Python environment shows that `tensorrt` and `pycuda`, the two modules the Python API needs, are not installed. TensorRT is not actually used here; this option runs purely on the arm64 build of PyTorch.
:::

## Installation and setup

The preliminary environment setup follows [Hello AI World](https://github.com/dusty-nv/jetson-inference) exactly.

- Hardware: AGX Xavier (32G)
- Installed versions
    - JetPack 5.1
        * L4T 35.2.1
        * includes Linux kernel 5.10 and an Ubuntu 20.04-based root file system
        * TensorRT 8.5.2
        * cuDNN 8.6.0
        * CUDA 11.4.19

![](https://hackmd.io/_uploads/H1ndEe0Sn.png =600x)

### Build environment

- Due to limited storage I did not use Docker, and instead built directly from source: [Building the Project from Source](https://github.com/dusty-nv/jetson-inference/blob/master/docs/building-repo-2.md)

:::info
Depending on the target runtime and hardware, environment preparation breaks down into the following parts:

1. Prepare the environment for Jetson
    - the Jetson hardware side: ARM aarch64 architecture
    - CUDA, cuDNN, L4T, and JetPack versions
2. (Prepare the environment for TensorRT)
    - needed only if you plan to optimize with TensorRT; it is installed alongside JetPack anyway

The first two are completed as part of the JetPack install; only the last two need extra work.

3. (Install PyTorch)
    - needed only if you plan to run in a PyTorch environment
    - the arm64 build is required
4. Prepare the environment for YOLOv7
    - the libraries YOLOv7 needs to run
:::

### PyTorch version and installation

![](https://i.imgur.com/LGE6aiR.png)

The build installed here is NVIDIA's rebuild for the ARM aarch64 architecture; the wheel's file name ends in `linux_aarch64.whl`.

- Following the Hello AI World guide, install with:

```
cd jetson-inference/build
./install-pytorch.sh
```

- `install-pytorch.sh` picks a PyTorch build that matches the system configuration; it automatically selects PyTorch v1.12 for Python 3.
- According to [PyTorch for Jetson Platform](https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform-release-notes/pytorch-jetson-rel.html#pytorch-jetson-rel), a JetPack 5.1 environment should in theory only run PyTorch 1.14..., yet the installed build works in practice.
- The build and install take roughly 3-4 hours.

After installation, check the version with `python -c "import torch; print(torch.__version__)"`. Here I see:

```
torch 1.12.0a0+2c916ef.nv22.3
torchvision 0.12.0a0+9b5a3fe
```

:::info
- Official PyTorch example: [Running PyTorch Models on Jetson Nano](https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/)
- To install PyTorch by hand, see [Installing PyTorch for Jetson Platform](https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform/index.html). Using the prebuilt [l4t-pytorch](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch) and [l4t-ml](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-ml) container images and [Dockerfiles](https://github.com/dusty-nv/jetson-containers) is recommended.
:::

### Installing YOLOv7

- Clone YOLOv7 and change into `yolov7`. I created a `yolo` folder under the home directory:

```
mkdir yolo
cd yolo
git clone https://github.com/WongKinYiu/yolov7
cd yolov7
```

- Edit `requirements.txt` (nano has to be installed separately, or substitute vim/vi):

```
nano requirements.txt
```

- Comment out the following entries with `#`:
    - `opencv-python` was already installed during the Building the Project from Source step
        - the installed version is `opencv-python 4.7.0.72`
        - JetPack 5.1 also pre-installs version 4.5.4; check with `dpkg -l | grep libopencv`
        - if opencv-python is not commented out here, the installs can end up tangled and imports may fail
    - `torch` and `torchvision` were installed in the previous step from NVIDIA's ARM aarch64 builds

![](https://i.imgur.com/cZZnMo5.png =800x)

```
# opencv-python>=4.1.1
.
# torch>=1.7.0,!=1.12.0
# torchvision>=0.8.1,!=0.13.0
.
# thop  # FLOPs computation  # --> located at the bottom of requirements.txt
```

After editing `requirements.txt`, run:

```
pip3 install -r requirements.txt
```

:::info
[2022.12. Deploy YOLOv7 to Jetson Nano for Object Detection](https://www.hackster.io/spehj/deploy-yolov7-to-jetson-nano-for-object-detection-6728c3) notes:

> Because OpenCV has to be installed system-wide (it comes preinstalled with Nvidia developer pack Ubuntu 18.04), we have to create a symbolic link from global to our virtual environment. Otherwise, we won't be able to access it from our virtual environment.

That guide therefore symlinks OpenCV from the global environment into the virtual environment. The setup here does not use a virtual environment, so this step is skipped.
:::

:::warning
During execution, `import pandas` raises an ImportError. In fact, the install already printed a warning that the installed pandas is incompatible with python-dateutil (2.8.1 or later is required, but 2.7.x is installed).

Fix: run `pip3 install --upgrade python-dateutil` to upgrade to 2.8.2.
:::
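Before moving on to testing, a quick sanity check that the arm64 builds import cleanly and actually see the GPU can save a lot of debugging. A minimal sketch, nothing YOLOv7-specific:

```python
import torch
import torchvision
import cv2

print("torch       :", torch.__version__)        # expect an NVIDIA aarch64 build, e.g. ...nv22.3
print("torchvision :", torchvision.__version__)
print("opencv      :", cv2.__version__)
print("CUDA usable :", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device      :", torch.cuda.get_device_name(0))
```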
### Testing YOLOv7

- Download the model weights you need, for example:

```
# Download tiny weights
wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7-tiny.pt

# Download regular weights
wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt
```

- Run the example script
    - To use a webcam, just change the `--source` argument to `0`
    - Testing with `yolov7-tiny.pt`, inference takes roughly 20-25 ms per image

```
python3 detect.py --weights yolov7-tiny.pt --conf 0.25 --img-size 640 --source inference/images/horses.jpg
```

It finally works. Cause for celebration!!! :100:

![](https://i.imgur.com/Bx9gXig.png =400x) ![](https://i.imgur.com/rtcw5Yw.png =400x)

#### Inference time across model versions and hardware

- Comparing YOLOv7 variants
    - On a V100 with a single image, tiny infers in about 3.5 ms, while the largest variant, E6E, takes 27.7 ms (~8x)

![](https://i.imgur.com/Y1frFwu.png =600x)

- Different hardware
    - The AGX Xavier is roughly 1.5x the NX; with TensorRT FP16 optimization there should still be plenty of headroom

![](https://i.imgur.com/dZbXfyL.png =200x)

source: [2022.12. Deploying YOLOv7 to TensorRT (C++)](https://zhuanlan.zhihu.com/p/556570703)

![](https://i.imgur.com/pB2o9dn.png =400x) ![](https://i.imgur.com/mji8mx7.png)

## todo

- [x] modify script to test YOLO Pose

#### [Github/yolov7/branch:detect_pose](https://github.com/YunghuiHsu/yolov7)

`detect_pose.py`

```
python detect_pose.py --weights yolov7-w6-pose.pt --conf 0.05 --iou-thres 0.45 --img-size 1280 --source inference/images/ --no-trace
```

![](https://i.imgur.com/x8OHSTH.jpg =400x)

- Feature: measure inference performance
    - branch: log_metric

![](https://i.imgur.com/EhnKpvg.png =300x)

#### Modify script to get FPS value

- modify `detect.py` (the timing code around line 85, and the print statement around line 131):

```python=85
# Inference
t1 = time_synchronized()
with torch.no_grad():   # Calculating gradients would cause a GPU memory leak
    pred = model(img, augment=opt.augment)[0]
t2 = time_synchronized()

# Apply NMS
pred = non_max_suppression(pred, opt.conf_thres, opt.iou_thres, classes=opt.classes, agnostic=opt.agnostic_nms)
t3 = time_synchronized()
```

```python=131
# Print time (inference + NMS)
time_process = f'{s}Done. ({(1E3 * (t2 - t1)):.1f}ms) Inference, ({(1E3 * (t3 - t2)):.1f}ms) NMS'
time_process += f' | FPS : {1/(t3-t1): .1f} , Latency : {1E3 * (t3-t1):.1f}ms'
print(time_process)
```

- test with yolov7-tiny, img-size 640, horses.jpg
    - `python3 detect.py --weights yolov7-tiny.pt --conf 0.25 --img-size 640 --source inference/images/horses.jpg`
    - `(22.8ms) Inference, (5.2ms) NMS | FPS : 35.7 , Latency : 28.0ms`

![](https://i.imgur.com/vjjMLfT.png =600x)
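One caveat on single-run numbers like these: on Jetson, the first iterations are skewed by CUDA context creation and cuDNN autotuning, and the clocks ramp up with load. A small timing harness along these lines (a sketch, not part of the YOLOv7 repo; `model` and `img` as prepared in `detect.py`) gives steadier figures, and running `sudo jetson_clocks` first pins the clocks at maximum for repeatability:

```python
import time
import numpy as np
import torch

@torch.no_grad()
def benchmark(model, img, n_warmup=10, n_runs=50):
    """Average forward-pass latency/FPS; model and img as in detect.py."""
    for _ in range(n_warmup):            # warm-up: autotuning, lazy allocations
        model(img)
    torch.cuda.synchronize()             # GPU work is async; sync before timing
    times = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        model(img)
        torch.cuda.synchronize()
        times.append(time.perf_counter() - t0)
    t = np.array(times)
    print(f"latency {t.mean()*1e3:.1f} +/- {t.std()*1e3:.1f} ms | FPS {1/t.mean():.1f}")
```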
# Option 2: Call TensorRT through the Python API for acceleration

Environment: reuses the hardware and setup from Option 1.

- For details, see [Convert PyTorch model to TensorRT for 3-6x speedup](https://hackmd.io/_oaJhYNqTvyL_h01X1Fdmw?both)
- Pipeline: PyTorch > ONNX > TensorRT (NMS included)
- AGX Xavier, tested with 5 images (values are FPS):

| model       | size | NMS | .pt  | .trt (fp16) | speedup |
|:-----------:|:----:|:---:|:----:|:-----------:|:-------:|
| yolov7-tiny | 640  | v   | 24.0 | 145.5       | 6.1x    |
| yolov7      | 640  | v   | 14.8 | 40.2        | 2.7x    |
| yolov7      | 1280 | v   | 7.1  | 11.5        | 1.6x    |

![](https://i.imgur.com/CCN5O8z.png =400x)

:::info
NMS, the notoriously time-consuming post-processing step of object-detection models, already has a ready-made plugin that can be folded into the TRT inference engine and executed in the C++ environment.
:::
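For reference, deserializing the engine and running it from Python needs only the `tensorrt` and `pycuda` modules mentioned earlier. Below is a minimal sketch against the binding-based TensorRT 8.5 API that ships with JetPack 5.1, assuming binding 0 is the input and using the engine file built earlier; the `BaseEngine` class in the linked YOLOv7 notebook does the same thing with more care:

```python
import numpy as np
import tensorrt as trt
import pycuda.autoinit            # noqa: F401 -- creates and activates a CUDA context
import pycuda.driver as cuda

logger = trt.Logger(trt.Logger.WARNING)
with open("yolov7-tiny.trt", "rb") as f:
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()
stream = cuda.Stream()

# One page-locked host buffer and one device buffer per binding
host_bufs, dev_bufs, bindings = [], [], []
for i in range(engine.num_bindings):
    size = trt.volume(engine.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(size, dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

def infer(img: np.ndarray):
    """img: preprocessed float32 array matching the input binding's shape."""
    np.copyto(host_bufs[0], img.ravel())          # assumes binding 0 is the input
    cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
    context.execute_async_v2(bindings, stream.handle)
    for host, dev in zip(host_bufs[1:], dev_bufs[1:]):
        cuda.memcpy_dtoh_async(host, dev, stream)
    stream.synchronize()
    return host_bufs[1:]                          # raw output tensors, flattened
```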
# Option 3: TensorRT combined with DeepStream for accelerated streaming

- For details, see [Accelerate multi-streaming cameras with DeepStream and deploy YOLO series models](https://hackmd.io/@YungHuiHsu/rJKx-tv4h)

![](https://hackmd.io/_uploads/SJKGP3Er3.png)

If the deployment needs to ingest several cameras at once, it becomes necessary to use DeepStream to accelerate the multimedia-streaming side, speeding up input and output handling through CUDA, shared memory, and similar techniques.

The table below shows YOLOv7 throughput when batch-processing streaming data accelerated with DeepStream:

![](https://hackmd.io/_uploads/Syvsr2Erh.png =600x)
(source: [NVIDIA-AI-IOT/yolo_deepstream](https://github.com/NVIDIA-AI-IOT/yolo_deepstream))

---

# Reference

### PyTorch Models on Jetson

#### [2022.03. Running PyTorch Models on Jetson Nano](https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/)

- [x] Uses .pt directly; no TensorRT acceleration
- The simplest, most direct approach; the example uses YOLOv5 but swaps straight over to v7
- To use TensorRT, see [Using PyTorch with TensorRT through ONNX](https://github.com/NVIDIA/TensorRT/blob/master/quickstart/IntroNotebooks/4.%20Using%20PyTorch%20through%20ONNX.ipynb)

#### [2022.08. How to Deploy YOLOv7 to Jetson Nano](https://blog.roboflow.com/how-to-deploy-yolov7-to-jetson-nano/)

- Deploys into the [inference-server container](https://docs.roboflow.com/inference/nvidia-jetson)
    - uses Roboflow's own Docker container, then installs the required libraries
    - JetPack is only supported up to version 4.5.1
- [x] Uses .pt directly; no TensorRT acceleration
- An earlier write-up that needs more manual installation; containers with all the relevant packages prebuilt should be findable by now

#### [2022.12. Deploy YOLOv7 to Jetson Nano for Object Detection](https://www.hackster.io/spehj/deploy-yolov7-to-jetson-nano-for-object-detection-6728c3)

- Manages the environment with "virtualenv and virtualenvwrapper", which also requires setting the OpenCV path separately
- "I've tested YOLOv7 algorithm with ==PyTorch 1.8 and 1.9 and both worked fine=="
    - The OS is probably JetPack 4.6.x (L4T R32.x), which is what allows installing pytorch arm64 1.8.0
    - According to [PyTorch for Jetson Platform](https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform-release-notes/pytorch-jetson-rel.html#pytorch-jetson-rel), ==JetPack 5.1 only supports PyTorch 1.14==...

![](https://i.imgur.com/ROYdrEA.png)

- [x] Uses .pt directly; no TensorRT acceleration
- Using a virtual environment for isolation is not recommended here; the isolation is too weak and error-prone. Use Docker instead.

#### [2022.09. NVIDIA Jetson series part 12: building assorted YOLO l4t containers](https://zhuanlan.zhihu.com/p/566452918)

- [x] Uses .pt directly; no TensorRT acceleration
- > For future use with the TensorRT acceleration engine, the l4t-ml:r34.x.y-py3 image is also recommended, with an independent container created per project
- The container tag used is l4t-ml:r34.1.1-py3
    - l4t-ml bundles PyTorch, TensorFlow, scikit-learn, JupyterLab, and other mainstream machine-learning packages
- ==The OpenCV problem==
    > - Installing the Python OpenCV via "pip3 install" works on x86 platforms, but on Jetson the resulting install cannot be imported properly.
    > - Since the l4t-ml image already ships an OpenCV environment, the "opencv-python" line in the requirements.txt of all three open-source projects must be disabled with "#", otherwise imports will fail.

#### [2022.12. Deploying YOLOv7 to TensorRT (C++)](https://zhuanlan.zhihu.com/p/556570703)

- [x] Uses .pt directly
- [x] Uses TensorRT acceleration
- Modifies the source to produce ONNX format
- Then deploys with the TensorRT Python inference source code from [shouxieai/tensorRT_Pro](https://github.com/shouxieai/tensorRT_Pro/blob/main/tutorial/README.zh-cn.md) (converting to .trt format)
    - Note: it should be possible to skip the intermediate ONNX conversion step
- Timing comparison before and after TensorRT deployment (across build environments):

![](https://i.imgur.com/31ycNHq.png =600x)

#### [Deploy YOLOv8 on NVIDIA Jetson using TensorRT and DeepStream SDK](https://wiki.seeedstudio.com/YOLOv8-DeepStream-TRT-Jetson/)

## Github

- [shouxieai/tensorRT_Pro](https://github.com/shouxieai/tensorRT_Pro/blob/main/tutorial/README.zh-cn.md)
    - the approach used in [2022.12. Deploying YOLOv7 to TensorRT (C++)](https://zhuanlan.zhihu.com/p/556570703)
- ~~[triple-Mu/YOLOv8-TensorRT](https://github.com/triple-Mu/YOLOv8-TensorRT)~~
- [YOLO Series TensorRT Python/C++](https://github.com/Linaom1214/TensorRT-For-YOLO-Series)
    - the officially recommended conversion route
