---
tags: course planning
---

# Darknet YOLO Beginner's Handbook

[TOC]

# Computer vision and object detection: introduction

https://pjreddie.com/darknet/yolo/

Ameba Pro 2 support for YOLO will arrive relatively late.

[Vision Transformer (ViT) notes](https://hackmd.io/@YungHuiHsu/ByDHdxBS5): the mainstream approach in recent years.

[YOLO v4 model training walkthrough](https://ithelp.ithome.com.tw/articles/10282549?sc=pt)

[Object detection: differences across YOLO v1-v5](https://u9534056.medium.com/%E7%9B%AE%E6%A8%99%E6%AA%A2%E6%B8%AC-yolo-v1-v5-%E5%85%A8%E7%89%88%E6%9C%AC%E5%B7%AE%E7%95%B0-c648cc5e49f1)

# Running the study group

* Once a week, 30 minutes to one hour per session. Guided, with short talks.
* Questions are saved for the end.

# Session content

* Run as a reading/study group
* Architecture overview
* Basic hands-on operation
* Thoughts on the YOLO forks and variants

# Prerequisites

## Learning path (from getting started to getting buried)

Given the schedule, this chapter focuses on YOLO fundamentals and how to get a working result quickly.

```graphviz
digraph hierarchy {
  nodesep=1.0 // increases the separation between nodes
  node [color=Green,fontname=Courier,shape=box] // all nodes use this shape and colour
  edge [color=Blue, style=dashed]               // all edges look like this
  "YOLO prerequisites"->{"Hardware" "Software" "Knowledge"}
  "Hardware"->{"A capable PC" "A capable GPU" "Environment setup"}
  "Environment setup"->{"Install the GPU driver" "Install CUDA" "Install cuDNN"}
  "Software"->{"Linux" "Docker" "Conda" "Python" "Matlab"}
  "Python"->{"PyTorch"}
  "Docker"->{"What is Docker" "Why use Docker" "Common Docker commands"}
  "Conda"->{"What is Conda" "Why use Conda" "Common Conda commands"}
  "Matlab"->{"What is Matlab" "Why use Matlab" "Common Matlab commands"}
  "Knowledge"->{"Basic Python syntax" "What is a neural network" "What AI can do today"}
  {rank=same; "Hardware" "Software" "Knowledge"} // put them on the same level
}
```

## What is machine learning?

- Introduction: https://www.wikiwand.com/zh-tw/%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0
- Supervised vs. unsupervised learning: https://ai4dt.wordpress.com/2018/05/25/%E4%B8%89%E5%A4%A7%E9%A1%9E%E6%A9%9F%E5%99%A8%E5%AD%B8%E7%BF%92%EF%BC%9A%E7%9B%A3%E7%9D%A3%E5%BC%8F%E3%80%81%E5%BC%B7%E5%8C%96%E5%BC%8F%E3%80%81%E9%9D%9E%E7%9B%A3%E7%9D%A3%E5%BC%8F
- Video: https://www.youtube.com/watch?v=sB_IGstiWlc

## What is a neural network (NN)?

- CNN vs. RNN: https://cs.stanford.edu/people/karpathy/convnetjs/demo/classify2d.html

## What is YOLO?

https://pjreddie.com/darknet/yolo/

## What is a dataset?

- Why do we need a dataset?
- YOLOv7 dataset layout (example)
  ![](https://i.imgur.com/huatS0c.png)
  The data is split into test, train, and validation sets.
  ![](https://i.imgur.com/O8d1kJs.png)
  `data.yaml` is used during training.
- Common dataset aggregation sites:
  - https://public.roboflow.com/
  - https://www.kaggle.com/datasets

## GPU or NPU?

What kind of hardware do I need? Is deep learning impossible without a GPU?
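Training without a GPU is technically possible but very slow, so a quick way to answer that question is to ask PyTorch (which the training section below uses anyway) whether it can see a CUDA device. A minimal sketch, assuming the `torch` package is already installed:

```python
# Minimal sketch: check whether PyTorch can see a CUDA-capable GPU.
# Assumes the torch package is installed; otherwise training falls back to the CPU.
import torch

if torch.cuda.is_available():
    print("CUDA GPU found:", torch.cuda.get_device_name(0))
else:
    print("No CUDA GPU detected; training will run on the CPU (much slower).")
```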
## Setting up the development environment

- CUDA, cuDNN
- Docker
- Python (PyTorch)
- Jupyter

## Basic programming skills

- Python
- Matlab
- OpenCV: https://zh.wikipedia.org/zh-tw/OpenCV
  OpenCV is used constantly in computer vision, not only in AI. It covers image-related work (colour, object edges, optical flow): you can reorder an image from RGB to BGR, convert it to grayscale, or go further and binarize it. Think of it as a code version of Photoshop...
  ![](https://i.imgur.com/BuO7P5c.png)
  https://blog.csdn.net/jjddss/article/details/72841141
  Beyond the standard install, adding the OpenCV contrib modules enables things like face detection or OCR.
  ![](https://i.imgur.com/iYyJUfa.jpg)
  https://pyimagesearch.com/2018/09/24/opencv-face-recognition/
  ![](https://i.imgur.com/IPDGUPz.jpg)
  https://pyimagesearch.com/2018/09/17/opencv-ocr-and-text-recognition-with-tesseract/
- Statistics

## Basic command-line skills

- Linux
- Windows

## References

- https://hackmd.io/@neverleave0916/YOLOv4
- https://aiacademy.tw/yolo-v4-intro/
- https://zh-v2.d2l.ai/d2l-zh.pdf (PDF version of the textbook)
- https://github.com/d2l-ai/d2l-zh (a good textbook)
- https://zhuanlan.zhihu.com/p/39542494 (setting up PyTorch with conda)
- https://www.paperswithcode.com (code together with papers)

## GitHub references

- Matlab code (beginners with a MATLAB licence can test quickly): https://github.com/matlab-deep-learning/pretrained-yolo-v4
- PyTorch resources: https://github.com/bubbliiiing/yolov4-tiny-pytorch
- Darknet: https://github.com/AlexeyAB/darknet

# YOLO training environment and workflow (PyTorch)

1. Ubuntu Linux works; Windows has not been tested.
2. Install the NVIDIA driver and CUDA.
3. Create a Python virtual environment from the command line:

```
$ git clone https://github.com/WongKinYiu/yolov7    # YOLOv7 GitHub repo
$ wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt
$ unzip Aquarium\ Combined.v2-raw-1024.yolov7pytorch.zip -d Aquarium   # extract the dataset; this layout targets yolov7
$ conda create -n yolov7 python=3.8                 # create the env
$ conda activate yolov7                             # activate it before installing packages
$ pip install -r requirements.txt                   # install Python dependencies
$ pip install numpy==1.20.3                         # yolov7 pins a specific numpy version; skipping this causes errors
```

4. Build or download a dataset. Example source: [https://public.roboflow.com/object-detection/aquarium/2/download/yolov7pytorch](https://public.roboflow.com/object-detection/aquarium/2/download/yolov7pytorch)

![](https://i.imgur.com/vj0SFHy.png)

Choose the "YOLO v7 PyTorch" export format.

![](https://i.imgur.com/7i8rgb2.png)

After downloading, place it under the yolov7 directory (feel free to keep it elsewhere if you prefer).

Once the environment is ready, the main training flags are:

- `--device <gpu_id>`: pick a specific GPU; with several cards and no value given, all of them are used.
- `--batch 4`: batch size (the default is 64).
- `--data`: the dataset's yaml file, which describes the image paths, the labels, and the image counts.
  ![](https://i.imgur.com/hQl42no.png)
  This file has been edited so that the paths are absolute.
- `--cfg`: the yolov7 model variant to train; the cfg directory also includes a v7-tiny version.
  ![](https://i.imgur.com/p6XLsw8.png)
- `--img`: input image size.
- `--hyp`: path to the hyperparameter file (adjust it yourself if you are curious).

More details are in the GitHub documentation, or see [https://blog.csdn.net/weixin_51697369/article/details/123446928](https://blog.csdn.net/weixin_51697369/article/details/123446928).

```
$ python train.py --batch 4 --cfg cfg/training/yolov7.yaml --img 768 1024 --epochs 55 --data ./Aquarium/data.yaml --weights '' --name yolov7 --hyp data/hyp.scratch.p5.yaml
```

![](https://i.imgur.com/E71hoXo.png)

You can watch resource usage with nvtop, as shown below.

![](https://i.imgur.com/LOADgpu.png)

Training time depends on your hardware (CPU/GPU). When it finishes you should see something like:

```
55 epochs completed in 0.215 hours.
Optimizer stripped from runs/train/yolov717/weights/last.pt, 74.9MB
Optimizer stripped from runs/train/yolov717/weights/best.pt, 74.9MB
```

For detection:

- `--conf`: confidence threshold; the lower it is, the more easily objects are detected, but the less accurate the results.
- `--source`: image path.

```
python detect.py --weights ./runs/train/yolov718/weights/best.pt --conf 0.1 --source ./Aquarium/valid/images/IMG_2279_jpeg_jpg.rf.c93235205522529fc7e9626bf9175cba.jpg   # run on a single image
```

![](https://i.imgur.com/qdgpimw.png)

So many fish <>< ~ <3

![](https://i.imgur.com/cmGWUid.jpg)
Test-set results
![](https://i.imgur.com/vfUmPyT.jpg)
Training-set results
![](https://i.imgur.com/f822QNy.jpg)
Confusion matrix
![](https://i.imgur.com/UX7RTvQ.png)

Using a webcam:

```
python detect.py --weights ./runs/train/yolov718/weights/best.pt --conf 0.1 --source 0   # use video0 (webcam)
```

![](https://i.imgur.com/IfRNx0v.png)

# YOLO training environment and workflow (Darknet)

## Darknet

Compared with popular frameworks such as TensorFlow and Caffe, Darknet has some unique advantages of its own.

YOLOv3: YOLOv3 improves on YOLOv2 by introducing multi-scale prediction and residual blocks. It uses Darknet-53 as its main feature extractor and supports more object classes. It also revises the loss function to improve detection of small objects.

YOLOv4: YOLOv4 further improves accuracy and speed over YOLOv3. It uses CSPDarknet53 as the feature extractor and introduces the Bag of Freebies (BoF) and Bag of Specials (BoS) ideas to raise detection accuracy. It also adopts newer techniques such as the Mish activation function and a PANet neck for better performance.

YOLOv5: YOLOv5 focuses on speed and ease of deployment. It uses a lighter network and moves the framework from Darknet to PyTorch. It also introduces a new anchor mechanism and improved data augmentation for higher accuracy. Although YOLOv5 improves performance, it caused some controversy in the community because its version naming and development process differ from earlier YOLO releases.

### About the Darknet deep learning framework

Darknet is an open-source neural network framework written in C and CUDA by Joseph Redmon. It is fast, easy to install, and supports both CPU and GPU computation.

The source code is on GitHub: https://github.com/pjreddie/darknet
You can read more on the official site: https://pjreddie.com/darknet/
Adapted from https://www.twblogs.net/a/5c840d3dbd9eee35fc13cdc4

### Install build tools

```
sudo apt update
sudo apt upgrade
sudo apt-get install build-essential cmake
```

### Install cuDNN

https://developer.nvidia.com/rdp/cudnn-download

```
cd cudnn-linux-x86_64-8.7.0.84_cuda11-archive
sudo cp include/* /usr/local/cuda/include/
sudo cp lib/* /usr/local/cuda/lib64/
```

### Install OpenCV 3.4.19 with opencv_contrib 3.4.19

```
git clone https://github.com/opencv/opencv
cd opencv
git checkout 3.4.19
git clone https://github.com/opencv/opencv_contrib
cd opencv_contrib
git checkout 3.4.19
cd ..
mkdir -p build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D WITH_CUDA=ON \
      -D WITH_CUDNN=ON \
      -D OPENCV_DNN_CUDA=ON \
      -D ENABLE_FAST_MATH=1 \
      -D CUDA_FAST_MATH=1 \
      -D WITH_CUBLAS=1 \
      -D INSTALL_PYTHON_EXAMPLES=ON \
      -D OPENCV_EXTRA_MODULES_PATH=../opencv_contrib/modules \
      -D WITH_QT=ON \
      -D WITH_GTK=ON \
      -D WITH_OPENGL=ON \
      -D BUILD_EXAMPLES=ON ..
make -j12
sudo make install
sudo ldconfig -v
opencv_version
```

The screenshots below were taken with 3.4.16; the commands above have been updated to 3.4.19.

![](https://i.imgur.com/MK2WjLE.png)
![](https://i.imgur.com/0tUjQIQ.png)
![](https://i.imgur.com/WtAhL7D.png)
![](https://i.imgur.com/FxiEwvn.png)

### Install Darknet

```
# git clone https://github.com/pjreddie/darknet
git clone https://github.com/arnoldfychen/darknet   # use this author's fork to work around GPU version issues
cd darknet
make
wget https://pjreddie.com/media/files/tiny.weights
./darknet classify cfg/tiny.cfg tiny.weights data/dog.jpg
```

You should see something like:

```
data/dog.jpg: Predicted in 0.160994 seconds.
malamute: 0.167168
Eskimo dog: 0.065828
dogsled: 0.063020
standard schnauzer: 0.051153
Siberian husky: 0.037506
```

#### Possible errors

- Newer graphics cards: https://blog.csdn.net/XCCCCZ/article/details/112793411
- Older OpenCV versions: https://www.cxyzjd.com/article/weixin_41840088/114594072

If you see

`./darknet: error while loading shared libraries: libopencv_highgui.so.3.4: cannot open shared object file: No such file or directory`

install the missing package:

```
sudo apt-get install libopencv-highgui-dev
```

### Darknet inference test

yolov3:

```
wget https://pjreddie.com/media/files/yolov3.weights
./darknet detect cfg/yolov3.cfg yolov3.weights data/dog.jpg
```

![](https://i.imgur.com/uorx7NL.jpg)

yolov7-tiny:

```
wget https://github.com/AlexeyAB/darknet/releases/download/yolov4/yolov7-tiny.weights
./darknet detect cfg/yolov7-tiny.cfg yolov7-tiny.weights data/dog.jpg
```

v7-tiny result:

![](https://i.imgur.com/6rqgN7t.png)

### Darknet training

#### Download pre-trained weights

https://github.com/pjreddie/darknet/issues/2557

Darknet cfg/weights files, currently tested for inference only:

- cfg: https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov7-tiny.cfg
- weights: https://github.com/AlexeyAB/darknet/releases/download/yolov4/yolov7-tiny.weights
- weights for fine-tuning: https://github.com/AlexeyAB/darknet/releases/download/yolov4/yolov7-tiny.conv.87

```
wget https://github.com/AlexeyAB/darknet/releases/download/yolov4/yolov7-tiny.conv.87   # weights for fine-tuning
```

#### Training

##### Prepare the dataset

```
cp scripts/get_coco_dataset.sh data
cd data
bash get_coco_dataset.sh
```

Script contents:
![](https://i.imgur.com/p9LIPWZ.png)

Wait for the download to finish.
![](https://i.imgur.com/WIqOTWF.png)
![](https://i.imgur.com/QGcSZLY.png)

Edit the dataset paths in the config (cfg/coco.data):
![](https://i.imgur.com/R53W5CH.png)

```
./darknet detector train <path to *.data> <path to yolov7-tiny.cfg> <path to yolov7-tiny.conv.87> -map -gpus 0,1
./darknet detector train cfg/coco.data cfg/yolov7-tiny.cfg yolov7-tiny.conv.87 -gpus 0,1
```

![](https://i.imgur.com/9GY6BD6.png)

On my setup, one 3090 Ti needs roughly 250 hours; with four 2080 Tis it takes roughly 24 hours.

![](https://i.imgur.com/UCX5uiI.png)

#### Custom Darknet training (yolov4-tiny)

Based on the American Sign Language alphabet (https://zh.wikipedia.org/zh-tw/%E7%BE%8E%E5%9C%8B%E6%89%8B%E8%AA%9E%E5%AD%97%E6%AF%8D). This run uses:

1. https://public.roboflow.com/object-detection/american-sign-language-letters
2. https://blog.roboflow.com/computer-vision-american-sign-language/

![](https://i.imgur.com/4qqFY9U.png)

##### Modify the cfg

1. Create a custom folder under the Darknet directory (here I use custom_cfg).
   ![](https://i.imgur.com/zFllQPi.png)
2. Create the *.data and *.names files.
3. Copy the matching config file from darknet/cfg into custom_cfg (yolov4-tiny.cfg).
4. Adjust `filters` and `classes`:
   - `classes` is the total number of classes in the dataset.
   - `filters` is `(classes + 5) * 3` (a quick worked check follows right after this list).
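As a quick check of that arithmetic, here is a tiny hypothetical helper (`yolo_filters` is not part of Darknet, just an illustration) applied to the 26-letter ASL dataset used here:

```python
# Hypothetical helper: in a Darknet cfg, the filters value of the convolutional
# layer just before each [yolo] section is (classes + 5) * 3.
def yolo_filters(num_classes: int) -> int:
    return (num_classes + 5) * 3

print(yolo_filters(26))  # 26 ASL letter classes -> filters = 93
```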
##### Download pre-trained weights

Officially provided: https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v4_pre/yolov4-tiny.conv.29

##### Train

```
./darknet detector train custom_cfg/sl.data custom_cfg/yolov4-tiny.cfg weights/yolov4-tiny.conv.29
```

![](https://i.imgur.com/VGilggk.png)

### yolov4 inference (PC)

```
./darknet detector test custom_cfg/sl.data custom_cfg/yolov4-tiny.cfg backup_sl/yolov4-tiny_last.weights -gpus 1
# when prompted, enter the test image path, e.g.
# /home/oliver/code/c/darknet/data/signLanguage/test/B14_jpg.rf.ed5ba6d44f55ab03e62d2baeac4aa1aa.jpg
```

### yolov4 inference (PC webcam)

```
./darknet detector demo <custom_cfg/*.data> <custom_cfg/yolov4-tiny_*.cfg> <backup_smd/yolov4-tiny_smd_last.weights> -gpus 2 -c 1 -thresh 0.1
```

- `-gpus 2`: which GPU to run on (here the second card)
- `-c 1`: use /dev/video1
- `-thresh 0.1`: detection threshold

## Deploying the Darknet environment with Docker

### dockerfile_opencv3_4_16_darknet (a YOLO starter pack)

https://github.com/Oliver0804/dockerfile_opencv3_4_16_darknet

This Dockerfile builds an image for a deep learning development environment that bundles the key components (NVIDIA CUDA, OpenCV, and Darknet). Thanks to Docker's packaging, the environment can be deployed and run on different systems with little effort.

- Base image: we start from an Ubuntu 20.04 image with CUDA 11.0.3 support.
- Tooling: we then install a set of basic tools, including a C++ compiler, CMake, Git, and a few other required packages.
- Python environment: we install Python and the related libraries and configure the Python runtime.
- OpenCV (3.4.16): we fetch the OpenCV sources from GitHub and compile and install them, with CUDA support enabled for better performance.
- Darknet: we also fetch the Darknet sources from GitHub and compile and install them, again with CUDA and OpenCV support enabled.
- X11 support: we add the settings needed to display a GUI from inside the container.

Finally, the default command is set to bash, so starting the image drops you straight into a shell.

#### Quick usage

1. Build the image from the Dockerfile provided in this repo.
2. Enter the container and use darknet there, or
3. Clone darknet on the host and work on that path from outside the container.

Use either 2 or 3; you do not need both.

#### Build the image

```
docker build -t <your_image_name> --no-cache .
```

e.g. `docker build -t oliver_darknet --no-cache .`

After the build finishes, check that the image exists:

```
docker images
```

#### 1. Working inside the container

##### Start a container

```
docker run --gpus all -it <your_image_name>
```

e.g. `docker run --gpus all -it oliver_darknet`

##### Download the yolov3 weights

```
cd /darknet
wget https://pjreddie.com/media/files/yolov3.weights
```

##### Run a test

```
cd /darknet
./darknet detect cfg/yolov3.cfg yolov3.weights data/dog.jpg
```

#### 2. Working outside the container (the container is removed when it exits; the next run creates a fresh one from the image)

##### Clone the yolo project anywhere on the host

```
git clone https://github.com/AlexeyAB/darknet
cd ./darknet
wget https://pjreddie.com/media/files/yolov3.weights
```

Run it; the container exits and is removed as soon as the command finishes (`--rm`).

![](https://github.com/Oliver0804/dockerfile_opencv3_4_16_darknet/blob/main/pic/%E6%88%AA%E5%9C%96%202023-05-28%20%E4%B8%8B%E5%8D%8810.42.25.png)

```
docker run --gpus all --rm -v $PWD:/workspace -w /workspace <your_image_name> darknet detect cfg/yolov3.cfg yolov3.weights data/dog.jpg
```

e.g. `docker run --gpus all --rm -v $PWD:/workspace -w /workspace oliver_darknet darknet detect cfg/yolov3.cfg yolov3.weights data/dog.jpg`

![](https://github.com/Oliver0804/dockerfile_opencv3_4_16_darknet/blob/main/pic/%E6%88%AA%E5%9C%96%202023-05-28%20%E4%B8%8B%E5%8D%8810.43.01.png)
![](https://i.imgur.com/r1Xawdo.jpg)

### yolov4 inference (Ameba Pro 2)

The steps are still fairly tedious, and there is no offline version of the conversion tool yet; keep that in mind if model confidentiality matters to you.

1. Convert the model to .nb. The official online converter is at https://www.amebaiot.com/en/amebapro2-ai-convert-model/. Pack the trained weights and the .cfg used for training into a single zip (no Chinese characters anywhere, including in the cfg comments).
   ![](https://i.imgur.com/ZgKEAej.png)
   After uploading, wait roughly 10~20 minutes; the converted .nb file is sent to your email. Download it and you can start the Arduino development.
2. While waiting, set up the Arduino development environment. See the Ameba Pro 2 GitHub for details: https://github.com/ambiot/ambd_arduino. Add the extra boards manager URL:

```
https://github.com/ambiot/ambpro2_arduino/raw/main/Arduino_package/package_realtek.com_amebapro2_index.json
```

   ![](https://i.imgur.com/7GRGfhc.png)
   Then select Ameba Pro 2.
   ![](https://i.imgur.com/w946fNw.png)
3. Open the Ameba Pro 2 NN example.
   ![](https://i.imgur.com/TGCw4AT.png)
4. Model swapping still needs some manual steps until the Realtek folks ship an update. Simply replace the .nb you just received: swap yolov4_tiny.nb for our own trained model (26 classes). The .nb file lives at the path shown in the screenshot below; the exact location varies slightly with the AmebaPro2 SDK version you use.
   ![](https://i.imgur.com/BHpXBDY.png)
5. Finally, because the model Realtek ships is trained on the COCO dataset (80 classes), the label list drawn by the OSD has to be replaced with our own 26 classes.
   ![](https://i.imgur.com/pkeZgvz.png)
   Change it to:
   ![](https://i.imgur.com/OBGPJiw.png)
   Also change `item.name()` on line 114 to `itemList[obj_type].objectName` (tested 2023/02/16: this has already been updated on GitHub, so later versions can skip this step).
   ![](https://i.imgur.com/iiLoTo6.png)
6. Ameba Pro 2 YOLO demo
   {%youtube -iYyFTtXmWE %}
7. Ameba Pro 2 quick model-swap script. At the moment swapping the NN model means working under the Arduino directory every time (the path is quite deep), and you usually want to keep the previous model when swapping, so this script was born. The current .bat only targets Windows (a rough cross-platform sketch follows at the end of this note).
   {%youtube 6IabnHTvXNE %}

### Slides

https://docs.google.com/presentation/d/1Yfrjv8MI0mw2o8LRhpUgn6YCPf0URzcRoqOvHbtCipg/edit#slide=id.g203d87f4c89_0_0
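Returning to step 7 above: for readers not on Windows, the model swap could in principle be scripted in Python instead of a .bat file. The sketch below is hypothetical, not the author's script; every path and file name is a placeholder, and the real .nb location depends on your Arduino/AmebaPro2 SDK version as noted in step 4.

```python
# Hypothetical sketch: back up the bundled .nb model, then copy a newly trained
# model into its place. All paths are placeholders; adjust them to your
# Arduino/AmebaPro2 SDK installation.
import shutil
from pathlib import Path

sdk_model_dir = Path.home() / "Arduino" / "path-to-amebapro2-nn-example"   # placeholder
target = sdk_model_dir / "yolov4_tiny.nb"             # model shipped with the example
new_model = Path("Downloads") / "my_custom_model.nb"  # the .nb from the online converter

if target.exists():
    shutil.copy2(target, str(target) + ".bak")        # keep the previous model
shutil.copy2(new_model, target)
print(f"Replaced {target} with {new_model}")
```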
