# YOLOv4 Hands-on Tutorial (Using the Official Repo)
###### tags: `Tutorial` `YOLOv4`
:::info
:boy: **Author:** neverleave0916
:mailbox_closed: **Contact: <neverleave0916@gmail.com>**
:point_right: <font color="#B24B42">**Last modified:** 2021/10/10 17:06</font>
:::
>- [Docker Hub](https://hub.docker.com/repository/docker/neverleave0916/yolo_v4)
>- [Github](https://github.com/neverleave0916/docker-yolo_v4)
>- YOLOv4 in Ubuntu (the walkthrough is not tied to any particular OS) :video_game:
>- **Last tested: 2021.03.23**
>- Unless a path is stated explicitly, all paths after cloning the project are relative to the darknet root directory (./darknet/).
> - Software versions (tested):
| Software | Tested version (1) | Tested version (2) |
|:--------------|:--------|:--------|
| Ubuntu | 18.04 | 20.04 |
| Docker | 19.03.8 | 20.10.5 |
| Nvidia Driver | 440.82 | 450.102 |
| CUDA | 10.2 | 11.0 |
# Introduction :penguin:
* This project is implemented with Docker, so as long as you have a Docker environment it is **not tied to any operating system** :grey_exclamation:
* If you do not use Docker, you can skip the Docker container steps and install directly
* If you run into problems, feel free to leave a comment or send me an email! Email gets a faster reply :sparkles:
* This tutorial follows the official instructions and uses the official code, so you can use it with confidence :pig2:
# Recent Updates
* Added a **remote desktop** tutorial
* Added/updated the parameter descriptions for starting the Docker container
* Added text explanations of the YOLOv4 execution process
<br><br>
# Let's Begin. Good Luck, Everyone!
## 1. Start the Docker Container
#### (Option 1) Use an image with the environment already installed
<font color="#B24B42">After this, skip the remaining installation steps and jump to **3. Test YOLOv4**</font>
[Docker image page](https://hub.docker.com/r/neverleave0916/yolo_v4)
> This image only adds the packages required by the official instructions plus a remote desktop server, so it is safe to use
> How this image was built is documented [here](https://hackmd.io/@neverleave0916/Hyvoh_O1D)
Start the Docker container
```console=
docker run --gpus all --ipc=host -it -v {{host working directory path}}:/workspace -p 8090:8090 -p 5901:5901 --name=yolov4 neverleave0916/yolo_v4
```
* --gpus all exposes all GPUs to the container (a quick check that the GPUs are visible is shown right after this list)
* Port 8090 can be used to watch training progress in real time: while training, open http://ip:8090 in a browser (the first load may spin for a long while, which is normal)
* Port 5901 is used for the xfce desktop environment (pre-installed; you must change the password, see section 5 below)
* --name names the container and can be changed freely; without it, it is hard to tell later which container is the one you just started
* Host working directory path (example: /mnt/MIL/neverleave0916): binds a path on the host to a path inside the container, so files are not lost after the Docker container is removed
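A quick sanity check, as a minimal sketch (it assumes the NVIDIA Container Toolkit is already set up on the host), is to run nvidia-smi inside the container to confirm that --gpus all actually exposed the GPUs:
```console=
# Inside the container: list the GPUs visible to it.
# If this fails or shows no devices, recheck the --gpus flag
# and the NVIDIA Container Toolkit installation on the host.
nvidia-smi
```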
#### (Option 2) Install the environment yourself
<font color="#B24B42">After this, continue with the installation steps below</font>
```console=+
docker run --gpus all --ipc=host -it -v {{host working directory}}:/workspace -p 8090:8090 -p 5901:5901 --name=yolov4 nvcr.io/nvidia/pytorch:20.03-py3
```
#### APT-GET troubleshooting
[Fixing the APT-GET “Couldn't create temporary file for passing config to apt-key” error](https://www.kaijia.me/2017/08/apt-get-couldnt-create-temporary-file-for-passing-config-to-apt-key-issue-solved/)
```console=+
chmod 777 /tmp
apt-get update
```
## 2. Install the YOLO Environment Requirements
The following steps install the packages YOLOv4 needs:
* CMake >= 3.12
* CUDA >=10.0 (For GPU)
* OpenCV >= 2.4 (For CPU and GPU)
* cuDNN >= 7.0 for CUDA 10.0 (for GPU)
* OpenMP (for CPU)
* Other Dependencies: make, git, g++
### (1).CMake >= 3.8 (for modern CUDA support)(CPU and GPU)
* Install CMake
```console=
apt install cmake
```
* To check the version
```console=
cmake --version
output:cmake version 3.14.0
```
### (2). CUDA >=10.0 (For GPU)
Please install CUDA yourself; skip this step if it is already installed
### (3).cuDNN
Please install cuDNN yourself; skip this step if it is already installed
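If you are not sure what is already installed, here is a rough sketch of how to check the CUDA and cuDNN versions (the cuDNN header location varies between installations, so the last command may need adjusting):
```console=
# CUDA toolkit version
nvcc --version
# Driver-side CUDA version and GPU status
nvidia-smi
# cuDNN version macros (cudnn_version.h on cuDNN 8+, cudnn.h on older releases)
cat /usr/include/cudnn*.h | grep -A 2 CUDNN_MAJOR
```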
### (4).OpenCV >= 2.4 (For CPU and GPU)
**Note:** <font color="RED">OpenCV is an optional install for YOLO, but if you install it you get a window to display the output image from YOLO detection; otherwise the output image from YOLO is saved as an image file. OpenCV is enabled in this tutorial so that you can see YOLO's output in a window.</font>
Note: the other OpenCV installation methods I found online all failed when I tested them, so please follow the method below.
* Install OpenCV (via apt)
```console=
apt install libopencv-dev python3-opencv
```
* To check the version
```console=
opencv_version
output:3.2.0
```
### (5). Other requirements
```console=
apt install make git g++
```
## 3. Test YOLOv4
The environment is ready, so it is time to run YOLOv4. Keep going, you are almost there!
### a. Clone the project from GitHub
```console=
git clone https://github.com/AlexeyAB/darknet
chmod -R 777 darknet/
```
### b. Compile with make
1. Open the Makefile
```console=
cd darknet
vim Makefile
```
2. Change the following options to enable GPU training and image display (a sed sketch for applying these edits non-interactively is shown after the compile-error note below)
```console=
GPU=1
CUDNN=1
CUDNN_HALF=1
OPENCV=1
AVX=0
OPENMP=0
LIBSO=1
ZED_CAMERA=0 # ZED SDK 3.0 and above
ZED_CAMERA_v2_8=0 # ZED SDK 2.X
DEBUG=1 # for running video
```
3. Compile
```console=
make
```
<font color="#B24B42">Update 2021.03.23:</font> If the **following** error occurs during compilation, see [this post](https://blog.csdn.net/qq_38147884/article/details/114821426) for a fix.
```console
Makefile:186: recipe for target 'obj/network_kernels.o' failed
```
Thanks to @Allian for sharing the fix in the comments!
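If you prefer not to edit the Makefile by hand, the same flag changes can be applied with sed. This is only a sketch and assumes the flags still hold the upstream defaults (=0) before you run it:
```console=
# Flip the build flags in place (upstream defaults in AlexeyAB/darknet are 0)
sed -i 's/^GPU=0/GPU=1/' Makefile
sed -i 's/^CUDNN=0/CUDNN=1/' Makefile
sed -i 's/^CUDNN_HALF=0/CUDNN_HALF=1/' Makefile
sed -i 's/^OPENCV=0/OPENCV=1/' Makefile
sed -i 's/^LIBSO=0/LIBSO=1/' Makefile
sed -i 's/^DEBUG=0/DEBUG=1/' Makefile
```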
### c. Test YOLOv4
1. Download the pre-trained weights
```console=
wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights
```
2. Test YOLOv4 (on a single image)
:::warning
Note: if you are not connected to a desktop (GUI) environment, add the -dont_show flag to the command to disable the result window (otherwise it will throw an error because no display can be found).
:::
>[Parameter reference](https://www.cnblogs.com/xieqi/p/9818056.html)
-i/-gpu: select a single GPU, default 0, e.g. -gpu 2
-gpus: select multiple GPUs, default 0, e.g. -gpus 0,1,2
-thresh: only show bounding boxes whose detection confidence is greater than or equal to [-thresh]
```console=
./darknet detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights data/person.jpg -i 0 -thresh 0.25
```
3. Check the result
If the test succeeds, a predictions.jpg file is generated in the current directory.
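If the container has no GUI, one way to look at the result is to copy the file back to the host. This is just a sketch; it assumes the container is named yolov4 as in step 1 and that darknet was cloned under /workspace:
```console=
# Run this on the HOST, not inside the container:
# copy predictions.jpg out of the container into the current directory
docker cp yolov4:/workspace/darknet/predictions.jpg .
```
Since /workspace is bind-mounted to the host working directory in the docker run command above, the file should also already be visible there directly.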
## 4. Train on Your Own Dataset
The following steps prepare these files:
* ./yolov4.conv.137
* ./yolo-obj.cfg
* ./data/obj.names
* ./data/obj.data
* ./data/obj/ (all images and labels)
* ./data/train.txt
### (1). Download the pre-trained weights (yolov4.conv.137)
Download the file and put it in the project root directory
```console=
wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.conv.137
```
### (2). Create yolo-obj.cfg
1. Copy the contents of ./cfg/yolov4-custom.cfg into ./yolo-obj.cfg
    * You can create a new empty file (yolo-obj.cfg) and paste in the contents of yolov4-custom.cfg
    * Or copy the file to the darknet root directory and rename it
2. **Line 6:** change `batch` to [batch=64](https://github.com/AlexeyAB/darknet/blob/0039fd26786ab5f71d5af725fc18b3f521e7acfd/cfg/yolov3.cfg#L3)
3. **Line 7:** change `subdivisions` to [subdivisions=16](https://github.com/AlexeyAB/darknet/blob/0039fd26786ab5f71d5af725fc18b3f521e7acfd/cfg/yolov3.cfg#L4)
4. **Line 20:** change `max_batches` to classes*2000 (but not less than the number of training images, and not less than 6000), e.g. [max_batches=6000](https://github.com/AlexeyAB/darknet/blob/0039fd26786ab5f71d5af725fc18b3f521e7acfd/cfg/yolov3.cfg#L20) if you train for 3 classes
5. **Line 22:** change `steps` to 80% and 90% of max_batches, e.g. [steps=4800,5400](https://github.com/AlexeyAB/darknet/blob/0039fd26786ab5f71d5af725fc18b3f521e7acfd/cfg/yolov3.cfg#L22)
6. **Lines 8-9:** set the network size to width=416 height=416, or any other multiple of 32
7. **Lines 970, 1058, 1146:** change `classes=80` to your number of objects in each of the 3 [yolo] layers
8. **Lines 963, 1051, 1139:** change `filters=255` to filters=(classes + 5)x3 in the 3 [convolutional] layers before each [yolo] layer; keep in mind it only has to be the last [convolutional] before each [yolo] layer
9. (Not used by default; can be ignored) when using [Gaussian_yolo] layers, change `filters=57` to filters=(classes + 9)x3 in the 3 [convolutional] layers before each [Gaussian_yolo] layer
So if classes=1 then it should be filters=18; if classes=2 then write filters=21. A full worked example for a 2-class setup is shown below.
(Do not literally write filters=(classes + 5)x3 in the cfg file)
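As a concrete worked example, for a 2-class dataset (matching the obj.names example in the next step) the modified values in yolo-obj.cfg would end up roughly like this; treat it as a sketch, since the line positions refer to yolov4-custom.cfg and may shift between versions:
```console=
batch=64
subdivisions=16
width=416
height=416
# classes*2000 = 4000, raised to the minimum of 6000
max_batches=6000
# 80% and 90% of max_batches
steps=4800,5400
# in each of the 3 [yolo] layers:
classes=2
# in the last [convolutional] before each [yolo] layer: (2 + 5) * 3 = 21
filters=21
```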
### (3). Create obj.names
Create an obj.names file in ./data/ with one object name per line, for example:
```console=
people
car
```
### (4). Create obj.data
Create an obj.data file in ./data/, where classes = the number of object classes
```console=
classes= 2
train = data/train.txt
# Note: if you do not need to compute mAP, or you have no test.txt file, change test.txt on the next line to train.txt
valid = data/test.txt
names = data/obj.names
backup = backup/
```
### (5). Images and labels
Put all images and their label files in ./data/obj/, for example:
```console
img1.jpg
img1.txt
```
img1.txt should contain lines like the ones below, where each line is `<object-class> <x_center> <y_center> <width> <height>` with coordinates normalized to the image width and height:
```console
1 0.716797 0.395833 0.216406 0.147222
0 0.687109 0.379167 0.255469 0.158333
1 0.420312 0.395833 0.140625 0.166667
```
### (6). Create train.txt
Create train.txt in ./data/, containing the paths of all images relative to the darknet executable, for example (one way to generate it is sketched after the example):
```console
data/obj/img1.jpg
data/obj/img2.jpg
data/obj/img3.jpg
```
### (7). Start training!
```console=
./darknet detector train data/obj.data yolo-obj.cfg yolov4.conv.137 -dont_show -mjpeg_port 8090 -map
```
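During training, darknet periodically writes weight snapshots into the backup/ directory configured in obj.data. If training is interrupted, it can usually be resumed from the last snapshot instead of the pre-trained weights, roughly like this:
```console=
# Resume training from the most recent snapshot (saved as <cfg-name>_last.weights)
./darknet detector train data/obj.data yolo-obj.cfg backup/yolo-obj_last.weights -dont_show -mjpeg_port 8090 -map
```
The -mjpeg_port 8090 flag is what serves the live training chart on the port 8090 mapped in step 1, and -map periodically evaluates mAP on the valid set from obj.data and adds it to that chart.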
### (8). Test the results
If you have a GUI environment, remove the trailing -dont_show to display the output window
#### Image (test.jpg)
> If detection succeeds, predictions.jpg will be produced
```console=
./darknet detector test data/obj.data yolo-obj.cfg backup/yolo-obj_final.weights test.jpg -dont_show
```
#### Video (test.mp4)
> If detection succeeds, rosio.avi will be produced
```console=
./darknet detector demo data/obj.data yolo-obj.cfg backup/yolo-obj_final.weights test.mp4 -out_filename rosio.avi -dont_show
```
## 5. Start a Remote Desktop Environment (VNC Server) => Optional
Running a VNC server lets you use a desktop environment from a command-line-only setup, for example to view the training results.
* If you set up the Docker environment yourself, follow [these instructions](https://hackmd.io/e1vCtjb0RzyntxhFv_LVpA) to install it on your own
* If you use the image I provide, the VNC server and the xfce desktop are pre-installed, so first run the command below to change the VNC login password
(1). Change the VNC server password:
```console=
vncpasswd
```
(2). Start the VNC server manually:
```console=+
vncserver -geometry 1440x900
```
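Once the server is up, connect from your own machine with a VNC client. A sketch, assuming display :1 (which corresponds to port 5901, the port mapped in step 1) and that a client such as TigerVNC's vncviewer is installed locally:
```console=
# Run on your local machine, not in the container.
# Replace <host-ip> with the IP of the machine running the container;
# display :1 maps to TCP port 5901.
vncviewer <host-ip>:1
```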
Troubleshooting: [VNC is running, so why can't I connect!?](http://phorum.study-area.org/index.php?topic=39470.0)
> If you have ever started more than 5 servers, extend the numbers below accordingly
```console=+
vncserver -kill :1
vncserver -kill :2
vncserver -kill :3
vncserver -kill :4
vncserver -kill :5
rm /tmp/.X11-unix/X1
rm /tmp/.X11-unix/X2
rm /tmp/.X11-unix/X3
rm /tmp/.X11-unix/X4
rm /tmp/.X11-unix/X5
rm /tmp/.X1-lock
rm /tmp/.X2-lock
rm /tmp/.X3-lock
rm /tmp/.X4-lock
rm /tmp/.X5-lock
vncserver -geometry 1440x900
```
### References:
- [A Gentle Introduction to YOLO v4 for Object detection in Ubuntu 20.04](https://robocademy.com/2020/05/01/a-gentle-introduction-to-yolo-v4-for-object-detection-in-ubuntu-20-04/)
- [Training and testing the C version of YOLOv4 on your own dataset](https://zhuanlan.zhihu.com/p/137297186)
---
<br><br><br><br><br><br><br><br><br>
<br><br><br><br><br><br><br><br><br>
## Below are my own commands, for reference only
```console!
docker run --gpus all --ipc=host -it -v /mnt/MIL/neverleave0916/code:/workspace -p 1024:8888 -p 5901:5901 -p 8070:8070 -p 8090:8090 --name nl-yolov4 nvcr.io/nvidia/pytorch:20.03-py3
```
## Below are failed approaches, for reference only
Building CMake requires OpenSSL
```console=+
apt-get install libssl-dev
```
CMake: CMake >= 3.8 is required. Get CMake [here](https://link.zhihu.com/?target=https%3A//cmake.org/download/), download the file listed under "Unix/Linux Source (has \n line feeds)", and build and install it with the following commands
```console=+
wget https://github.com/Kitware/CMake/releases/download/v3.17.2/cmake-3.17.2.tar.gz
tar xvzf cmake-3.17.2.tar.gz
cd cmake-3.17.2
./configure
make -j
make install
```
```console=+
cd ..
```
pip install opencv_python-4.2.0.34-cp36-cp36m-manylinux1_x86_64.whl
https://stackoverflow.com/questions/47647035/cmake-prefix-path-doesnt-help-cmake-in-finding-qt5/47647645
apt-get install qtbase5-dev
apt-get install qtdeclarative5-dev
OpenCV: OpenCV >= 2.4 is required. Get OpenCV [here](https://link.zhihu.com/?target=https%3A//opencv.org/releases/), choose the Sources download, then build and install it with the following commands
apt-get install zip
```console=+
#wget https://github.com/opencv/opencv/archive/4.3.0.zip
# get opencv-4.2.0.zip
unzip opencv-4.2.0.zip # replace with the file name you downloaded; same below
cd opencv-4.2.0
mkdir build
cd build
cmake ..
make -j
make install
```
```console=+
cd ../../
```
## YOLO
Get the whole darknet folder ready first
```console=+
cd darknet
./build.sh
./darknet detector train data/obj.data yolo-obj.cfg yolov4.conv.137
```
apt-get install libgtk2.0-dev
apt-get install pkg-config
apt-get install libopencv-dev python-opencv
apt-get install build-essential cmake pkg-config
jupyter notebook --allow-root --no-browser --ip=0.0.0.0
### Couldn't open file: /backup//yolo-obj_100.weights
[Couldn't open file: /backup//yolo-obj_100.weights](https://github.com/pjreddie/darknet/issues/416)
Manually create the folder in the root directory
```console=+
mkdir backup
```
# Testing
./darknet detector test data/obj.data yolo-obj.cfg backup/yolo-obj_final.weights test.jpg -dont_show
./darknet detector demo data/obj.data yolo-obj.cfg backup/yolo-obj_final.weights test.mp4 -out_filename rosio.avi -dont_show
https://medium.com/@yanweiliu/nvidia-jetson-tx2%E5%AD%B8%E7%BF%92%E7%AD%86%E8%A8%98-%E4%B8%89-%E5%AE%89%E8%A3%9Dopencv-c62e2435ad57
To run detection on a video, you must have OpenCV (OPENCV=1 in the Makefile),
and the command changes:
# Detect a video
$ ./darknet detector demo cfg/coco.data cfg/yolov3.cfg yolov3.weights data/bird.mp4
# Detect an image
$ ./darknet detector test cfg/voc.data cfg/yolov3-voc.cfg yolov3.weights data/bird.jpg