# YOLOv4 on Docker in Ubuntu 20.04
- https://iter01.com/526543.html
- https://github.com/ikuokuo/start-yolov4
- If you are not familiar with Docker: https://ithelp.ithome.com.tw/articles/10186431
- Building the OpenCV image uses a lot of space under /var by default; see https://www.gushiciku.cn/pl/gpxj/zh-tw for moving the Docker data directory (a brief sketch follows below)
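The gist of moving Docker's storage, for reference (a sketch; run it after Docker is installed, and adjust the example path */data/docker* to your own disk layout):
```
# point Docker's data directory at a larger disk, then restart the daemon
sudo mkdir -p /data/docker
echo '{ "data-root": "/data/docker" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
```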
Environment:
- yolov4
- docker
- Ubuntu 20.04
- GTX 1050
YOLO is a well-known object detection algorithm. Its full name, You Only Look Once: Unified, Real-Time Object Detection, highlights its characteristics:
- Look Once: a one-stage (one-shot) detector that performs the two object-detection tasks, classification and localization, in a single pass.
- Unified: a unified architecture that provides end-to-end training and inference.
- Real-Time: the original paper reports 45 FPS at 63.4 mAP.
YOLOv4: Optimal Speed and Accuracy of Object Detection, published in April 2020, incorporates many recent CNN optimization techniques. It balances accuracy and speed, and at the time of writing it has the highest accuracy among real-time object detectors.
Papers:
- YOLO: https://arxiv.org/abs/1506.02640
- YOLO v4: https://arxiv.org/abs/2004.10934
Source code:
- YOLO: https://github.com/pjreddie/darknet
- YOLO v4: https://github.com/AlexeyAB/darknet
This article describes how to build and use the official Darknet implementation of YOLOv4 in Docker, and how to select a subset of objects from the MS COCO 2017 dataset to train your own model.
Main contents:
- Prepare the Docker image
- Prepare the COCO dataset
- Run inference with a pretrained model
- Prepare a COCO data subset
- Train your own model and run inference
- References
## Install Docker and the rest of the environment
### Install the Nvidia driver
Installing the Nvidia driver from the graphics drivers PPA is recommended.
```
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
```
Check which Nvidia driver is recommended for your GPU:
```
ubuntu-drivers devices
```
Install the Nvidia driver (the example below is from an RTX 2060):
```
# Ubuntu 16.04: only driver 430 is available (for CUDA < 10.2)
apt-cache search nvidia
sudo apt install nvidia-430
# Ubuntu 18.04: driver 440 is available (for CUDA <= 10.2)
apt-cache search nvidia | grep ^nvidia-driver
sudo apt install nvidia-driver-440
```
> For the CUDA version supported by each driver, see CUDA Compatibility: https://docs.nvidia.com/deploy/cuda-compatibility/index.html .
Finally, reboot with `sudo reboot`. Afterwards, run `nvidia-smi` to print the Nvidia driver information:
```
nvidia-smi
Fri Apr 17 07:31:55 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.82 Driver Version: 440.82 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce RTX 2060 Off | 00000000:01:00.0 Off | N/A |
| N/A 40C P8 5W / N/A | 263MiB / 5934MiB | 3% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1560 G /usr/lib/xorg/Xorg 144MiB |
| 0 1726 G /usr/bin/gnome-shell 76MiB |
| 0 2063 G ...uest-channel-token=10544833948196615517 39MiB |
+-----------------------------------------------------------------------------+
```
> If you plan to install the CUDA Toolkit, first read CUDA Compatibility: https://docs.nvidia.com/deploy/cuda-compatibility/index.html . Note that the CUDA Toolkit installer bundles its own driver version; it is best to install the toolkit and the driver separately, and to get the driver directly from the official download page.
### Install Docker
```
# update the apt package index
sudo apt-get update
# install packages to allow apt to use a repository over HTTPS
sudo apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common
# add Docker’s official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# set up the stable repository
sudo add-apt-repository \
"deb [arch=amd64] https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/ubuntu \
$(lsb_release -cs) \
stable"
# update the apt package index
sudo apt-get update
# install the latest version of Docker Engine and containerd
sudo apt-get install docker-ce docker-ce-cli containerd.io
```
Then make Docker usable as a non-root user:
```
sudo groupadd docker
sudo usermod -aG docker $USER
```
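Log out and back in for the group change to take effect; a quick way to verify (a small sketch):
```
# pick up the new group membership in the current shell (or simply re-login)
newgrp docker
# this should now work without sudo
docker run --rm hello-world
```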
#### References
- Install Docker Engine on Ubuntu
https://docs.docker.com/engine/install/ubuntu/
- Docker CE Tsinghua mirror
https://mirrors.tuna.tsinghua.edu.cn/help/docker-ce/
- Post-installation steps for Linux
https://docs.docker.com/engine/install/linux-postinstall/
### Install nvidia-container-toolkit
```
# add the package repositories
# on Ubuntu 20.10 this fails with <Unsupported distribution>; see https://github.com/NVIDIA/nvidia-docker/issues/1407
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo systemctl restart docker
```
#### Test
```
# Test nvidia-smi with the latest official CUDA image
docker run --gpus all nvidia/cuda:10.0-base nvidia-smi
# Start a GPU enabled container on two GPUs
docker run --gpus 2 nvidia/cuda:10.0-base nvidia-smi
# Starting a GPU enabled container on specific GPUs
docker run --gpus '"device=1,2"' nvidia/cuda:10.0-base nvidia-smi
docker run --gpus '"device=UUID-ABCDEF,1"' nvidia/cuda:10.0-base nvidia-smi
# Specifying a capability (graphics, compute, ...) for my container
# Note this is rarely if ever used this way
docker run --gpus all,capabilities=utility nvidia/cuda:10.0-base nvidia-smi
$ docker run --gpus all nvidia/cuda:10.2-base-ubuntu16.04 nvidia-smi
Unable to find image 'nvidia/cuda:10.2-base-ubuntu16.04' locally
10.2-base-ubuntu16.04: Pulling from nvidia/cuda
976a760c94fc: Pull complete
c58992f3c37b: Pull complete
0ca0e5e7f12e: Pull complete
f2a274cc00ca: Pull complete
708a53113e13: Pull complete
7dde2dc03189: Pull complete
2d21d4aba891: Pull complete
Digest: sha256:1423b386bb4f950d12b3b0f3ad51eba42d754ee73f8fc4a60657a1904993b68c
Status: Downloaded newer image for nvidia/cuda:10.2-base-ubuntu16.04
Fri Apr 24 08:17:26 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.82 Driver Version: 440.82 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce RTX 2060 Off | 00000000:01:00.0 Off | N/A |
| N/A 38C P8 10W / N/A | 523MiB / 5934MiB | 21% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
```
#### References
- nvidia-docker: https://github.com/NVIDIA/nvidia-docker
- nvidia/cuda: https://hub.docker.com/r/nvidia/cuda
## Prepare the Docker image
First, get Docker ready; see: [Docker: Nvidia Driver, Nvidia Docker recommended installation steps](https://mp.weixin.qq.com/s/fOjWV5TUAxRF5Mjj0Y0Dlw) .
Then prepare the image. From bottom to top, the layers are:
- nvidia/cuda: https://hub.docker.com/r/nvidia/cuda
- OpenCV: https://github.com/opencv/opencv
- Darknet: https://github.com/AlexeyAB/darknet
### nvidia/cuda
Prepare the Nvidia CUDA base image. Here we choose CUDA 10.2 rather than the newer CUDA 11, because at the time of writing PyTorch and other frameworks still target 10.2.
Pull the image:
```
docker pull nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04
```
Test the image:
```
docker run --gpus all nvidia/cuda:10.2-cudnn7-devel-ubuntu18.04 nvidia-smi
Sun Aug 8 00:00:00 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.100 Driver Version: 440.100 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce RTX 208... Off | 00000000:07:00.0 On | N/A |
| 0% 48C P8 14W / 300W | 340MiB / 11016MiB | 2% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce RTX 208... Off | 00000000:08:00.0 Off | N/A |
| 0% 45C P8 19W / 300W | 1MiB / 11019MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
```
### OpenCV
Build an OpenCV image on top of the nvidia/cuda image:
```
# run from the root of a clone of https://github.com/ikuokuo/start-yolov4
cd docker/ubuntu18.04-cuda10.2/opencv4.4.0/
docker build \
-t joinaero/ubuntu18.04-cuda10.2:opencv4.4.0 \
--build-arg opencv_ver=4.4.0 \
--build-arg opencv_url=https://gitee.com/cubone/opencv.git \
--build-arg opencv_contrib_url=https://gitee.com/cubone/opencv_contrib.git \
.
```
The Dockerfile can be found here: https://github.com/ikuokuo/start-yolov4/blob/master/docker/ubuntu18.04-cuda10.2/opencv4.4.0/Dockerfile
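Optionally sanity-check the result. The Dockerfile installs OpenCV under */home/opencv-4.4.0* (the same prefix the Darknet image's `LD_LIBRARY_PATH` points at below), so listing that directory inside the image should show the built libraries:
```
# quick check: list the OpenCV libraries baked into the image
docker run --rm joinaero/ubuntu18.04-cuda10.2:opencv4.4.0 ls /home/opencv-4.4.0/lib
```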
### Darknet
#### Edit the Dockerfile
Edit *docker/ubuntu18.04-cuda10.2/opencv4.4.0/darknet/Dockerfile* according to your system:
- macOS: no edits are needed. The Dockerfile can be found here: https://github.com/ikuokuo/start-yolov4/blob/master/docker/ubuntu18.04-cuda10.2/opencv4.4.0/darknet/Dockerfile
- Ubuntu: change it to the following:
```
# syntax=docker/dockerfile:experimental
FROM joinaero/ubuntu18.04-cuda10.2:opencv4.4.0
LABEL maintainer="neslxzhen@gmail.com" \
joinaero.release-date="2020-08-10" \
joinaero.version="0.1.0"
ARG cmake_url=https://github.com/Kitware/CMake/releases/download/v3.18.1/cmake-3.18.1-Linux-x86_64.sh
ARG darknet_url=https://github.com/AlexeyAB/darknet.git
RUN mv /etc/apt/sources.list.d/*.list /home/ \
&& apt update && apt install -y curl \
\
&& cd /home/ && mkdir cmake/ \
&& curl -O -L ${cmake_url} \
&& sh cmake-*.sh --prefix=/home/cmake --skip-license \
&& rm cmake-*.sh \
\
&& mv /home/*.list /etc/apt/sources.list.d/ \
&& apt remove -y curl && apt autoremove -y \
&& rm -rf /var/lib/apt/lists/*
RUN mv /etc/apt/sources.list.d/*.list /home/ \
&& apt update && apt install -y build-essential git \
\
&& cd /home/ \
&& git clone --depth 1 ${darknet_url} \
\
&& mv /home/*.list /etc/apt/sources.list.d/ \
&& apt remove -y git && apt autoremove -y \
&& rm -rf /var/lib/apt/lists/*
RUN echo "export PATH=/home/cmake/bin\${PATH:+:\${PATH}}" >> ~/.bashrc \
&& echo "export LD_LIBRARY_PATH=/home/opencv-4.4.0/lib:/usr/local/cuda/lib64/stubs\${LD_LIBRARY_PATH:+:\${LD_LIBRARY_PATH}}" >> ~/.bashrc
RUN cd /home/darknet/ \
&& make
WORKDIR /home/darknet
# ENTRYPOINT darknet
```
#### Build the image
Build the Darknet image on top of the OpenCV image:
```
cd docker/ubuntu18.04-cuda10.2/opencv4.4.0/darknet/
docker build \
-t joinaero/ubuntu18.04-cuda10.2:opencv4.4.0-darknet \
.
```
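A quick check that the build succeeded: the image's working directory is */home/darknet*, so running the binary with no arguments should print darknet's usage text (assuming your driver supports the image's CUDA version):
```
docker run --rm --gpus all joinaero/ubuntu18.04-cuda10.2:opencv4.4.0-darknet ./darknet
```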
### Prebuilt image
The image above has been pushed to Docker Hub. If your Nvidia driver supports CUDA 10.2, you can pull it directly:
```
docker pull joinaero/ubuntu18.04-cuda10.2:opencv4.4.0-darknet
```
## Prepare the COCO dataset
MS COCO 2017 download page: http://cocodataset.org/#download
Images:
- 2017 Train images [118K/18GB]
http://images.cocodataset.org/zips/train2017.zip
- 2017 Val images [5K/1GB]
http://images.cocodataset.org/zips/val2017.zip
- 2017 Test images [41K/6GB]
http://images.cocodataset.org/zips/test2017.zip
- 2017 Unlabeled images [123K/19GB]
http://images.cocodataset.org/zips/unlabeled2017.zip
Annotations:
- 2017 Train/Val annotations [241MB]
http://images.cocodataset.org/annotations/annotations_trainval2017.zip
- 2017 Stuff Train/Val annotations [1.1GB]
http://images.cocodataset.org/annotations/stuff_annotations_trainval2017.zip
- 2017 Panoptic Train/Val annotations [821MB]
http://images.cocodataset.org/annotations/panoptic_annotations_trainval2017.zip
- 2017 Testing Image info [1MB]
http://images.cocodataset.org/annotations/image_info_test2017.zip
- 2017 Unlabeled Image info [4MB]
http://images.cocodataset.org/annotations/image_info_unlabeled2017.zip
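Only the train/val images and the Train/Val annotations are needed for this article. A minimal download sketch on the host, unpacked into the *$HOME/dataset* root used below (the unzip layout is assumed to match the paths referenced later):
```
mkdir -p $HOME/dataset && cd $HOME/dataset
wget -c http://images.cocodataset.org/zips/train2017.zip
wget -c http://images.cocodataset.org/zips/val2017.zip
wget -c http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip train2017.zip        # -> train2017/
unzip val2017.zip          # -> val2017/
unzip -d annotations_trainval2017 annotations_trainval2017.zip   # -> annotations_trainval2017/annotations/
```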
## Run inference with a pretrained model
Download the pretrained model *yolov4.weights* from https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights .
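For example, on the host (*$HOME/dataset* is the `$DATASET_ROOT` directory mounted into the container below):
```
mkdir -p $HOME/dataset && cd $HOME/dataset
wget -c https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights
```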
Run the image; `$DATASET_ROOT` (*$HOME/dataset*) is the dataset directory on the host that gets mounted into the container:
```
export CONTAINER_NAME=darknet
export DATASET_ROOT=$HOME/dataset
export TARGET_ROOT=/home/dataset
docker run -it --gpus all \
-e DISPLAY \
-e QT_X11_NO_MITSHM=1 \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v $HOME/.Xauthority:/root/.Xauthority \
--name $CONTAINER_NAME \
--mount type=bind,source=$DATASET_ROOT,target=$TARGET_ROOT \
joinaero/ubuntu18.04-cuda10.2:opencv4.4.0-darknet
```
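If the `-show` window later fails to open from inside the container, you may also need to allow local connections to the host's X server first; whether this is required depends on your X11 setup:
```
# on the host, before starting the container (only if X authorization blocks the container)
xhost +local:
```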
After placing *yolov4.weights* and a test image such as *000000001123.jpg* into `$DATASET_ROOT` (*$HOME/dataset*), run inference inside the container with the default cfg:
```
export TARGET_ROOT=/home/dataset
mkdir $TARGET_ROOT/output
./darknet detector test cfg/coco.data cfg/yolov4.cfg $TARGET_ROOT/yolov4.weights \
-thresh 0.25 -ext_output -show -out $TARGET_ROOT/output/result.json \
$TARGET_ROOT/000000001123.jpg
```
Inference result:


## Train your own model and run inference
### Prepare a COCO data subset
The MS COCO 2017 dataset has 80 object labels. We pick the objects we care about and build a smaller subset from them.
- Delete the previous darknet container:
```
export CONTAINER_NAME=darknet
docker container stop $CONTAINER_NAME
docker container rm $CONTAINER_NAME
```
- Also mount *cfg/* so the cfg files can be customized. `$HOST_CFG_Folder` (*$HOME/dataset/cfg*) is an empty directory you create yourself:
```
export CONTAINER_NAME=darknet
export DATASET_ROOT=$HOME/dataset
export TARGET_ROOT=/home/dataset
export HOST_CFG_Folder=$HOME/dataset/cfg
export CFG_Folder=/home/dataset/cfg
docker run -it --gpus all \
-e DISPLAY \
-e QT_X11_NO_MITSHM=1 \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v $HOME/.Xauthority:/root/.Xauthority \
--name $CONTAINER_NAME \
--mount type=bind,source=$DATASET_ROOT,target=$TARGET_ROOT \
--mount type=bind,source=$HOST_CFG_Folder,target=$CFG_Folder \
joinaero/ubuntu18.04-cuda10.2:opencv4.4.0-darknet
```
- Get the sample code:
```
git clone https://github.com/ikuokuo/start-yolov4.git
```
- *scripts/coco2yolo.py*: converts a COCO dataset to the YOLO format
- *scripts/coco/label.py*: lists the object labels in the COCO dataset
- *cfg/coco/coco.names*: edit this to keep only the object labels you want
- On the host, copy the *start-yolov4/cfg* folder into *$HOME/dataset*
- Prepare the dataset; `$OUT_DIR` (*$HOME/dataset/_MyDataset*) is an empty directory you create yourself:
```
cd start-yolov4/
pip install -r scripts/requirements.txt
export DATASET_ROOT=$HOME/dataset
export TARGET_ROOT=/home/dataset
export OUT_DIR=$DATASET_ROOT/_MyDataset
export HOST_CFG_Folder=$HOME/dataset/cfg
# train
python3 scripts/coco2yolo.py \
--coco_img_dir $DATASET_ROOT/train2017/ \
--coco_ann_file $DATASET_ROOT/annotations_trainval2017/annotations/instances_train2017.json \
--yolo_names_file $HOST_CFG_Folder/coco/coco.names \
--output_dir $OUT_DIR/coco2017/ \
--output_name train2017 \
--output_img_prefix $OUT_DIR/coco2017/train2017/
# valid
python3 scripts/coco2yolo.py \
--coco_img_dir $DATASET_ROOT/val2017/ \
--coco_ann_file $DATASET_ROOT/annotations_trainval2017/annotations/instances_val2017.json \
--yolo_names_file $HOST_CFG_Folder/coco/coco.names \
--output_dir $OUT_DIR/coco2017/ \
--output_name val2017 \
--output_img_prefix $OUT_DIR/coco2017/val2017/
```
The generated dataset looks like this:
```
~/dataset/_MyDataset/coco2017/
├── train2017/
│ ├── 000000000071.jpg
│ ├── 000000000071.txt
│ ├── ...
│ ├── 000000581899.jpg
│ └── 000000581899.txt
├── train2017.txt
├── val2017/
│ ├── 000000001353.jpg
│ ├── 000000001353.txt
│ ├── ...
│ ├── 000000579818.jpg
│ └── 000000579818.txt
└── val2017.txt
```
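A quick sanity check on the host that the conversion produced what we expect:
```
# number of training / validation samples listed
wc -l $HOME/dataset/_MyDataset/coco2017/train2017.txt $HOME/dataset/_MyDataset/coco2017/val2017.txt
# every image should have a matching YOLO .txt label file next to it
find $HOME/dataset/_MyDataset/coco2017/train2017 -name '*.jpg' | wc -l
find $HOME/dataset/_MyDataset/coco2017/train2017 -name '*.txt' | wc -l
```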
- Edit *train2017.txt* and *val2017.txt*:
Change every path such as */home/graph/dataset/_MyDataset/coco2017/train2017/000000032773.jpg* (the host path starting with `$DATASET_ROOT`) to */home/dataset/_MyDataset/coco2017/train2017/000000032773.jpg* (the in-container path starting with `$TARGET_ROOT`).
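This can be done with one substitution per file on the host, assuming the lists contain the expanded host prefix as in the example above (a sketch):
```
export DATASET_ROOT=$HOME/dataset
export TARGET_ROOT=/home/dataset
# rewrite the host prefix to the in-container prefix in both file lists
sed -i "s#^$DATASET_ROOT#$TARGET_ROOT#" $DATASET_ROOT/_MyDataset/coco2017/train2017.txt
sed -i "s#^$DATASET_ROOT#$TARGET_ROOT#" $DATASET_ROOT/_MyDataset/coco2017/val2017.txt
```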
### Prepare the required files
In `$HOST_CFG_Folder` (*$HOME/dataset/cfg*):
- *coco/coco.names* (*coco/coco.names.bak* keeps the original 80 objects)
  - Edit it to keep only the desired objects (a worked example follows after this list)
- *coco/yolov4.cfg* (*coco/yolov4.cfg.bak* is the original file)
  - Change the following:
    - `batch=64`, `subdivisions=64` (use 32 for 8-12 GB of GPU VRAM)
    - `width=128`, `height=128` (any value that is a multiple of 32)
    - `classes=<your number of objects>` in each of the 3 `[yolo]` layers
    - `max_batches=<classes*2000>`, but not less than the number of training images and not less than 6000
    - `steps=<max_batches*0.8, max_batches*0.9>`
    - `filters=<(classes+5)*3>` in the 3 `[convolutional]` layers before each `[yolo]` layer
    - ~~`filters=<(classes+9)*3>` in the 3 `[convolutional]` layers before each `[Gaussian_yolo]` layer~~
- *coco/coco.data*
  - If your dataset is under */home/dataset/_MyDataset*, change the file to the following (set `classes` to the number of labels kept in *coco.names*):
```
classes=80
train=/home/dataset/_MyDataset/coco2017/train2017.txt
valid=/home/dataset/_MyDataset/coco2017/val2017.txt
names=/home/dataset/cfg/coco/coco.names
backup=/home/dataset/backup
eval=coco
```
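As a worked example (hypothetical values, consistent with the single-class run whose mAP output appears later in this article): keeping only the `train` label in *coco/coco.names* gives `classes=1`, and the derived settings would be:
```
# coco/coco.names contains a single line:
train

# derived settings for one class:
#   coco/coco.data :  classes=1
#   coco/yolov4.cfg:  classes=1           (in each of the 3 [yolo] layers)
#                     max_batches=6000    (1*2000, raised to the minimum of 6000)
#                     steps=4800,5400     (max_batches*0.8, max_batches*0.9)
#                     filters=18          ((1+5)*3, in the 3 [convolutional] layers before each [yolo] layer)
```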
Prepare the following in `$TARGET_ROOT` (*/home/dataset*):
- csdarknet53-omega.conv.105
  - Download [csdarknet53-omega_final.weights](https://drive.google.com/file/d/18jCwaL4SJ-jOvXrZNGHJ5yz44g9zi8Hm)
  - Then run (inside the container):
```
export TARGET_ROOT=/home/dataset
./darknet partial cfg/csdarknet53-omega.cfg $TARGET_ROOT/csdarknet53-omega_final.weights $TARGET_ROOT/csdarknet53-omega.conv.105 105
```
### Train your own model
- Start training:
```
export CFG_Folder=/home/dataset/cfg
export BACKUP_Folder=/home/dataset/backup
export TARGET_ROOT=/home/dataset
mkdir $BACKUP_Folder
# Training command
./darknet detector train $CFG_Folder/coco/coco.data $CFG_Folder/coco/yolov4.cfg $TARGET_ROOT/csdarknet53-omega.conv.105 -map -dont_show
```
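Anything darknet writes inside the container, such as the training chart, can be copied out to the host with `docker cp` (a sketch; *chart.png* is the filename darknet usually writes the chart to, adjust if yours differs):
```
# on the host: copy a file out of the running container named "darknet"
docker cp darknet:/home/darknet/chart.png .
```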
Training can be interrupted partway through and then resumed like this:
```
export CFG_Folder=/home/dataset/cfg
export BACKUP_Folder=/home/dataset/backup
# Continue training
./darknet detector train $CFG_Folder/coco/coco.data $CFG_Folder/coco/yolov4.cfg $BACKUP_Folder/yolov4_last.weights -map -dont_show
```
*yolov4_last.weights* is updated every 100 iterations.
For multi-GPU training, stop after about 1000 iterations on a single GPU, then continue with the `-gpus 0,1` flag:
```
export CFG_Folder=/home/dataset/cfg
export BACKUP_Folder=/home/dataset/backup
# How to train with multi-GPU
# 1. Train it first on 1 GPU for like 1000 iterations
# 2. Then stop and by using partially-trained model `$BACKUP_Folder/yolov4_1000.weights` run training with multigpu
./darknet detector train $CFG_Folder/coco/coco.data $CFG_Folder/coco/yolov4.cfg $BACKUP_Folder/yolov4_1000.weights -gpus 0,1 -map
```
The training progress is logged as follows:

With the `-map` flag, the chart above also shows mAP as a red line.
Check the model's mAP@IoU=0.50 accuracy:
```
export CFG_Folder=/home/dataset/cfg
export BACKUP_Folder=/home/dataset/backup
./darknet detector map $CFG_Folder/coco/coco.data $CFG_Folder/coco/yolov4.cfg $BACKUP_Folder/yolov4_final.weights
...
Loading weights from $BACKUP_Folder/yolov4_final.weights...
seen 64, trained: 384 K-images (6 Kilo-batches_64)
Done! Loaded 162 layers from weights-file
calculation mAP (mean average precision)...
Detection layer: 139 - type = 27
Detection layer: 150 - type = 27
Detection layer: 161 - type = 27
160
detections_count = 745, unique_truth_count = 190
class_id = 0, name = train, ap = 80.61% (TP = 142, FP = 18)
for conf_thresh = 0.25, precision = 0.89, recall = 0.75, F1-score = 0.81
for conf_thresh = 0.25, TP = 142, FP = 18, FN = 48, average IoU = 75.31 %
IoU threshold = 50 %, used Area-Under-Curve for each unique Recall
mean average precision (mAP@0.50) = 0.806070, or 80.61 %
Total Detection Time: 4 Seconds
```
- `CUDA Error: out of memory: File exists`
  - Re-read [Prepare the required files](#prepare-the-required-files) and check that *yolov4.cfg* was set up as instructed.
- Run inference:
```
export CFG_Folder=/home/dataset/cfg
export BACKUP_Folder=/home/dataset/backup
export TARGET_ROOT=/home/dataset
./darknet detector test $CFG_Folder/coco/coco.data $CFG_Folder/coco/yolov4.cfg $BACKUP_Folder/yolov4_final.weights \
-ext_output -show $TARGET_ROOT/000000001123.jpg
```
Inference result:

## References
- [Train Detector on MS COCO (trainvalno5k 2014) dataset](https://github.com/AlexeyAB/darknet/wiki/Train-Detector-on-MS-COCO-(trainvalno5k-2014)-dataset)
- [How to evaluate accuracy and speed of YOLOv4](https://github.com/AlexeyAB/darknet/wiki/How-to-evaluate-accuracy-and-speed-of-YOLOv4)
- [How to train (to detect your custom objects)](https://github.com/AlexeyAB/darknet#how-to-train-to-detect-your-custom-objects)