# YoloX
###### tags: `Object Detection` `python`
[TOC]
## Environment Setup
### Yolox repository
```
git clone https://github.com/willy541222/YOLOX.git
```
:::info
It is recommended to clone the repository into a path that contains no Chinese characters.
:::
Change to the YOLOX directory.
```
cd YOLOX
```
### Pytorch
==PyTorch must be installed first.==
Download the version that matches your CUDA installation from the PyTorch official website.
* https://pytorch.org/
### Requirements
```
pip install -r requirements.txt
```
### Pycocotools
```
pip install pycocotools-windows
```
### Nvidia-apex
```
git clone https://github.com/NVIDIA/apex
```
Go to the apex directory and run:
```
python setup.py install
```
### Download the weights
Download the weights from the Benchmark section of https://github.com/willy541222/YOLOX.
## YOLOX Folder structure
```
├── datasets              # training datasets
├── assets                # test images
├── YOLOX_outputs         # outputs (detection results, weights, log files)
├── demo                  # conversion tools for various deployment platforms
├── docs                  # documentation from the YOLOX GitHub repo
├── exps                  # experiment description files written for custom configurations
├── tools                 # scripts for demo detection, training, and evaluation
├── requirements.txt      # required modules
├── Random_crop_data_iou.py          # generates and saves N random IOU-based crops per image
├── create_single_object_data_iou.py # generates and saves every IOU-based crop per image
├── sliding_windows.py               # demonstrates sliding windows over a single image and saves each window
├── Resize_and_check_bboxes.py       # resizes and validates the bboxes in the XML annotations
├── voc_txt.py                       # tool that generates the training and validation splits
├── yolox                 # main package (training, model definition, data processing, evaluation); base libraries for training and inference
│   ├── core
│   ├── data              # data-handling modules (augmentation, preprocessing, etc.)
│   ├── evaluators        # evaluation
│   ├── exp               # base parameter definitions used by training and inference
│   ├── layers
│   ├── models
│   ├── utils             # modules used by training and inference (visualization, logging, checkpoint save/load, bounding boxes, etc.)
```
## YOLOX Architecture
### Family Table
| Members | Depth | Width | Params |
| ------- | ----- | ----- | ---------- |
| x | 1.33 | 1.25 | 99,071,455 |
| l | 1 | 1 | 54,208,895 |
| m       | 0.67  | 0.75  | 25,326,495 |
| s | 0.33 | 0.5 | 8,937,682 |
| Tiny | 0.33 | 0.375 | 5,055,855 |
| Nano | 0.33 | 0.25 | 896,754 |
Depth: controls the depth of CSPDarknet, i.e. the number of Bottleneck blocks.
```python=
base_depth = max(round(Depth * 3), 1)
```
Width: controls the output channel width of CSPDarknet.
```python=
base_channels = int(Width * 64)
```
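As a sanity check, the two scaling formulas can be evaluated against the family table above (the multipliers are taken from the table; the parameter counts come from the full model, not these formulas):

```python
# Evaluate the Depth and Width scaling formulas for each YOLOX family
# member, using the multipliers from the family table.
def base_depth(depth_mult):
    # number of Bottleneck blocks in the base CSP stage
    return max(round(depth_mult * 3), 1)

def base_channels(width_mult):
    # output channel width of the base CSPDarknet stage
    return int(width_mult * 64)

family = {
    "x":    (1.33, 1.25),
    "l":    (1.00, 1.00),
    "m":    (0.67, 0.75),
    "s":    (0.33, 0.50),
    "tiny": (0.33, 0.375),
    "nano": (0.33, 0.25),
}

for name, (d, w) in family.items():
    print(f"{name:>4}: base_depth={base_depth(d)}, base_channels={base_channels(w)}")
```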
### Architecture Figure
The Nano variant is shown as an example.

### PAFPN
**Path Aggregation Feature Pyramid Network**

### Head

### Component

## Train Own Dataset
### File Type
The annotation files must be ==xml== and the images must be ==jpg==.
### Datasets
Manually create the VOCdevkit, VOC2007, Annotations, JPEGImages, ImageSets, and Main folders under YOLOX/datasets/. The layout is shown below.
```
├── datasets
│ ├── VOCdevkit
│ │ ├── VOC2007
│ │ │ ├── Annotations # put the xml files here
│ │ │ ├── JPEGImages # put the images here
│ │ │ ├── ImageSets
│ │ │ │ ├── Main
```
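Instead of creating the folders by hand, the same layout can be created with a short script (paths assumed relative to the YOLOX repo root):

```python
# Create the VOC-style dataset layout expected under YOLOX/datasets.
import os

root = os.path.join("datasets", "VOCdevkit", "VOC2007")
for sub in ("Annotations", "JPEGImages", os.path.join("ImageSets", "Main")):
    os.makedirs(os.path.join(root, sub), exist_ok=True)
```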
Running the following command automatically generates the required txt files; the resulting layout is shown below.
```
python voc_txt.py D:/YOLOX/datasets/VOCdevkit
```
```
├── datasets
│ ├── VOCdevkit
│ │ ├── VOC2007
│ │ │ ├── Annotations # put the xml files here
│ │ │ ├── JPEGImages # put the images here
│ │ │ ├── ImageSets
│ │ │ │ ├── Main
│ │ │ │ │ ├── test.txt # 10%
│ │ │ │ │ ├── trainval.txt # 90%
│ │ │ │ │ ├── train.txt # 80%
│ │ │ │ │ ├── val.txt # 10%
```
```graphviz
digraph hierarchy {
nodesep=1.0 // increases the separation between nodes
node [color=Red,fontname=Courier,shape=box] //All nodes will this shape and colour
edge [color=Blue, style=dashed] //All the lines look like this
Dataset->{trainval test}
trainval->{train val}
{rank=same;trainval test}
}
```
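voc_txt.py itself is not reproduced here, but a minimal sketch of the split it produces (assuming a simple random shuffle of the annotation ids) looks like this:

```python
# Split annotation ids into trainval/test (90/10) and then
# train/val (80/10 of the total), mirroring the txt files above.
import random

ids = [f"{i:04d}" for i in range(100)]  # stand-in for the xml file stems
random.seed(0)
random.shuffle(ids)

n = len(ids)
trainval, test = ids[: int(n * 0.9)], ids[int(n * 0.9):]
train, val = trainval[: int(n * 0.8)], trainval[int(n * 0.8):]

print(len(trainval), len(test), len(train), len(val))
```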
### Modify Training Params
==YOLOX/yolox/exp/yolox_base.py==
This file contains the default training parameters, including the model, data augmentation, training, testing, and dataloader configuration.
* The ==Depth and Width== of the model must be modified to train the corresponding variant.
| | Depth | Width |
| ---- | ----- | ----- |
| x | 1.33 | 1.25 |
| l | 1 | 1 |
| m    | 0.67  | 0.75  |
| s | 0.33 | 0.5 |
| Nano | 0.33 | 0.25 |
| Tiny | 0.33 | 0.375 |
* Create a new experiment description file to define custom parameters.
For an example, see: ==YOLOX/exps/example/yolox_voc/yolox_voc_s.py==
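A minimal sketch of such an experiment file, following the pattern of yolox_voc_s.py. The class and attribute names assume the yolox package from the Megvii repo; treat this as a config fragment to adapt, not verified code:

```python
# Custom experiment description file (sketch).
import os
from yolox.exp import Exp as MyExp  # base Exp class from the yolox package

class Exp(MyExp):
    def __init__(self):
        super().__init__()
        self.num_classes = 1      # number of classes in the custom dataset
        self.depth = 0.33         # Depth multiplier (yolox-s values shown)
        self.width = 0.50         # Width multiplier (yolox-s values shown)
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
```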
### Train the custom dataset with the yolox_s model.
```
python tools/train.py -f exps/example/yolox_voc/yolox_voc_s.py -d 1 -b 8 --fp16 -o -c ./yolox_s.pth
```
### Train the custom dataset with the yolox_nano model.
```
python tools/train.py -f exps/example/yolox_voc_nano/yolox_voc_nano.py -d 1 -b 16 --fp16 -o -c ./yolox_nano.pth
```
### Resume training from the last checkpoint.
```
python tools/train.py -f exps/example/yolox_landing_platform_nano/yolox_landing_platform_nano.py -d 1 -b 16 --fp16 -o -c ./YOLOX_outputs/yolox_voc_nano/best_ckpt.pth.tar
```
## Evaluation
```
python tools/eval.py -n yolox-tiny -c ./YOLOX_outputs/yolox_voc_s/latest_ckpt.pth.tar -b 64 -d 1 --conf 0.001 -f ./exps/example/yolox_voc/yolox_voc_s.py
```

```
python tools/eval.py -n yolox-nano -c ./YOLOX_outputs/yolox_voc_nano/latest_ckpt.pth.tar -b 64 -d 1 --conf 0.001 -f ./exps/example/yolox_voc_nano/yolox_voc_nano.py
```

## Run detection
### Image
```
python tools/demo.py image -f ./exps/example/yolox_voc/yolox_voc_s.py -c {MODEL_PATH} --path {TEST_IMAGE_PATH} --conf 0.25 --nms 0.45 --tsize 640 --save_result --device gpu
```
| Parameter | Description |
| --------------- | ---------------------------- |
| MODEL_PATH | path to the trained weight checkpoint file |
| TEST_IMAGE_PATH | path to the image to run detection on |
* **yolox-s**
```
python tools/demo.py image -f ./exps/example/yolox_voc/yolox_voc_s.py -c YOLOX_outputs/yolox_voc_s/best_ckpt.pth.tar --path ./940.jpg --conf 0.25 --nms 0.45 --tsize 640 --save_result --device gpu
```
* **yolox-nano**
```
python tools/demo.py image -f ./exps/example/yolox_voc_nano/yolox_voc_nano.py -c YOLOX_outputs/yolox_voc_nano/best_ckpt.pth.tar --path ./1067.jpg --conf 0.25 --nms 0.45 --tsize 640 --save_result --device gpu
```
### Video
* **yolox-s**
```
python tools/demo.py video -f ./exps/example/yolox_voc/yolox_voc_s.py -c YOLOX_outputs/yolox_voc/best_ckpt.pth.tar --path ./DJI_0060.MP4 --conf 0.25 --nms 0.45 --tsize 640 --save_result --device gpu
```
## Run Sliding Window Detection
* **yolox-s**
```
python tools/demo_sliding_window.py image -f ./exps/example/yolox_voc/yolox_voc_s.py -c YOLOX_outputs/yolox_voc_s/best_ckpt.pth.tar --path ./940.jpg --conf 0.25 --nms 0.45 --tsize 640 --save_result --device gpu
```
* **yolox-Nano**
```
python tools/demo_sliding_window.py image -f ./exps/example/yolox_landing_platform_nano/yolox_landing_platform_nano.py -c YOLOX_outputs/yolox_landing_platform_nano/best_ckpt.pth.tar --path ./assets/landing_platform/ --conf 0.5 --nms 0.45 --tsize 640 --save_result --device gpu
```
## Visualize Train and Val Response
Change directory to YOLOX_outputs/{logdir} and run the command below.
```
tensorboard --logdir=./
```


## Issues
### No module named yolox
Add the following script to demo.py and train.py:
```python=
import sys
sys.path.append(r'$Path/to/YOLOX')
```
### Install Nvidia-Apex failed in Windows
Go to the apex directory and run the script below.
```
python setup.py install
```
### Pycocotools installation failed in Windows.
```
pip install pycocotools-windows
```
or
```
pip install pycocotools
```
:::info
If it still fails, check your Microsoft Visual Studio C++ version.
It works fine with ==version 2017==.
:::
### AssertionError: Caught AssertionError in DataLoader worker process 0
This error means the input images cannot be found.
:::warning
The dataset path must not contain any Chinese characters!
If your OS is in Chinese, it is recommended to place YOLOX on a non-system drive.
:::
### No such file XXX.xml in evaluation.
Find voc.py and change the annotation path template from:
```python
annopath = os.path.join(rootpath, "Annotations", "{:s}.xml")
```
to
```python
annopath = os.path.join(rootpath, "Annotations", "{}.xml")
```
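A plausible reason this change helps: the `{:s}` format spec only accepts `str` arguments, while the bare `{}` placeholder formats any type (e.g. image ids parsed as integers):

```python
# "{:s}" requires a str argument; "{}" accepts any type.
strict = "Annotations/{:s}.xml"
loose = "Annotations/{}.xml"

print(loose.format(940))      # works even when the id is an int
try:
    strict.format(940)        # int id raises ValueError
except ValueError as e:
    print("strict template failed:", e)
```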

* https://issueexplorer.com/issue/Megvii-BaseDetection/YOLOX/741
### Device '/dev/video0' is busy
Find the process holding the device, inspect it, and kill it (13781 is the example PID reported by fuser):
```
fuser /dev/video0
```
```
ps axl | grep 13781
```
```
kill -9 13781
```
## Reference
* https://github.com/roboflow-ai/YOLOX.git
* https://blog.csdn.net/weixin_44457930/article/details/123009976
* https://github.com/Megvii-BaseDetection/YOLOX
* https://issueexplorer.com/issue/Megvii-BaseDetection/YOLOX/741
* https://bbs.cvmart.net/articles/5347
* https://bbs.cvmart.net/articles/5442
* https://blog.csdn.net/MacKendy/article/details/106310328
* [SimOTA](https://blog.csdn.net/weixin_44751294/article/details/125303278)