# YOLOX: Exceeding YOLO Series in 2021

> [name=JayHsu][time=Thu, Nov 18, 2021 2:19 PM]

[TOC]

## Install

```shell=
git clone git@github.com:Megvii-BaseDetection/YOLOX.git
cd YOLOX
pip3 install -U pip && pip3 install -r requirements.txt
python3 setup.py develop
```

## Inference

1. Copy the `YOLOX/yolox` folder downloaded during install into the project folder
```shell=
scp -r YOLOX/yolox face_detect/
```
2. Import the required packages
```python=
import torch
from yolox.data.data_augment import ValTransform
from yolox.data.datasets import COCO_CLASSES
from yolox.utils import postprocess as yolox_postprocess
```
3. Load the model
```python=
# yolox_s_Exp is the Exp class from the yolox_s exp file
exp = yolox_s_Exp()
model = exp.get_model()
model.cuda()
model.eval()
ckpt_file = 'pretrain/yolox_s.pth'
ckpt = torch.load(ckpt_file, map_location="cpu")
model.load_state_dict(ckpt["model"])
```
4. Inference
```python=
img = img.cuda()
outputs = self.model(img)
outputs = yolox_postprocess(
    outputs, self.num_classes, self.confthre,
    self.nmsthre, class_agnostic=True)
```
5. Sample code: http://10.109.6.14:9999/notebooks/Jay/face_detect/yoloX_Inference.ipynb

## Training on custom data

1. Prepare the dataset. It must be in COCO format, and the folders must follow this fixed layout:
    - dataset folder
        - annotation folder
            - train_label.json
            - val_label.json
        - train img folder
        - val img folder
    
2. Prepare the config (EXP) file. Edit the template that matches the chosen model:
    - `data_dir`: location of the dataset folder
    - `train_ann`: train annotations (must be inside the annotation folder)
    - `val_ann`: val annotations (must be inside the annotation folder)
    - `tr_img_folder`: train images
    - `val_img_folder`: val images
    - `self.num_classes`: number of classes
```python=
self.data_dir = '/mnt/hdd1/Data/YOLOX/coco128'
self.train_ann = "instances_train2017.json"
self.val_ann = "instances_val2017.json"
self.tr_img_folder = 'images'
self.val_img_folder = 'images'
```
Config template: http://10.109.6.14:9999/edit/Jay/YOLOX/exps/test/yolox_s.py
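A fuller sketch of such an EXP file, assuming the custom template linked above (the `tr_img_folder`/`val_img_folder` attributes come from that template, not stock YOLOX; paths and the class count are placeholders):

```python
# Sketch of a custom EXP file (e.g. exps/test/yolox_s.py); values are examples.
import os
from yolox.exp import Exp as MyExp

class Exp(MyExp):
    def __init__(self):
        super().__init__()
        # yolox-s model scaling factors
        self.depth = 0.33
        self.width = 0.50
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
        # dataset settings (see the field descriptions above)
        self.data_dir = '/mnt/hdd1/Data/YOLOX/coco128'
        self.train_ann = "instances_train2017.json"
        self.val_ann = "instances_val2017.json"
        self.tr_img_folder = 'images'
        self.val_img_folder = 'images'
        self.num_classes = 80  # set to your number of classes
```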
3. Train
```shell=
#cd /home/tpe-aa-02/AA/Jay/YOLOX
cd /mnt/hdd1/AA/Jay/YOLOX
export PYTHONPATH="${PYTHONPATH}:/mnt/hdd1/AA/Jay/YOLOX"
python3 tools/train.py -f exps/test/yolox_s.py -d 1 -b 32 --fp16 -o -c /mnt/hdd1/Model/YOLOX/pretrain/yolox_s.pth

#start from latest checkpoint
python3 tools/train.py -f exps/test/yolox_s.py -d 1 -b 32 --fp16 -o -c /home/tpe-aa-02/AA/Jay/YOLOX/YOLOX_outputs/yolox_s/last_epoch_ckpt_tmp.pth

#TWCC
python3 tools/train.py -f exps/test/yolox_m.py -d 1 -b 48 --fp16 -o -c pretrain/yolox_m.pth

#DCT
python3 tools/train.py -f exps/test/yolox_s_dct.py -d 1 -b 32 --fp16 -o -c /mnt/hdd1/Model/YOLOX/pretrain/yolox_s.pth
python3 tools/train.py -f exps/test/yolox_tiny_dct.py -d 1 -b 32 --fp16 -o -c /mnt/hdd1/Model/YOLOX/pretrain/yolox_tiny.pth
python3 tools/train.py -f exps/test/nano_dct.py -d 1 -b 32 --fp16 -o -c /mnt/hdd1/Model/YOLOX/pretrain/yolox_nano.pth
python3 tools/train.py -f exps/test/yolox_m_dct.py -d 1 -b 32 --fp16 -o -c /mnt/hdd1/Model/YOLOX/pretrain/yolox_m.pth

#IO Location
python3 tools/train.py -f exps/test/yolox_s_ioL.py -d 1 -b 8 --fp16 -o -c /mnt/hdd1/Model/YOLOX/pretrain/yolox_s.pth

#Flower
python3 tools/train.py -f exps/test/yolox_s_flower.py -d 1 -b 32 --fp16 -o -c /home/tpe-aa-02/AA/Jay/YOLOX/YOLOX_outputs/yolox_s_flower/best_ckpt.pth
```
    - `-f`: config file path
    - `-c`: pretrain model path
    - `-d`: number of GPUs to use
    - `-b`: batch size
    - `-o`: occupy GPU memory first for training
4. Trained models are saved under
```shell=
YOLOX/YOLOX_outputs
```

## Appendix A - YOLOX Performance

- Official numbers



## Appendix B - Error Fix

**1. 'numpy.float64' object cannot be interpreted as an integer**
```=
#the following error appears during training
TypeError: 'numpy.float64' object cannot be interpreted as an integer
```
Fix per this issue: https://github.com/Megvii-BaseDetection/YOLOX/issues/435
Edit `./.local/lib/python3.6/site-packages/pycocotools/cocoeval.py`

**2. out of memory**
Change the following training settings: `-d 1 -b 8`
- `-d 1`: use only one GPU
- `-b 8`: batch size 8
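For error 1 above, the linked issue's fix amounts to casting the float count passed to `np.linspace` in `cocoeval.py` to `int`. A minimal reproduction of the corrected lines (a paraphrase of the issue's fix, not verified against every pycocotools version):

```python
import numpy as np

# pycocotools passes a numpy float as `num`, which newer numpy versions
# reject with the TypeError above; wrapping it in int() fixes it.
iouThrs = np.linspace(.5, 0.95, int(np.round((0.95 - .5) / .05)) + 1, endpoint=True)
recThrs = np.linspace(.0, 1.00, int(np.round((1.00 - .0) / .01)) + 1, endpoint=True)
print(len(iouThrs), len(recThrs))  # 10 101
```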
**3. ModuleNotFoundError: No module named 'yolox'**
```shell=
#!export PYTHONPATH="${PYTHONPATH}:/home/tpe-aa-02/AA/Jay/YOLOX"
!export PYTHONPATH="${PYTHONPATH}:/mnt/hdd1/AA/Jay/YOLOX"
```


## Appendix C: training config

1. Select a GPU

2. Use multiple GPU cards


## Appendix D: annotation format conversion

1. VOC to COCO
    - https://blog.csdn.net/m_buddy/article/details/90348194
    - http://10.109.6.14:9999/notebooks/x2coco/voc2coco.ipynb
2. YOLO to COCO
    - http://10.109.6.14:9999/notebooks/x2coco/yolo2coco.ipynb

## Appendix E: merging annotation files

- http://10.109.6.14:9999/notebooks/x2coco/combine_json.ipynb#Combine
```python=
# 1. Read Face & Person JSON
# 2. Combine Categories
# 3. Combine Annotations
# 4. Combine Images
# 5. Save Annotation
```

## Appendix F: Raspberry Pi tips

- Install PyTorch
    - https://blog.csdn.net/phker/article/details/118190816
    - https://torch.kmtea.eu/whl/stable-cn.html
- ONNX export
    - https://github.com/Megvii-BaseDetection/YOLOX/tree/main/demo/ONNXRuntime
```shell=
python3 tools/export_onnx.py --output-name yolox_nano_dct.onnx -f exps/test/nano_dct.py -c YOLOX_outputs/nano_dct/best_ckpt.pth
```
- Convert to ncnn
    - https://github.com/Megvii-BaseDetection/YOLOX/tree/main/demo/ncnn/cpp
    - https://zhuanlan.zhihu.com/p/391788686
- Run ncnn inference from Python
    - Follow the yolov5 approach: https://github.com/Tencent/ncnn/blob/master/python/ncnn/model_zoo/yolov5.py
- FPS test
    
- Build the ncnn yolox executable
    - https://blog.csdn.net/qq_39056987/article/details/119258569
    - Edit CMakeLists.txt
```shell=
vi /home/tpe-aa-01/AA/Jay/ncnn/examples/CMakeLists.txt
#add the line: ncnn_add_example(yolox)
```
    
    - Edit yolox.cpp
        - Update the model path
        - Set vulkan to false
```shell=
vi /home/tpe-aa-01/AA/Jay/ncnn/examples/yolox.cpp
```
    
    - Build
```shell=
cd /home/tpe-aa-01/AA/Jay/ncnn/build
cmake ..
make
```
- Output: the yolox executable
```shell=
ls /home/tpe-aa-01/AA/Jay/ncnn/build/examples/yolox
```

## Reference

1. Pretrain weights: https://github.com/Megvii-BaseDetection/YOLOX
2. Paper: https://arxiv.org/abs/2107.08430
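The merge steps listed in the Appendix E notebook can be sketched as follows. This is a minimal illustrative version, not the notebook's actual code: `combine_coco` is a hypothetical helper, and it assumes both files use positive integer ids so that offsetting by the first file's maxima keeps ids unique.

```python
import json

def combine_coco(a, b):
    """Merge two COCO annotation dicts (e.g. face + person).

    Ids from the second dict are offset by the first dict's maxima so
    category/image/annotation ids stay unique after the merge.
    """
    cat_off = max(c["id"] for c in a["categories"])
    img_off = max(i["id"] for i in a["images"])
    ann_off = max(n["id"] for n in a["annotations"])
    return {
        "categories": a["categories"]
            + [{**c, "id": c["id"] + cat_off} for c in b["categories"]],
        "images": a["images"]
            + [{**i, "id": i["id"] + img_off} for i in b["images"]],
        "annotations": a["annotations"]
            + [{**n, "id": n["id"] + ann_off,
                "image_id": n["image_id"] + img_off,
                "category_id": n["category_id"] + cat_off}
               for n in b["annotations"]],
    }

# Tiny stand-ins for the face and person JSON files (steps 1-4)
face = {"categories": [{"id": 1, "name": "face"}],
        "images": [{"id": 1, "file_name": "a.jpg"}],
        "annotations": [{"id": 1, "image_id": 1, "category_id": 1}]}
person = {"categories": [{"id": 1, "name": "person"}],
          "images": [{"id": 1, "file_name": "b.jpg"}],
          "annotations": [{"id": 1, "image_id": 1, "category_id": 1}]}

merged = combine_coco(face, person)

# Step 5: save the merged annotation file
with open("combined_label.json", "w") as f:
    json.dump(merged, f)
```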