# Detection model evaluation

###### tags: `cv_infra`

## Object detection

### Training

Visualization during training is strongly recommended.

Visualization tools:

1. wandb
2. tensorboard
3. visdom

The R&D cloud only provides TensorBoard, but can TensorBoard display real image files, or only scalar/numeric plots?

wandb example:

Metrics to log every epoch (train/val):

- obj_loss
- class_loss
- giou_loss
- output images
- mAP variants
- pixel accuracy (segmentation)

References:

- https://github.com/ultralytics/yolov5/blob/master/tutorial.ipynb
- https://www.jeremyjordan.me/evaluating-image-segmentation-models/ (segmentation)
- https://towardsdatascience.com/metrics-to-evaluate-your-semantic-segmentation-model-6bcb99639aa2 (segmentation)

### Testing

```
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.479  # <--- prune mAP
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.673
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.525
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.326
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.531
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.623
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.368
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.603
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.652  # <--- prune mAR
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.490
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.705
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.793
```

References:

- https://github.com/ultralytics/yolov5
- https://github.com/ultralytics/yolov5/issues/304