# MMAction2

* https://github.com/open-mmlab/mmaction2
* https://github.com/open-mmlab/mmaction2/blob/main/demo/README.md
* https://mmaction2.readthedocs.io/zh-cn/latest/get_started/installation.html
* https://congee524-mmaction2.readthedocs.io/zh-cn/latest/demo.html

---

### Demo test

1. `conda activate openmmlab`
2. `cd D:\mmaction2`
3. `python demo/demo_inferencer.py demo/zelda.mp4 --rec tsn --print-result --vid-out-dir zelda_out --label-file tools/data/kinetics/label_map_k400.txt`

### Dataset browsing

* `python tools/visualizations/browse_dataset.py configs/recognition/tsn/tsn_imagenet-pretrained-r50_8xb32-1x1x3-100e_kinetics400-rgb_copy.py --output-dir browse_out --mode pipeline`

### Training

* `python tools/train.py configs/recognition/tsn/tsn_imagenet-pretrained-r50_8xb32-1x1x3-100e_kinetics400-rgb_copy.py` -> adjust the number of training epochs in the config (see the config sketch after the dataset-preparation section below)

### Testing

* `python tools/test.py configs/recognition/tsn/tsn_imagenet-pretrained-r50_8xb32-1x1x3-100e_kinetics400-rgb_copy.py work_dirs/tsn_imagenet-pretrained-r50_8xb32-1x1x3-100e_kinetics400-rgb_copy/best_acc_top1_epoch_1.pth`

### Dataset locations

* `D:\mmaction2\data\finger` (single hand gestures)
* `D:\mmaction2\data\action` (dataset reworked into continuous actions)

---

### Preparing the dataset

> The dataset must be placed under `MMACTION2/data`,
> split into train/val.
> Convert the dataset into one of the three supported annotation formats (txt):

* **Rawframe annotation**: each line gives the frame directory as a relative path, the total number of frames, and the label; `dataset_type = 'RawframeDataset'`
![upload_32c255fa2afe603210b3cd78cb774e67](https://hackmd.io/_uploads/HJEGu7390.png)
* **Video annotation**: each line gives the video filepath as a relative path and the label; `dataset_type = 'VideoDataset'`
![upload_a2bdc299c31c4015642818453bcb589f](https://hackmd.io/_uploads/SkaMOmhqR.png)
* **ActivityNet annotation**: the annotation file is a JSON file in which each key is a video name.
![upload_3525a1587e241ba16e459035e5dc2fcc](https://hackmd.io/_uploads/rJ4Xu72cR.png)
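As a concrete illustration of the video-annotation format, here is a minimal sketch that walks a class-per-folder tree and writes a `VideoDataset`-style txt file. The `data/finger/train/<class_name>/*.mp4` layout and the output filename `finger_train_list.txt` are assumptions for this example, not files MMAction2 ships with.

```python
import os

# Assumed layout (hypothetical): data/finger/train/<class_name>/<clip>.mp4
data_root = 'data/finger/train'
out_file = 'data/finger/finger_train_list.txt'  # hypothetical output name

class_names = sorted(os.listdir(data_root))  # folder name -> integer label
with open(out_file, 'w') as f:
    for label, name in enumerate(class_names):
        for clip in sorted(os.listdir(os.path.join(data_root, name))):
            if clip.endswith('.mp4'):
                # VideoDataset line format: "<path relative to data_prefix> <label>"
                f.write(f'{name}/{clip} {label}\n')
```

The paths written here are relative, so the config's `data_prefix` must point at `data/finger/train` for the loader to resolve them.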
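For the "adjust the number of training epochs" note in the Training section above: MMAction2 1.x configs are plain Python (MMEngine style), so the copied config can override the schedule directly. A minimal sketch, assuming an MMEngine-style config; the 50-epoch value and milestones are illustrative.

```python
# Fields to touch in tsn_imagenet-pretrained-r50_8xb32-1x1x3-100e_kinetics400-rgb_copy.py
# (the 50-epoch value is illustrative).
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=50, val_begin=1, val_interval=1)

# If the learning-rate schedule is epoch-based, rescale its milestones
# to match the new max_epochs, e.g.:
param_scheduler = [
    dict(type='MultiStepLR', begin=0, end=50, by_epoch=True,
         milestones=[20, 40], gamma=0.1)
]
```

Checkpoints and logs land in `work_dirs/<config_name>/` by default; `--work-dir` on `tools/train.py` overrides this.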
---

### Model inference (+ camera)

* `python demo/demo_test.py D:/mmaction2/camera/recorded_video.avi --rec work_dirs/tsn_imagenet-pretrained-r50_8xb32-1x1x3-100e_kinetics400-rgb_copy/tsn_imagenet-pretrained-r50_8xb32-1x1x3-100e_kinetics400-rgb_copy.py --print-result --vid-out-dir zelda_out`

---

### Real-time action recognition

**1. Demo**

* `python demo/webcam_demo.py configs/recognition/tsn/tsn_imagenet-pretrained-r50_8xb32-1x1x3-100e_kinetics400-rgb.py tsn_imagenet-pretrained-r50_8xb32-1x1x8-100e_kinetics400-rgb_20220906-2692d16c.pth tools/data/kinetics/label_map_k400.txt --average-size 1 --threshold 0.8`

**2. Trained model**

* `python demo/webcam_demo.py configs/recognition/tsn/tsn_imagenet-pretrained-r50_8xb32-1x1x3-100e_kinetics400-rgb_copy.py work_dirs/tsn_imagenet-pretrained-r50_8xb32-1x1x3-100e_kinetics400-rgb_copy/epoch_50.pth finger.txt --average-size 5 --threshold 0.5`
* Best result: `python demo/webcam_demo.py configs/recognition/tsn/tsn_imagenet-pretrained-r50_8xb32-1x1x3-100e_kinetics400-rgb_copy.py work_dirs/tsn_imagenet-pretrained-r50_8xb32-1x1x3-100e_kinetics400-rgb_copy/finger_1/epoch_50.pth finger.txt --average-size 5 --threshold 0.5`

**3. Spatio-temporal action detection**

* `python demo/webcam_demo_spatiotemporal_det.py --input-video 0 --config configs/detection/slowonly/slowonly_kinetics400-pretrained-r101_8xb16-8x8x1-20e_ava21-rgb.py --checkpoint https://download.openmmlab.com/mmaction/detection/ava/slowonly_omnisource_pretrained_r101_8x8x1_20e_ava_rgb/slowonly_omnisource_pretrained_r101_8x8x1_20e_ava_rgb_20201217-16378594.pth --det-config demo/demo_configs/faster-rcnn_r50_fpn_2x_coco_infer.py --det-checkpoint http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_2x_coco/faster_rcnn_r50_fpn_2x_coco_bbox_mAP-0.384_20200504_210434-a5d8aa15.pth --det-score-thr 0.9 --action-score-thr 0.5 --label-map tools/data/ava/label_map.txt --predict-stepsize 40 --output-fps 65 --show`
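The demo scripts above wrap MMAction2's Python API, so the same inference can also be scripted directly. A minimal sketch, assuming an MMAction2 1.x install; the config/checkpoint paths reuse the trained TSN model from the sections above, and the exact score attribute on the returned `ActionDataSample` varies across 1.x releases (`pred_score` in recent ones, `pred_scores` in some earlier ones).

```python
from mmaction.apis import init_recognizer, inference_recognizer

config = 'configs/recognition/tsn/tsn_imagenet-pretrained-r50_8xb32-1x1x3-100e_kinetics400-rgb_copy.py'
checkpoint = 'work_dirs/tsn_imagenet-pretrained-r50_8xb32-1x1x3-100e_kinetics400-rgb_copy/epoch_50.pth'

model = init_recognizer(config, checkpoint, device='cuda:0')  # or 'cpu'
result = inference_recognizer(model, 'camera/recorded_video.avi')

# `result` is an ActionDataSample; the attribute name is version-dependent
# (pred_score in recent 1.x releases).
scores = result.pred_score.tolist()
labels = [line.strip() for line in open('finger.txt')]
top = max(range(len(scores)), key=scores.__getitem__)
print(f'predicted: {labels[top]} ({scores[top]:.3f})')
```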