###### tags: `GAS-GCN`

# Gated Action-Specific Graph Convolutional Networks for Skeleton-Based Action Recognition

## About the paper

- [read](https://www.mdpi.com/1424-8220/20/12/3499)
- authors: Wensong Chan, Zhiqiang Tian, and Yang Wu
- published: 2020/6/21
- contributions
    - ASGCM: combines structural and implicit edges
    - Gated CNN: filters out useless temporal information
- Datasets
    - NTU-RGB+D 120
        - collected by Nanyang Technological University
        - RGB videos: 1920x1080
        - depth map videos: 512x424
        - IR (infrared) videos: 512x424
        - ==3D== skeletal data: 25 joints
        - [download](https://github.com/shahroudy/NTURGB-D/tree/master)
    - Kinetics
        - mostly sourced from YouTube
        - categories: person, person-person, person-object
        - at least 600 videos; each video lasts about 10 seconds
        - labeled
        - [download](https://k4kan.medium.com/ntu-rgb-d-%E8%B3%87%E6%96%99%E9%9B%86%E4%BB%8B%E7%B4%B9-718e45554a61)
        - [detail](https://zhuanlan.zhihu.com/p/347490726)
- Experiment details
    - PyTorch: used to implement GAS-GCN
    - SGD: optimization strategy
    - 2 NVIDIA 2080Ti GPUs, each with 11 GB of memory
    - NTU-RGB+D training
        - T=300
        - epoch=50, base learning rate:
            - 1~29: 0.1
            - 30~40: 0.1 x 0.1
            - 41~50: 0.1 x 0.1 x 0.1
        - batch size=28
    - Kinetics training
        - T=150
        - epoch=65, base learning rate:
            - 1~44: 0.1
            - 45~55: 0.1 x 0.1
            - 56~65: 0.1 x 0.1 x 0.1
        - batch size=64
    - 𝜇=0.5
    - (a sketch of this step-decay SGD schedule is appended at the end of this note)

## GAS-GCN

- Gated Action-Specific Graph Convolutional Networks
- Why
    - GCN-based methods only rely on the skeletal adjacency matrix (only physically adjacent joints, but an action does not necessarily affect only the joints around a single one)
    - the connection between two joints is a structural edge
    - solution: action-specific graph convolutional module (ASGCM)
        - generates implicit edges
        - decides the ratio for combining structural and implicit edges according to the action (see the sketch appended at the end of this note)
- ==GAS-GCN==
    - skeleton-based recognition depends on the contexts and long-range dependencies in the temporal dimension
    - but not all of that information is useful for action recognition
    - solution: gated mechanism
        - controls the information flow, as gates do in RNNs
        - operates along the time dimension (see the sketch appended at the end of this note)

## Source code

- [Tian Lab](https://github.com/Tian-lab/gas-gcn/tree/master)
- [STA-GCN](https://www.semanticscholar.org/paper/Spatial-Temporal-Attention-Graph-Convolutional-with-Shiraki-Hirakawa/4280ab3111c6d4922a832f11f7a073afe062736d)
- [STA-GCN paper](https://openaccess.thecvf.com/content/ACCV2020/papers/Shiraki_Spatial_Temporal_Attention_Graph_Convolutional_Networks_with_Mechanics-Stream_for_Skeleton-based_ACCV_2020_paper.pdf)
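The training schedules above (base learning rate 0.1, divided by 10 twice at fixed epochs) correspond to a plain step-decay SGD setup. Below is a minimal sketch of the NTU-RGB+D schedule using PyTorch's `SGD` and `MultiStepLR`; the placeholder model, the momentum, and the weight decay are assumptions for illustration, not the authors' training script.

```python
# Minimal sketch of the NTU-RGB+D schedule: SGD, base LR 0.1, divided by 10
# after epoch 29 and again after epoch 40 (epochs 1-29: 0.1, 30-40: 0.01,
# 41-50: 0.001). `model`, momentum, and weight decay are assumed placeholders.
import torch
from torch.optim.lr_scheduler import MultiStepLR

model = torch.nn.Linear(75, 60)  # placeholder for the GAS-GCN model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)  # assumed values
scheduler = MultiStepLR(optimizer, milestones=[29, 40], gamma=0.1)

for epoch in range(1, 51):        # 50 epochs, as in the note
    # ... one pass over the training set would go here ...
    scheduler.step()              # drop the LR at the chosen milestones
    print(epoch, scheduler.get_last_lr())
```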
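A rough sketch of the ASGCM idea: a graph-convolution layer that mixes the fixed skeletal adjacency (structural edges) with a data-dependent adjacency inferred from the input (implicit edges), weighted by a learnable ratio. The embedding layers, the softmax affinity, and the single scalar `alpha` are illustrative assumptions, not the paper's exact module.

```python
# Sketch of an action-specific graph convolution that combines structural and
# implicit edges; the implicit adjacency and the mixing weight are assumptions.
import torch
import torch.nn as nn

class ActionSpecificGraphConv(nn.Module):
    def __init__(self, in_channels, out_channels, A):
        super().__init__()
        # A: (V, V) normalised structural adjacency of the skeleton
        self.register_buffer("A", A)
        self.theta = nn.Conv2d(in_channels, in_channels // 4, kernel_size=1)
        self.phi = nn.Conv2d(in_channels, in_channels // 4, kernel_size=1)
        self.out = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.alpha = nn.Parameter(torch.zeros(1))  # ratio of implicit edges

    def forward(self, x):
        # x: (N, C, T, V) = batch, channels, frames, joints
        # implicit edges: joint-to-joint affinity inferred from the input
        q = self.theta(x).mean(dim=2).permute(0, 2, 1)   # (N, V, C')
        k = self.phi(x).mean(dim=2)                      # (N, C', V)
        implicit = torch.softmax(q @ k, dim=-1)          # (N, V, V)
        # combine structural and implicit edges
        adj = self.A.unsqueeze(0) + self.alpha * implicit
        # graph convolution over the joint dimension
        y = torch.einsum("nctv,nvw->nctw", x, adj)
        return self.out(y)

# usage: 64-channel features, 300 frames, 25 joints (NTU skeleton)
A = torch.eye(25)  # placeholder adjacency
layer = ActionSpecificGraphConv(64, 128, A)
out = layer(torch.randn(2, 64, 300, 25))  # -> (2, 128, 300, 25)
```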
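A rough sketch of the gated temporal filtering idea: a convolution along the frame axis whose output is modulated by a sigmoid gate (a GLU-style gated CNN), so uninformative frames contribute less. The kernel size and layer layout are assumptions, not the paper's exact gated CNN.

```python
# Sketch of a gated temporal convolution over the frame dimension: one branch
# produces features, the other a sigmoid gate that suppresses useless frames.
import torch
import torch.nn as nn

class GatedTemporalConv(nn.Module):
    def __init__(self, channels, kernel_size=9):  # kernel size is assumed
        super().__init__()
        pad = (kernel_size - 1) // 2
        # convolve along T only (kernel spans frames, not joints)
        self.feat = nn.Conv2d(channels, channels, (kernel_size, 1), padding=(pad, 0))
        self.gate = nn.Conv2d(channels, channels, (kernel_size, 1), padding=(pad, 0))

    def forward(self, x):
        # x: (N, C, T, V); the gate decides how much temporal context passes
        return self.feat(x) * torch.sigmoid(self.gate(x))

# usage on skeleton features: batch 2, 128 channels, 300 frames, 25 joints
tcn = GatedTemporalConv(128)
out = tcn(torch.randn(2, 128, 300, 25))  # same shape as the input
```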