# Global and Local Mixture Consistency Cumulative Learning for Long-Tailed Visual Recognition

Authors: Fei Du, Peng Yang, Qi Jia, Fengtao Nan, Xiaoting Chen, Yun Yang

Paper link: https://openaccess.thecvf.com/content/CVPR2023/papers/Du_Global_and_Local_Mixture_Consistency_Cumulative_Learning_for_Long-Tailed_Visual_CVPR_2023_paper.pdf

Notes by: [chewei](https://hackmd.io/@WTuIbJANSB26DiAX-WL4Sg)

## Main Contributions

* A one-stage training strategy: GLMC
* A new loss: the global and local mixture consistency loss
* A loss-weighting parameter that accumulates epoch by epoch

$\rightarrow$ the contributions center on improving the loss.

## Overall Architecture

* First, following BBN [1], an inverse sampler is built whose class sampling probability is inversely proportional to class size, so the long-tailed distribution is reversed (head and tail classes swap sampling frequency). The goal is to improve tail-class accuracy. (A sampler sketch is given in the appendix below.)

**Sample function:**

$$
P_i=\frac{w_i}{\sum^c_{j=1}w_j}, \qquad w_i=\frac{N_{max}}{N_i}
$$

![image](https://hackmd.io/_uploads/HJOIvbWgC.png)

* Next, the same pair of images is mixed in two ways: a global mixture (MixUp) and a local mixture (CutMix). (Both are sketched in the appendix.)

**Global mixture function:**

$$
\begin{align*}
& \lambda \sim \mathrm{Beta}(\beta,\beta) \\
& \tilde{x}_g =\lambda x_i + (1-\lambda)x_j, \\
& \tilde{p}_g = \lambda p_i + (1 - \lambda)p_j.
\end{align*}
$$

**Local mixture function:**

$$
\begin{align*}
& \tilde{x}_l=\mathbf{M}\odot x_i + (\mathbf{1} -\mathbf{M})\odot x_j, \\
& \text{with bounding box } B=(r_x,r_y,r_w,r_h): \\
& r_x \sim \mathrm{Uniform}(0,W),\quad r_w = W\sqrt{1-\lambda}, \\
& r_y \sim \mathrm{Uniform}(0,H),\quad r_h=H\sqrt{1-\lambda}.
\end{align*}
$$

![image](https://hackmd.io/_uploads/HyUPbEbe0.png)

* The mixed images are then fed into the model (ResNet-32), and three losses are computed (the combined objective is sketched in the appendix):
  * $L_{CE}(C(\tilde{x}),\tilde{y})$
  * $L_{cb}(C(\tilde{x}),\tilde{w},\tilde{y})$
  * $L_{sim}=sim(u_g,sg(h_l))+sim(u_l,sg(h_g))$

##### **$L_{CE}$ is the cross-entropy loss:**

$$
L_{CE} =-\frac{1}{2N}\sum_{i=1}^{N}\left(\tilde{p}_{g}^{i}\log f(\tilde{x}_{g}^{i})+\tilde{p}_{l}^{i}\log f(\tilde{x}_{l}^{i})\right)
$$

##### **$L_{cb}$ is the rebalanced loss, which adds a per-sample class weight $\tilde{w}^i$:**

$$
L_{cb} =-\frac{1}{2N}\sum_{i=1}^{N}\tilde{w}^i\left(\tilde{p}_{g}^{i}\log f(\tilde{x}_{g}^{i})+\tilde{p}_{l}^{i}\log f(\tilde{x}_{l}^{i})\right)
$$

##### **$L_{sim}$ is the consistency loss proposed by SimSiam [2], where $sim(\cdot,\cdot)$ is cosine similarity and $sg(\cdot)$ denotes stop-gradient:**

$$
L_{sim}=sim(u_g,sg(h_l))+sim(u_l,sg(h_g))
$$

:::success
The three losses are combined as:

$$
L_{total}= \alpha L_{CE}+ (1-\alpha)L_{cb} + \gamma L_{sim}
$$

$\gamma$ is set to 10 based on the ablation experiments, and $\alpha$ follows the cumulative schedule

$$
\alpha = 1-\left(\frac{T}{T_{max}}\right)^2
$$

so training shifts gradually from the plain cross-entropy term to the rebalanced term.
:::

![image](https://hackmd.io/_uploads/HykmIEWgC.png)

---

Overall architecture diagram:
![image](https://hackmd.io/_uploads/B1UdLN-lC.png)

Loss architecture diagram:
![image](https://hackmd.io/_uploads/SJWTLVbxC.png)

## Related Work

SimSiam [2]: learns representations by maximizing the cosine similarity between the predictor output of one view and the stop-gradient projector output of the other, which is the source of $L_{sim}$ above.

## References

[1] BBN: Bilateral-Branch Network with Cumulative Learning for Long-Tailed Visual Recognition, CVPR 2020.
[2] SimSiam: Exploring Simple Siamese Representation Learning, CVPR 2021.

[BBN notes](https://medium.com/@_Xing_Chen_/bbn-bilateral-branch-network-with-cumulative-learning-for-long-tailed-visual-recognition-%E8%AB%96%E6%96%87%E8%A9%B3%E7%B4%B0%E8%A7%A3%E8%AE%80-2491805342e4)
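## Appendix: Code Sketches

A minimal PyTorch sketch of the reversed (inverse) sampler described above, assuming integer class labels are available as a `targets` list. `make_reversed_sampler` is an illustrative name of mine, not from the paper's released code.

```python
import torch
from torch.utils.data import WeightedRandomSampler

def make_reversed_sampler(targets):
    """Sample classes inversely to their frequency: P_i = w_i / sum_j w_j, w_i = N_max / N_i."""
    targets = torch.as_tensor(targets)
    class_counts = torch.bincount(targets).float()   # N_i for each class
    w = class_counts.max() / class_counts            # w_i = N_max / N_i
    p = w / w.sum()                                  # P_i = w_i / sum_j w_j
    sample_weights = p[targets]                      # per-sample draw probability
    return WeightedRandomSampler(sample_weights,
                                 num_samples=len(targets),
                                 replacement=True)
```

Drawing with `replacement=True` lets rare tail images appear many times per epoch, which is what reverses the long-tailed distribution.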
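Next, a sketch of the global (MixUp) and local (CutMix) mixtures applied to the same batch pair with a shared $\lambda$. The names `x_u` (uniform-sampler batch) and `x_r` (reversed-sampler batch) are assumptions of mine, and the area-based label correction for the local branch follows the original CutMix formulation rather than anything stated explicitly in these notes.

```python
import numpy as np
import torch
import torch.nn.functional as F

def global_local_mixture(x_u, y_u, x_r, y_r, num_classes, beta=1.0):
    """Return (x_g, p_g, x_l, p_l): globally and locally mixed images with soft labels."""
    lam = np.random.beta(beta, beta)
    p_u = F.one_hot(y_u, num_classes).float()
    p_r = F.one_hot(y_r, num_classes).float()

    # Global mixture (MixUp): pixel-wise interpolation.
    x_g = lam * x_u + (1 - lam) * x_r
    p_g = lam * p_u + (1 - lam) * p_r

    # Local mixture (CutMix): paste a random box from x_r into x_u (NCHW layout).
    _, _, H, W = x_u.shape
    rw, rh = int(W * np.sqrt(1 - lam)), int(H * np.sqrt(1 - lam))
    rx, ry = np.random.randint(W), np.random.randint(H)
    x1, y1 = np.clip(rx - rw // 2, 0, W), np.clip(ry - rh // 2, 0, H)
    x2, y2 = np.clip(rx + rw // 2, 0, W), np.clip(ry + rh // 2, 0, H)
    x_l = x_u.clone()
    x_l[:, :, y1:y2, x1:x2] = x_r[:, :, y1:y2, x1:x2]

    # Correct lambda by the actual cut area so the soft label matches the pixels.
    lam_l = 1 - (x2 - x1) * (y2 - y1) / (W * H)
    p_l = lam_l * p_u + (1 - lam_l) * p_r
    return x_g, p_g, x_l, p_l
```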
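Finally, a sketch of the combined objective with the cumulative $\alpha$ schedule, assuming the model exposes classifier logits `z`, projector outputs `h`, and predictor outputs `u` for each mixed view; all names are illustrative. The SimSiam term is written here as *negative* cosine similarity so that minimizing the loss maximizes agreement; the exact sign and scaling in the paper may differ.

```python
import torch
import torch.nn.functional as F

def soft_ce(logits, soft_targets, sample_w=None):
    """Cross-entropy against soft (mixed) labels, optionally reweighted per sample."""
    loss = -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1)
    if sample_w is not None:
        loss = loss * sample_w        # rebalancing weight w_i per sample
    return loss.mean()

def neg_cosine(u, h):
    """SimSiam-style negative cosine similarity with stop-gradient on h."""
    return -F.cosine_similarity(u, h.detach(), dim=1).mean()

def glmc_loss(z_g, z_l, p_g, p_l, u_g, h_g, u_l, h_l, w,
              epoch, max_epochs, gamma=10.0):
    alpha = 1 - (epoch / max_epochs) ** 2                       # cumulative schedule
    l_ce = 0.5 * (soft_ce(z_g, p_g) + soft_ce(z_l, p_l))        # L_CE
    l_cb = 0.5 * (soft_ce(z_g, p_g, w) + soft_ce(z_l, p_l, w))  # L_cb
    l_sim = neg_cosine(u_g, h_l) + neg_cosine(u_l, h_g)         # L_sim, cross-view
    return alpha * l_ce + (1 - alpha) * l_cb + gamma * l_sim
```

As training progresses, $\alpha$ decays from 1 toward 0, so early epochs emphasize the standard cross-entropy term and late epochs emphasize the rebalanced term, mirroring BBN's cumulative learning.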