Vehicle ReID

@Lab607

NTULab: Vehicle ReID


  • Knowledge note: math symbols: ^ denotes a hat; Domain adaptation (DA):
  • Purpose: a Gram matrix is made up of the pairwise inner products of a set of vectors, so it reflects the relationships among the vectors in that set. Application of the Gram matrix, style transfer: compute the Gram matrix of each image's feature maps separately, then take minimizing the difference between the two images' Gram matrices as the optimization objective (see the Gram-matrix sketch after this list).
  • Login mail: password: CVPR 2021 links: https://openaccess.thecvf.com/CVPR2021 and http://cvpr2021.thecvf.com/node/179
  • Synthetic fog method: the Koschmieder model [18] (see the fog-synthesis sketch after this list). Optical flow captures the motion information of an image and rests on basic assumptions such as brightness and gradient constancy; fog degradation breaks the Brightness Constancy Constraint (BCC) and Gradient Constancy Constraint (GCC) that existing optical flow methods rely on. Unknown:
  • Dataset tips: GeM (Generalized Mean) pooling (see the GeM sketch after this list); backbone with Instance-Batch Normalization (IBN); input resize; learning rate.
  • Unknown: minimax game [11] (see the minimax-loss sketch after this list). Mode collapse: in unsupervised training, the generator may map most data of the source domain to only a few samples of the target domain, so the generated results lack diversity.
  • InfoMax-GAN: Improved Adversarial Image Generation via Information Maximization and Contrastive Learning https://openaccess.thecvf.com/content/WACV2021/papers/Lee_InfoMax-GAN_Improved_Adversarial_Image_Generation_via_Information_Maximization_and_Contrastive_WACV_2021_paper.pdf Unknown: note that the variational idea is, when a distribution is hard to solve for, to approximate it with another tractable distribution, and to solve for the variational lower bound via variational information maximization (paper: The IM algorithm: A variational approach to information maximization) https://blog.csdn.net/winycg/article/details/105297089 Tip: (see the contrastive InfoNCE sketch after this list)
  • Knowledge Distillation: the teacher model is trained first, then its essence is extracted (distilled) as training material for the student model; this extracted essence refers to the trained parameter weights (see the distillation-loss sketch after this list). Self-Supervised: https://zhuanlan.zhihu.com/p/125721565 SSL is equivalent to supervised learning with pseudo-labels; self-supervised learning avoids the cost of annotating large datasets by adopting self-defined pseudo-labels as supervision.
  • An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (ViT). Unknown: layer norm; multi-head self-attention: multi-head attention projects Q, K, V through h different linear transformations and then concatenates the outputs of the different attention heads (see the attention sketch after this list).
  • Summary: self-supervised + attentive. Outline, paper names: Discovering Discriminative Geometric Features with Self-Supervised Attention for Vehicle Re-Identification and Beyond; The Devil is in the Details: Self-Supervised Attention for Vehicle Re-Identification (ECCV 2020); Cluster Contrast for Unsupervised Person Re-Identification.
  • Yeh [x] Real-world dataset: VeRi-Wild. Synthetic fog with β in 0.6-2 and α in 0.7-1. Data split: train: image_train_synreal; gallery: image_test_clear; query: image_query_hazy. VERIWILD_syn counts: image_train_synreal: 277797; image_test_clear: 8301; image_query_hazy: 545.
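
A minimal sketch of the Gram-matrix idea from the style-transfer note above, assuming PyTorch and a CNN feature map of shape (C, H, W); the function names and the normalization choice are illustrative, not taken from the original notes.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    # Pairwise inner products of the flattened channel vectors of a (C, H, W) map;
    # entry (i, j) reflects how channels i and j co-activate across locations.
    c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)  # normalize so the scale is size-independent

def style_loss(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
    # Minimize the difference between the two images' Gram matrices,
    # as described in the style-transfer note.
    return F.mse_loss(gram_matrix(feat_a), gram_matrix(feat_b))
```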
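
The Koschmieder model named in the fog note is usually written I(x) = J(x) * t(x) + A * (1 - t(x)) with transmission t(x) = exp(-beta * d(x)); a minimal NumPy sketch under that assumption, where the argument names and the default airlight value are illustrative (the beta range 0.6-2 matches the VeRi-Wild note above).

```python
import numpy as np

def koschmieder_fog(clear: np.ndarray, depth: np.ndarray,
                    beta: float = 1.0, airlight: float = 0.9) -> np.ndarray:
    # Synthesize fog: I = J * t + A * (1 - t), with t = exp(-beta * depth).
    # clear: clean image in [0, 1], shape (H, W, 3); depth: per-pixel depth (H, W).
    t = np.exp(-beta * depth)[..., None]  # transmission map, broadcast over RGB
    return clear * t + airlight * (1.0 - t)
```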
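
A minimal sketch of the GeM (Generalized Mean) pooling mentioned in the dataset-tips note, assuming PyTorch; p = 3 as a learnable parameter is a common choice, not taken from the notes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    # Generalized Mean pooling: average-pool x^p, then take the p-th root.
    # p = 1 recovers average pooling; large p approaches max pooling.
    def __init__(self, p: float = 3.0, eps: float = 1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.ones(1) * p)  # p is often learned jointly
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (N, C, H, W)
        x = x.clamp(min=self.eps).pow(self.p)
        return F.adaptive_avg_pool2d(x, 1).pow(1.0 / self.p).flatten(1)  # (N, C)
```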
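
For the minimax game in the mode-collapse note: G and D play min_G max_D E[log D(x)] + E[log(1 - D(G(z)))]. A minimal PyTorch sketch using the common non-saturating generator variant; the function names are illustrative.

```python
import torch
import torch.nn.functional as F

def d_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    # D maximizes log D(x) + log(1 - D(G(z))); minimizing this BCE is equivalent.
    return (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
            F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))

def g_loss(d_fake: torch.Tensor) -> torch.Tensor:
    # Non-saturating variant: G maximizes log D(G(z)) for stronger early gradients.
    return F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
```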
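
InfoMax-GAN maximizes a variational (contrastive) lower bound on mutual information; a minimal InfoNCE sketch of that idea, assuming paired (N, D) feature batches where row i of each tensor forms a positive pair. The temperature and names are illustrative, not InfoMax-GAN's exact formulation.

```python
import torch
import torch.nn.functional as F

def infonce_loss(queries: torch.Tensor, keys: torch.Tensor,
                 tau: float = 0.1) -> torch.Tensor:
    # Contrastive lower bound on mutual information: positives lie on the
    # diagonal; all other rows in the batch act as negatives.
    q = F.normalize(queries, dim=1)
    k = F.normalize(keys, dim=1)
    logits = (q @ k.t()) / tau                     # (N, N) cosine similarities
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)
```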
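
The distillation note frames the transferred essence as the teacher's trained weights; the widely used loss (Hinton et al.) instead matches the student's softened predictions to the teacher's. A minimal sketch of that soft-target formulation; T and alpha are illustrative defaults.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor,
                      labels: torch.Tensor, T: float = 4.0,
                      alpha: float = 0.5) -> torch.Tensor:
    # Blend hard-label cross-entropy with KL to the teacher's softened outputs.
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)  # T^2 restores gradient scale
    return alpha * hard + (1.0 - alpha) * soft
```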
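
A minimal sketch of the multi-head self-attention described in the ViT note: Q, K, V are projected through h different linear transformations, attended per head, and the head outputs are concatenated. PyTorch, with a fused QKV projection as a common implementation choice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadSelfAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        assert dim % heads == 0
        self.heads, self.dk = heads, dim // heads
        self.qkv = nn.Linear(dim, dim * 3)  # the h per-head projections, fused
        self.out = nn.Linear(dim, dim)      # mixes the concatenated heads

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, dim)
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # split into heads: (B, heads, N, dk)
        q, k, v = (t.view(b, n, self.heads, self.dk).transpose(1, 2)
                   for t in (q, k, v))
        attn = F.softmax(q @ k.transpose(-2, -1) / self.dk ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, self.heads * self.dk)
        return self.out(out)  # concatenate heads, then final linear projection
```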