# Zero Shot Learning Note
## Summary
1. datasets differ from standard classification: classes carry ground-truth semantic embedding labels
2. biggest problems: domain shift, the hubness problem, and the semantic gap [2]
3. Solutions for domain shift
* SAE (visual feature <=reconstruct=> semantic)
4. Transductive
* Q: projection domain shift problem [2]
* A1: Semantic Autoencoder for ZSL [3]
    * TODO: add details
    * pros: the reconstruction constraint preserves the original feature information
    * cons: insufficient generalization
* A2: generative models (GANs)
5. DAP (per-attribute classification accuracy is high, but final class accuracy is low) vs. IAP
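The DAP scoring in item 5 can be sketched as follows: each unseen class is scored by the product of its attributes' posteriors, which is why good per-attribute accuracy does not guarantee good class accuracy (errors multiply across attributes). A minimal numpy sketch; the function name and toy signatures are illustrative, not from the paper:

```python
import numpy as np

def dap_predict(attr_probs, class_attrs):
    """DAP inference sketch.
    attr_probs:  (M,)   p(a_m = 1 | x) from M attribute classifiers
    class_attrs: (C, M) binary attribute signatures of the C unseen classes
    Returns the predicted class index and the per-class scores."""
    # p(a_m = s | x) for the signature value s of each class
    p = np.where(class_attrs == 1, attr_probs, 1.0 - attr_probs)
    scores = p.prod(axis=1)  # errors in single attributes compound here
    return scores.argmax(), scores
```

Even with 90%-accurate attribute classifiers, a class described by many attributes gets a score that decays multiplicatively, illustrating the DAP weakness noted above.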
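The SAE idea from item 3 / A1 ([3]) ties encoder and decoder: one matrix W projects visual features to the semantic space, and its transpose must reconstruct the visual features. The paper solves this in closed form (a Sylvester equation); the sketch below instead minimizes the same tied-weight objective by plain gradient descent, which is simpler to read. All names are illustrative:

```python
import numpy as np

def sae_fit(X, S, lam=0.1, lr=1e-3, steps=500, seed=0):
    """Fit the SAE projection W by gradient descent on
        ||X - W.T @ (W @ X)||^2 + lam * ||W @ X - S||^2
    X: (d, n) visual features; S: (k, n) semantic attribute vectors."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    k = S.shape[0]
    W = rng.normal(scale=0.01, size=(k, d))
    for _ in range(steps):
        E = W @ X                # encode to semantic space: (k, n)
        R = W.T @ E - X          # reconstruction residual:   (d, n)
        g_rec = 2 * (E @ R.T + W @ R @ X.T)   # grad of reconstruction term
        g_sem = 2 * lam * (E - S) @ X.T        # grad of semantic alignment term
        W -= lr * (g_rec + g_sem) / n
    return W
```

The reconstruction term is what "preserves the original feature information" (the pro listed above): W cannot collapse to a degenerate projection because W.T must map the code back to X.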
## Idea
1. extra semantic vector
1. method
* map visual features to a semantic space that is constrained by, and extended beyond, the current attribute labels (the first n dims are compared with the current attribute label to compute a reconstruction loss)
* detach the generator and extend it to every class
* add a binary classifier
2. analysis
- Pros:
* adds extra information, which enlarges the intra-class variance
- Cons:
* the binary classifier is hard to train
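The mapping in method 1 can be made concrete: a projection sends a visual feature into an extended semantic code whose first n dims are tied to the class attribute vector (the reconstruction loss above), while the remaining dims carry the "extra information". A minimal numpy sketch with a linear projection; all names and the MSE form of the loss are assumptions for illustration:

```python
import numpy as np

def project_and_loss(x, A, W, n_attr):
    """Project visual feature x into an extended semantic space.
    x: (d,) visual feature; A: (n_attr,) class attribute vector
    W: (n_attr + n_extra, d) projection matrix
    Returns the full code z, the free 'extra' dims, and the
    alignment loss tying the first n_attr dims to A."""
    z = W @ x                       # extended semantic code
    sem, extra = z[:n_attr], z[n_attr:]
    loss = np.mean((sem - A) ** 2)  # compare first n dims with the attribute label
    return z, extra, loss
```

The `extra` dims are unconstrained by the loss, which is exactly what lets them absorb intra-class variance, and also why a classifier on top of them can be hard to train.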
2. GAN-based method
1. method
* generate visual samples conditioned on the semantic attributes of specific classes
* impose variational inference on the visual features to disentangle the intra-class variance
* encourage the variance of all classes to follow the same distribution (for both original and generated features)
* during the testing stage, same as in paper [4]
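The variational-inference step above can be sketched with the two standard ingredients: the reparameterization trick for sampling, and a KL term to a shared prior, which is what pushes every class's latent variance toward the same (standard normal) distribution. A minimal numpy sketch, assuming a VAE-style latent; function names are illustrative:

```python
import numpy as np

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps (reparameterization trick),
    keeping the sampling step differentiable w.r.t. mu and logvar."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ).
    Minimizing this for every class encourages all class-conditional
    variances to follow the same distribution."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)
```

In training, this KL term would be added to the generator's loss for both original and generated features, per the bullet above.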
## Reference
1. GCAN: Graph Convolutional Adversarial Network for Unsupervised Domain Adaptation
2. Transductive Multi-View ZSL
3. Semantic Autoencoder for Zero-Shot Learning
4. Leveraging the Invariant Side of Generative Zero-Shot Learning