# Notes on "[Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation](http://proceedings.mlr.press/v119/liang20a/liang20a.pdf)"
###### tags: `notes` `unsupervised` `domain-adaptation`
Notes Author: [Rohit Lal](https://rohitlal.net/)
---
## Brief Outline
- This work tackles a practical setting where only a trained source model is available, and investigates how to effectively use such a model, without any source data, to solve unsupervised domain adaptation (UDA) problems.
- Proposes a simple yet generic representation learning framework named Source HypOthesis Transfer (SHOT).
## Introduction
- Existing DA methods need to access the source data during adaptation, which is inefficient for data transmission and may violate data privacy policies.
- This setting differs from vanilla unsupervised DA in that the source model, rather than the source data, is provided to the unlabeled target domain.
- Develops a Source HypOthesis Transfer (SHOT) framework that learns the domain-specific feature encoding module while fixing the source classifier module (the hypothesis), since the source hypothesis encodes the distribution information of the unseen source data.
## Methodology
![](https://i.imgur.com/kJ2jsdj.png)
- generate the source model from the source data.
- abandon the source data and transfer the model (including the source hypothesis) to the target domain; adaptation never touches the source data again.
- study how to design better network architectures for both models to improve adaptation performance.
- SHOT minimises the overall objective shown below; each part is explained in the subsequent sections.
![](https://i.imgur.com/kQpVU3e.png)
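For reference, a transcription of the overall objective as I recall it from the paper (the screenshot above is authoritative): the two IM terms plus a $\beta$-weighted cross-entropy over pseudo-labels, where $\delta_k$ is the softmax output and $\hat{p} = \mathbb{E}_{x_t}[\delta(f_t(x_t))]$ is the mean prediction.

$$
\mathcal{L}(g_t) =
\underbrace{-\,\mathbb{E}_{x_t} \sum_{k=1}^{K} \delta_k(f_t(x_t)) \log \delta_k(f_t(x_t))}_{\text{entropy (certainty)}}
+ \underbrace{\sum_{k=1}^{K} \hat{p}_k \log \hat{p}_k}_{\text{diversity}}
- \beta\, \underbrace{\mathbb{E}_{(x_t, \hat{y}_t)} \sum_{k=1}^{K} \mathbb{1}[k = \hat{y}_t] \log \delta_k(f_t(x_t))}_{\text{pseudo-label cross-entropy}}
$$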
### Source model generation
- develop a deep neural network and learn the source model on the labeled source data.
- train with the standard cross-entropy loss.
- use label smoothing on the one-hot labels, which yields less overconfident and more transferable source predictions (a sketch follows this list).
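A minimal PyTorch sketch of label-smoothed cross-entropy, assuming the common smoothing factor `alpha=0.1`; this is an illustration, not the authors' code.

```python
import torch
import torch.nn.functional as F

def label_smoothed_ce(logits: torch.Tensor, labels: torch.Tensor,
                      alpha: float = 0.1) -> torch.Tensor:
    """Cross-entropy against smoothed one-hot targets.

    logits: (batch, num_classes) raw scores from the source model.
    labels: (batch,) integer class indices.
    alpha:  smoothing factor; each target becomes
            (1 - alpha) * one_hot + alpha / num_classes.
    """
    num_classes = logits.size(1)
    log_probs = F.log_softmax(logits, dim=1)
    one_hot = F.one_hot(labels, num_classes).float()
    smoothed = (1.0 - alpha) * one_hot + alpha / num_classes
    return -(smoothed * log_probs).sum(dim=1).mean()
```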
### Source Hypothesis Transfer with Information Maximization (SHOT-IM)
- Develop a Source HypOthesis Transfer (SHOT) framework by learning the domain specific feature encoding module while fixing the source classifier module (hypothesis)
- SHOT uses the same classifier module for different domain-specific feature learning modules
- The ideal target outputs should be close to one-hot encodings (individually certain) while differing from each other across the dataset (globally diverse).
- For this purpose, they adopt the information maximization (IM) loss, which combines an entropy term (making each prediction certain) with a diversity term (keeping the average prediction balanced across classes); a sketch follows the equation below.
![](https://i.imgur.com/jf0SIYi.png)
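A hedged PyTorch sketch of the IM loss under its usual formulation (per-sample entropy minus the entropy of the batch-mean prediction); variable names are illustrative.

```python
import torch
import torch.nn.functional as F

def im_loss(logits: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Information-maximization loss for a batch of target logits.

    Minimizing this makes each prediction confident (low conditional
    entropy) while keeping predictions diverse over the batch (high
    entropy of the mean prediction).
    """
    probs = F.softmax(logits, dim=1)                       # (batch, K)
    # Conditional entropy: average per-sample entropy (want it small).
    ent = -(probs * torch.log(probs + eps)).sum(dim=1).mean()
    # Diversity: entropy of the batch-mean prediction (want it large).
    mean_probs = probs.mean(dim=0)                         # (K,)
    div = -(mean_probs * torch.log(mean_probs + eps)).sum()
    return ent - div
```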
### Source Hypothesis Transfer Augmented with Self-supervised Pseudo-labeling
> **What I understood**
> We have to initialise the centroids first. Instead of initialising them randomly, we do it smartly: the source-trained classifier already outputs a probability for each class, so rather than plainly averaging the target features per class, we take a weighted average in which each sample's weight is its softmax probability from that classifier.
- Inspired by DeepCluster.
- First, compute the centroid for each class in the target domain, similar to weighted k-means clustering:
![](https://i.imgur.com/nuziSdv.png)
- This uses the feature representations, weighted by their predicted class probabilities, to find the cluster centers.
- These centroids characterize the distribution of the different categories within the target domain more robustly and reliably.
- Obtain the pseudo-labels via the nearest-centroid classifier, where $D_f(a, b)$ is the cosine distance between $a$ and $b$:
![](https://i.imgur.com/WNUZAUT.png)
- Finally, recompute the centroids from these pseudo-labels and use the updated centroids to generate new pseudo-labels (a sketch of the full procedure follows the equation below):
![](https://i.imgur.com/RzKnuFF.png)
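A minimal PyTorch sketch of the two-round pseudo-labeling procedure described above (weighted centroids → nearest-centroid labels → recompute centroids → relabel). `features` and `probs` are assumed to be precomputed over the whole target set; all names are illustrative, not the authors' code.

```python
import torch
import torch.nn.functional as F

def cosine_dist(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Pairwise cosine distance between rows of a and rows of b."""
    a = F.normalize(a, dim=1)
    b = F.normalize(b, dim=1)
    return 1.0 - a @ b.t()

@torch.no_grad()
def pseudo_labels(features: torch.Tensor, probs: torch.Tensor) -> torch.Tensor:
    """Two-round self-supervised pseudo-labeling.

    features: (N, d) target features from the encoder.
    probs:    (N, K) softmax outputs of the frozen source classifier.
    """
    # Round 1: soft, probability-weighted class centroids.
    centroids = probs.t() @ features                     # (K, d)
    centroids = centroids / probs.sum(dim=0).unsqueeze(1).clamp(min=1e-8)
    labels = cosine_dist(features, centroids).argmin(dim=1)

    # Round 2: recompute centroids from the hard pseudo-labels, relabel.
    one_hot = F.one_hot(labels, probs.size(1)).float()   # (N, K)
    centroids = one_hot.t() @ features
    centroids = centroids / one_hot.sum(dim=0).unsqueeze(1).clamp(min=1e-8)
    return cosine_dist(features, centroids).argmin(dim=1)
```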
## Conclusion
- SHOT learns the target-specific feature encoding module that best fits the frozen source hypothesis by exploiting information maximization and self-supervised pseudo-labeling.
- Experiments on both digit and object recognition verify that SHOT achieves competitive and even state-of-the-art performance.