tags: notes, unsupervised, domain-adaptation
Author: Akshay Kulkarni
Note: The paper was published at NIPS'16 (NeurIPS since 2018).
Brief Outline
A domain adaptation approach that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain.
Introduction
- Domain Adaptation (DA) is machine learning under the shift between training and test distributions.
- Several DA approaches aim to bridge the source and target domains by learning domain-invariant feature representations without using target labels, so that the classifier learnt on the source domain can be used on the target domain.
- Donahue et al. 2014 and Yosinski et al. 2014 show that deep networks can learn more transferable features for DA by disentangling explanatory factors of variation behind domains.
- Tzeng et al. 2014, Long et al. 2015, Ganin and Lempitsky 2015, and Tzeng et al. 2015 embed DA in the pipeline of deep feature learning to extract domain-invariant features.
- The previous DA approaches assume that the source classifier can be directly transferred to the target domain on top of the learned domain-invariant feature representations. This assumption is strong in practice, as it is not feasible to check whether the source and target classifiers can safely be shared.
- Hence, this paper focuses on a more general DA scenario where the source and target classifiers differ by a small perturbation function.
- They enable classifier adaptation by plugging several layers into deep networks to explicitly learn the residual function with reference to the target classifier. Through this, the source and target classifiers can be bridged tightly in backprop.
- They fuse the features of multiple layers with a tensor product and embed them into a reproducing kernel Hilbert space (RKHS). See these notes by my colleague for an introduction to RKHS.
Methodology
- In a UDA problem, a source domain $\mathcal{D}_s = \{(\mathbf{x}_i^s, y_i^s)\}_{i=1}^{n_s}$ of labeled examples and a target domain $\mathcal{D}_t = \{\mathbf{x}_j^t\}_{j=1}^{n_t}$ of unlabeled examples are given. The source and target domains are sampled from different distributions $p$ and $q$ respectively, with $p \neq q$.
- This paper aims to design a DNN that enables learning of transfer classifiers $y = f_s(\mathbf{x})$ and $y = f_t(\mathbf{x})$ and to close the source-target discrepancy, such that the expected target risk $R_t(f_t) = \Pr_{(\mathbf{x}, y) \sim q}\left[f_t(\mathbf{x}) \neq y\right]$ can be bounded using the source domain labeled data.
- The distribution discrepancy may give rise to mismatches in both features and classifiers, i.e. $p(\mathbf{x}) \neq q(\mathbf{x})$ and $f_s(\mathbf{x}) \neq f_t(\mathbf{x})$. Both mismatches should be fixed by joint adaptation of features and classifiers to enable effective DA.
- Classifier adaptation is more difficult than feature adaptation since it directly involves labels, which are unavailable in the target domain.
Feature Adaptation
- Deep features in standard CNNs must eventually transition from general to specific along the network, and the transferability of features and classifiers decreases as the cross-domain discrepancy increases (Yosinski et al. 2014).
- Here, they perform feature adaptation by matching the feature distributions of multiple layers $\ell \in \mathcal{L}$.
- They reduce feature dimensions by adding a bottleneck layer on top of the last feature layer of the CNN, and then fine-tune the CNN on the source labeled examples such that the feature distributions of the source and target are made similar under the new feature representations in the multiple layers $\ell \in \mathcal{L}$.
- They propose the tensor product between features of multiple layers to perform lossless multi-layer feature fusion, i.e. $\mathbf{z}_i^s \triangleq \otimes_{\ell \in \mathcal{L}} \mathbf{x}_i^{s\ell}$ and $\mathbf{z}_j^t \triangleq \otimes_{\ell \in \mathcal{L}} \mathbf{x}_j^{t\ell}$. Then, they perform feature adaptation by minimizing the Maximum Mean Discrepancy (MMD) (Gretton et al. 2012) between the source and target domains over the fused features (called tensor MMD) as
$$D_{\mathcal{L}}(\mathcal{D}_s, \mathcal{D}_t) = \sum_{i=1}^{n_s}\sum_{j=1}^{n_s} \frac{k(\mathbf{z}_i^s, \mathbf{z}_j^s)}{n_s^2} + \sum_{i=1}^{n_t}\sum_{j=1}^{n_t} \frac{k(\mathbf{z}_i^t, \mathbf{z}_j^t)}{n_t^2} - 2\sum_{i=1}^{n_s}\sum_{j=1}^{n_t} \frac{k(\mathbf{z}_i^s, \mathbf{z}_j^t)}{n_s n_t}$$
- Here, the characteristic kernel $k(\mathbf{z}, \mathbf{z}')$ is the Gaussian kernel function defined on the vectorizations of tensors $\mathbf{z}$ and $\mathbf{z}'$, i.e. $k(\mathbf{z}, \mathbf{z}') = e^{-\lVert \mathrm{vec}(\mathbf{z}) - \mathrm{vec}(\mathbf{z}') \rVert^2 / b}$, with bandwidth parameter $b$. A minimal code sketch of this fusion and MMD estimate is given below.
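Below is a minimal PyTorch sketch (my own illustration, not the authors' released Caffe code) of the tensor-product fusion of two layers' features and the resulting Gaussian-kernel tensor MMD for a mini-batch. The function names, feature dimensions, and fixed bandwidth are illustrative assumptions, and the estimator shown is the simple quadratic-time one rather than the linear-time algorithm referenced later.

```python
import torch


def tensor_fusion(feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
    """Fuse two per-example feature vectors by an outer (tensor) product.

    feat_a: (batch, d_a), feat_b: (batch, d_b) -> vec(z): (batch, d_a * d_b)
    """
    fused = torch.einsum("bi,bj->bij", feat_a, feat_b)  # z = x^{l1} (outer) x^{l2}
    return fused.reshape(fused.size(0), -1)             # vectorize the fused tensor


def gaussian_kernel(x: torch.Tensor, y: torch.Tensor, bandwidth: float) -> torch.Tensor:
    """k(z, z') = exp(-||z - z'||^2 / b), computed for all row pairs."""
    sq_dists = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dists / bandwidth)


def tensor_mmd(src_feats, tgt_feats, bandwidth: float = 1.0) -> torch.Tensor:
    """Quadratic-time (biased) MMD estimate between fused source/target features.

    src_feats / tgt_feats: pairs of per-layer activations, e.g. (bottleneck, classifier).
    """
    zs = tensor_fusion(*src_feats)
    zt = tensor_fusion(*tgt_feats)
    k_ss = gaussian_kernel(zs, zs, bandwidth).mean()   # (1/n_s^2) sum k(z_i^s, z_j^s)
    k_tt = gaussian_kernel(zt, zt, bandwidth).mean()   # (1/n_t^2) sum k(z_i^t, z_j^t)
    k_st = gaussian_kernel(zs, zt, bandwidth).mean()   # (1/(n_s n_t)) sum k(z_i^s, z_j^t)
    return k_ss + k_tt - 2.0 * k_st


# toy usage with random activations standing in for a bottleneck and a classifier layer
src = (torch.randn(32, 256), torch.randn(32, 31))
tgt = (torch.randn(32, 256), torch.randn(32, 31))
print(tensor_mmd(src, tgt).item())
```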
Classifier Adaptation
- Although the source and target classifiers are different, $f_s(\mathbf{x}) \neq f_t(\mathbf{x})$, they should be related to ensure the feasibility of DA. It is reasonable to assume that they differ only by a small perturbation function $\Delta f(\mathbf{x})$.
- Other methods used labeled data from the target domain to learn the perturbation function $\Delta f(\mathbf{x})$ (which is a function of the input $\mathbf{x}$). However, this is not possible in UDA.
- If multiple nonlinear layers can asymptotically approximate complicated functions, then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, $H(\mathbf{x}) - \mathbf{x}$.
- Rather than expecting stacked layers to approximate $H(\mathbf{x})$, they explicitly let these layers approximate a residual function $F(\mathbf{x}) \triangleq H(\mathbf{x}) - \mathbf{x}$.
- While it is unlikely that identity mappings are optimal, it should be easier to find the perturbations with reference to an identity mapping than to learn the function anew.
- This is inspired by He et al. 2016, which bridges the input and output of residual layers with a shortcut connection (identity mapping) such that $H(\mathbf{x}) = F(\mathbf{x}) + \mathbf{x}$, which eases the learning of the residual function $F(\mathbf{x})$ (similar to the perturbation function $\Delta f(\mathbf{x})$).
- They reformulate the residual block to bridge the source and target classifiers, $f_S(\mathbf{x})$ and $f_T(\mathbf{x})$, by letting $\mathbf{x} \triangleq f_T(\mathbf{x})$, $H(\mathbf{x}) \triangleq f_S(\mathbf{x})$, and $F(\mathbf{x}) \triangleq \Delta f(\mathbf{x})$.
- Note that $f_S(\mathbf{x})$ is the output of the elementwise addition operation while $f_T(\mathbf{x})$ is the output of the $fc$ layer, and both are taken before the softmax activation $\sigma$: $f_s(\mathbf{x}) = \sigma(f_S(\mathbf{x}))$, $f_t(\mathbf{x}) = \sigma(f_T(\mathbf{x}))$. See the architecture diagram in the paper.
- They connect the source and target classifiers using the residual block as
$$f_S(\mathbf{x}) = f_T(\mathbf{x}) + \Delta f(\mathbf{x})$$
- Note that they use the outputs before softmax to ensure that the final classifiers $f_s(\mathbf{x})$ and $f_t(\mathbf{x})$ will output probabilities.
- They set the source classifier $f_S(\mathbf{x})$ as the output of the residual block so that it can be trained directly on the source labeled data. Setting $f_T(\mathbf{x})$ as the output is not possible, as target labeled data is not available, and so it cannot be learned using standard backprop.
- This ensures valid classifiers and, more importantly, makes the perturbation function $\Delta f(\mathbf{x})$ dependent on both the source classifier $f_S(\mathbf{x})$ (due to the backprop pipeline) and the target classifier $f_T(\mathbf{x})$ (due to the functional dependency). A minimal code sketch of this block is given at the end of this section.
- Although classifier adaptation is cast into the residual learning framework, it tends to make the target classifier $f_T(\mathbf{x})$ not deviate much from the source classifier $f_S(\mathbf{x})$, but cannot guarantee that $f_T(\mathbf{x})$ will fit the target-specific structures well.
- So, they use entropy minimization (Grandvalet and Bengio, 2004), which encourages low-density separation between classes (i.e. reduces overlap between classes) by minimizing the entropy of the class-conditional distribution $f_t^c(\mathbf{x}_j^t) = p(y_j^t = c \mid \mathbf{x}_j^t; f_t)$ on the target domain data as
$$\min \frac{1}{n_t} \sum_{j=1}^{n_t} H\left(f_t(\mathbf{x}_j^t)\right)$$
- Here, $H(\cdot)$ is the entropy function of the class-conditional distribution, defined as $H(f_t(\mathbf{x}_j^t)) = -\sum_{c=1}^{C} f_t^c(\mathbf{x}_j^t) \log f_t^c(\mathbf{x}_j^t)$, where $C$ is the number of classes and $f_t^c(\mathbf{x}_j^t)$ is the probability of predicting $\mathbf{x}_j^t$ to class $c$.
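As a rough illustration of the classifier adaptation described above, here is a minimal PyTorch sketch of the residual classifier block, where an fc layer produces $f_T(\mathbf{x})$, two small fc layers model $\Delta f$, and the elementwise sum gives $f_S(\mathbf{x})$ before softmax, together with the entropy penalty on target predictions. The layer sizes, names, and the two-layer residual design are assumptions for illustration rather than the exact released architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualClassifier(nn.Module):
    """Residual block bridging the source and target classifiers (pre-softmax)."""

    def __init__(self, feat_dim: int = 256, num_classes: int = 31):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_classes)   # produces f_T(x)
        self.residual = nn.Sequential(                # perturbation Δf applied to f_T(x)
            nn.Linear(num_classes, num_classes),
            nn.ReLU(),
            nn.Linear(num_classes, num_classes),
        )

    def forward(self, features: torch.Tensor):
        f_t = self.fc(features)            # target classifier output (before softmax)
        f_s = f_t + self.residual(f_t)     # source classifier = identity shortcut + Δf
        return f_s, f_t


def target_entropy(f_t_logits: torch.Tensor) -> torch.Tensor:
    """Mean entropy H(f_t(x)) of the target class-conditional distribution."""
    probs = F.softmax(f_t_logits, dim=1)                        # f_t(x) = softmax(f_T(x))
    return -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()


# toy usage: source logits get the residual correction, target logits get the entropy penalty
block = ResidualClassifier()
f_s_src, _ = block(torch.randn(8, 256))
_, f_t_tgt = block(torch.randn(8, 256))
print(f_s_src.shape, target_entropy(f_t_tgt).item())
```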
Overall Training Procedure
- The overall objective combines the source classification loss, the entropy penalty, and the tensor MMD penalty:
$$\min_{f_S = f_T + \Delta f} \frac{1}{n_s} \sum_{i=1}^{n_s} L\left(f_s(\mathbf{x}_i^s), y_i^s\right) + \frac{\gamma}{n_t} \sum_{j=1}^{n_t} H\left(f_t(\mathbf{x}_j^t)\right) + \lambda\, D_{\mathcal{L}}(\mathcal{D}_s, \mathcal{D}_t)$$
where $L(\cdot, \cdot)$ is the cross-entropy loss. Here, $\lambda$ and $\gamma$ are the tradeoff parameters for the tensor MMD penalty and the entropy penalty respectively. A sketch of this combined loss follows at the end of this section.
- Note that optimizing the tensor MMD penalty requires a carefully designed algorithm to establish linear-time training (detailed in Long et al. 2015).
- They also use bilinear pooling (Lin et al. 2015) to reduce the dimensions of the fusion features in the tensor MMD.
- Caffe code is available, but PyTorch code has not been released as of 2018.
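To tie the pieces together, here is a hedged sketch of how the overall objective could be assembled, reusing the hypothetical `ResidualClassifier`, `target_entropy`, and `tensor_mmd` sketches above. The choice to fuse the bottleneck features with the classifier outputs for the MMD term and the default values of `lam` and `gamma` are illustrative assumptions, not the paper's tuned settings.

```python
import torch.nn.functional as F


def overall_loss(block, src_bottleneck, src_labels, tgt_bottleneck, lam=0.3, gamma=0.3):
    """Overall loss: source cross-entropy + gamma * entropy penalty + lam * tensor MMD."""
    f_s_src, f_t_src = block(src_bottleneck)   # source mini-batch through the residual block
    f_s_tgt, f_t_tgt = block(tgt_bottleneck)   # target mini-batch through the residual block

    cls_loss = F.cross_entropy(f_s_src, src_labels)      # (1/n_s) sum L(f_s(x_i^s), y_i^s)
    ent_loss = target_entropy(f_t_tgt)                   # (1/n_t) sum H(f_t(x_j^t))
    mmd_loss = tensor_mmd((src_bottleneck, f_t_src),     # D_L over fused bottleneck and
                          (tgt_bottleneck, f_t_tgt))     # classifier-layer features

    return cls_loss + gamma * ent_loss + lam * mmd_loss
```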
Conclusion
- This paper presented a UDA approach which enables end-to-end learning of adaptive classifiers and transferable features.
- Similar to prior DA techniques, feature adaptation is achieved by matching the distribution of features across domains.
- Unlike prior work, this also supports classifier adaptation, implemented through a residual transfer module that bridges the two classifiers.