# Domain Balancing: Face Recognition on Long-Tailed Domains

Mar 30, 2020

[Domain Balancing: Face Recognition on Long-Tailed Domains](https://arxiv.org/abs/2003.13791)

The domain distribution may be imbalanced between training and testing. The paper highlights this domain balancing problem and argues that it is distinct from class imbalance. While in classification resampling can be applied based on class probabilities, nothing similar can be done for domains, simply because there are no domain labels. The intuition is to approximate domain frequency with a heuristic and reweight the loss by it: domain frequency is taken to be inversely proportional to inter-class compactness.

## Method

![](https://i.imgur.com/r1IR0aW.png)

The paper proposes a Domain Frequency Indicator (DFI). It is computed from the Inter-Class Compactness

$$IC(w)=\log \sum_{k \in \text{top}_K(w)}\exp(s\cos(w, w_k))$$

Then the Domain Frequency Indicator is

$$DFI(w)=\frac{\varepsilon}{IC(w)}$$

Loss reweighting (CosFace case):

$$T(\cos \theta_{y_i}) = \cos(\theta_{y_i}) - DFI(w_{y_i})\, m$$

The approach is quite general: DFI may be used within other losses like ArcFace, or even combined with CurricularFace.

## Architecture

During both training and evaluation, feature vectors can be corrected to be more uniform:

![](https://i.imgur.com/4vV7Gtd.png)

An additional module is needed to utilize DFI: $f(x)$ is a gate function that estimates DFI. During training it is learned and applied (without teacher forcing); at evaluation time it is applied as-is.

## Experiments

The experiments are convincing enough, but the combination with other losses remains unclear. For the CosFace loss the improvement is uniform, though there is a notable sacrifice in performance on prevalent classes compared to the baseline. How should $\lambda$ for the RBM loss be balanced during training, and how does it affect results? Cross-validating $\lambda$ may be prohibitively expensive; multitask techniques might be needed.
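The DFI computation above can be sketched numerically. Below is a minimal NumPy sketch (not the paper's implementation): `interclass_compactness` implements the log-sum-exp over the $K$ most similar class weight vectors, and the result is inverted into a DFI that scales the CosFace margin. The values of `s`, `K`, `eps`, and `m` are assumed hyperparameters chosen here for illustration.

```python
import numpy as np

def interclass_compactness(W, s=64.0, K=10):
    """IC(w) = log sum_{k in top_K(w)} exp(s * cos(w, w_k)).

    W: (C, d) array of L2-normalized class weight vectors.
    Returns a (C,) vector with one compactness value per class.
    """
    cos = W @ W.T                         # pairwise cosine similarities
    np.fill_diagonal(cos, -np.inf)        # a class is not its own neighbor
    topk = np.sort(cos, axis=1)[:, -K:]   # K most similar classes per row
    z = s * topk
    zmax = z.max(axis=1, keepdims=True)   # numerically stable log-sum-exp
    return (zmax + np.log(np.exp(z - zmax).sum(axis=1, keepdims=True))).ravel()

def domain_frequency_indicator(W, eps=1.0, s=64.0, K=10):
    """DFI(w) = eps / IC(w): sparse domains (low compactness) get a larger DFI."""
    return eps / interclass_compactness(W, s=s, K=K)

def cosface_target_logit(cos_theta_y, dfi_y, m=0.35):
    """Adaptive-margin CosFace target logit: cos(theta_y) - DFI(w_y) * m."""
    return cos_theta_y - dfi_y * m
```

Classes whose weight vectors sit in a crowded region of the hypersphere get a high $IC$, hence a small DFI and a small effective margin; isolated (rare-domain) classes get a larger margin, which is the intended reweighting.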