# UDA papers reasoning
Most unsupervised domain adaptation (UDA) work is motivated by transfer learning. The common reasons are the cost of manual data labeling and the difficulty of data collection. Naturally, UDA research has been applied to computer vision (CV) tasks; however, the same motivations hold in the time series field.
<!-- Advanced:
We solved the task of activity counting. It would be perfect if we could find particular data in big datasets (like finding squats in SHL). -->
## Pseudo labeling
Pseudo labeling is one of the first techniques suggested for UDA. It is easy to port pseudo labeling from images (2D) to time series (1D).
Applied to time series, we can transfer a model trained on the SHL dataset to another (custom) dataset or to a subset of SHL itself. Recall that SHL includes data collected from different body positions, i.e., different domains.
This is quite simple work, but it gives a head start for tackling UDA applied to time series.
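A minimal sketch of the pseudo-label selection step, assuming a source-trained model already produces softmax probabilities for unlabeled target windows (the threshold value and array shapes are illustrative, not from any specific paper):

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Keep only target windows whose max softmax probability exceeds
    the threshold; return their indices and hard pseudo labels."""
    confidence = probs.max(axis=1)
    keep = confidence >= threshold
    return np.where(keep)[0], probs[keep].argmax(axis=1)

# Toy softmax outputs for 4 target windows over 3 activity classes.
probs = np.array([
    [0.95, 0.03, 0.02],   # confident -> kept, pseudo label 0
    [0.40, 0.35, 0.25],   # uncertain -> dropped
    [0.05, 0.92, 0.03],   # confident -> kept, pseudo label 1
    [0.30, 0.30, 0.40],   # uncertain -> dropped
])
idx, labels = select_pseudo_labels(probs)
print(idx, labels)  # [0 2] [0 1]
```

The selected windows would then be mixed into the training set and the model retrained, iterating until pseudo labels stabilize.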
## Open set domain adaptation
Time series classification is not a broad field. Many task-specific datasets share common labels, but the datasets known to me differ in their label sets. Besides the human activity recognition (HAR) datasets known to me (SHL, UCI), while walking through the [Google database of datasets](https://datasetsearch.research.google.com/), I found new HAR datasets released last year with an impressive amount of data:
1) https://data.mendeley.com/datasets/45f952y38r/2
2) https://zenodo.org/record/841301
3) https://www.kaggle.com/malekzadeh/motionsense-dataset
We can use open set domain adaptation techniques to research UDA prospects in time series without excluding unmatched labels.
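In the open set setting, the target data may contain activities absent from the source label set, so the classifier must be able to reject them instead of forcing a known label. A common rejection heuristic, sketched here with illustrative numbers (this is a generic confidence-based scheme, not the mechanism of any specific open set DA paper):

```python
import numpy as np

UNKNOWN = -1  # marker for target windows outside the source label set

def open_set_predict(probs, threshold=0.5):
    """Assign each target window its argmax class, or the UNKNOWN
    label when no known class is confident enough."""
    pred = probs.argmax(axis=1)
    pred[probs.max(axis=1) < threshold] = UNKNOWN
    return pred

# Toy softmax outputs over 3 known activity classes.
probs = np.array([
    [0.80, 0.15, 0.05],  # confidently class 0
    [0.34, 0.33, 0.33],  # no confident known class -> unknown
    [0.10, 0.85, 0.05],  # confidently class 1
])
print(open_set_predict(probs))  # [ 0 -1  1]
```

Windows rejected as unknown can simply be excluded from adaptation, which is exactly what lets us compare datasets with partially overlapping label sets.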
## Maximum classifier discrepancy
It is one more simple and representative work that we can apply to time series (TS) to understand whether TS is as suitable for UDA as CV.
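The core quantity in this method is the discrepancy between two task classifiers evaluated on the same target batch, which the paper measures as the mean L1 distance between their softmax outputs. A minimal sketch with toy numbers:

```python
import numpy as np

def discrepancy(p1, p2):
    """Mean L1 distance between the softmax outputs of two
    classifiers on the same target batch (the MCD discrepancy)."""
    return np.mean(np.abs(p1 - p2))

# Two classifiers agreeing on window 0 and disagreeing on window 1.
p1 = np.array([[0.9, 0.1], [0.8, 0.2]])
p2 = np.array([[0.9, 0.1], [0.2, 0.8]])
print(round(discrepancy(p1, p2), 2))  # 0.3
```

Training alternates between maximizing this quantity with respect to the classifiers and minimizing it with respect to the shared feature extractor, pushing target features away from class boundaries; nothing in the loss itself depends on the input being 2D, which is why porting it to 1D time series looks straightforward.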
## Why not the selected paper (Light-weight calibrator: a separable component for unsupervised domain adaptation)

The selected paper is interesting because it solves the domain adaptation (DA) task by learning a model that modifies source and target domain images to make them indistinguishable. In fact, it works in a manner similar to adversarial attacks (which is highlighted in the paper). Implementing this work is promising because the authors themselves have not published their code. The disadvantage of this work is the lack of architecture exploration.
This work is sound in the CV domain; however, its use with TS is doubtful. There are simpler and more effective classical techniques for preprocessing sensor signals before passing them to the model. Training a separate unsupervised model that adds values to the original signal sounds like adding extra noise to the signal.
**Concluding**, I highlight that domain adaptation is not proven in the TS area. It may work just as well as it may not. In a short time period, we can try to apply publicly available UDA works to TS. The point is that, to make this attempt, we should review the paper selection constraints.