# NeurIPS 2022 papers
## Attention/Transformers
[Fast vision transformers with HiLo attention](https://openreview.net/forum?id=Pyd6Rh9r1OT):
- novel self-attention mechanism, HiLo, based on the insight that high frequencies in an image capture local fine details while low frequencies capture global structures, a distinction that a standard multi-head self-attention layer ignores.
- disentangles the high/low-frequency patterns in an attention layer by splitting the heads into two groups: one group encodes high frequencies via self-attention within each local window, while the other models global relationships by attending from every query position in the input feature map to the average-pooled low-frequency keys from each window (see the sketch below).
- Hi-Fi (the high-frequency branch): local 2x2 windows (or 4x4)
- [(code)](https://github.com/ziplab/LITv2)
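
A minimal PyTorch sketch of the two branches (my assumption of the layout; the official code is in the LITv2 repo above). The `window` and `alpha` (fraction of heads assigned to the low-frequency group) parameters are illustrative:

```python
# Sketch of HiLo attention: heads split into a Hi-Fi group (local window
# self-attention) and a Lo-Fi group (global attention to average-pooled
# window keys/values). `window`, `alpha`, and tensor layouts are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HiLoAttention(nn.Module):
    def __init__(self, dim, num_heads=8, window=2, alpha=0.5):
        super().__init__()
        head_dim = dim // num_heads
        self.scale = head_dim ** -0.5
        self.window = window
        self.lo_heads = int(num_heads * alpha)   # heads for low frequencies
        self.hi_heads = num_heads - self.lo_heads
        self.lo_dim = self.lo_heads * head_dim
        self.hi_dim = self.hi_heads * head_dim
        self.hi_qkv = nn.Linear(dim, 3 * self.hi_dim)
        self.hi_proj = nn.Linear(self.hi_dim, self.hi_dim)
        self.lo_q = nn.Linear(dim, self.lo_dim)
        self.lo_kv = nn.Linear(dim, 2 * self.lo_dim)
        self.lo_proj = nn.Linear(self.lo_dim, self.lo_dim)

    def hi_fi(self, x):
        # Self-attention restricted to non-overlapping window x window patches.
        B, H, W, C = x.shape
        s = self.window
        h, w = H // s, W // s
        qkv = self.hi_qkv(x).reshape(B, h, s, w, s, 3, self.hi_heads, -1)
        q, k, v = qkv.permute(5, 0, 1, 3, 6, 2, 4, 7).reshape(
            3, B, h * w, self.hi_heads, s * s, -1)
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(-1)
        out = (attn @ v).transpose(2, 3).reshape(B, h, w, s, s, self.hi_dim)
        out = out.permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, self.hi_dim)
        return self.hi_proj(out)

    def lo_fi(self, x):
        # Every query attends to keys/values average-pooled from each window.
        B, H, W, C = x.shape
        q = self.lo_q(x).reshape(B, H * W, self.lo_heads, -1).transpose(1, 2)
        pooled = F.avg_pool2d(x.permute(0, 3, 1, 2), self.window)
        pooled = pooled.permute(0, 2, 3, 1).reshape(B, -1, C)
        kv = self.lo_kv(pooled).reshape(B, -1, 2, self.lo_heads,
                                        self.lo_dim // self.lo_heads)
        k, v = kv.permute(2, 0, 3, 1, 4)
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(-1)
        out = (attn @ v).transpose(1, 2).reshape(B, H, W, self.lo_dim)
        return self.lo_proj(out)

    def forward(self, x):  # x: (B, H, W, C), H and W divisible by `window`
        return torch.cat([self.hi_fi(x), self.lo_fi(x)], dim=-1)

x = torch.randn(2, 14, 14, 64)
print(HiLoAttention(dim=64)(x).shape)  # torch.Size([2, 14, 14, 64])
```

The Lo-Fi branch is the source of the speedup: its key/value sequence shrinks by a factor of `window**2`, so its attention cost drops accordingly.
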
[Contrastive Adapters for Foundation Model Group Robustness](https://openreview.net/forum?id=uPdS_7pdA9p):
- contrastive learning framework with hard positive/negative sampling, used to train a lightweight adapter on top of frozen pretrained (foundation model) embeddings; see the sketch below.
- group robustness: reduce the performance gap across groups (worst-group accuracy), though average accuracy is not necessarily better.
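
A minimal sketch of the idea, assuming PyTorch and precomputed frozen embeddings; the adapter architecture, the SupCon-style loss, and the mining rule (nearest other-class neighbors as hard negatives, farthest same-class points as hard positives) are illustrative simplifications, not the paper's exact procedure. A single anchor is shown; in practice every batch element would serve as one:

```python
# Sketch: train a small adapter over frozen foundation-model embeddings with
# a contrastive loss on mined hard pairs. Dimensions, architecture, and the
# mining rule are assumptions, simplified from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

emb_dim = 512  # e.g. CLIP ViT-B/32 image embeddings
adapter = nn.Sequential(nn.Linear(emb_dim, 128), nn.ReLU(),
                        nn.Linear(128, emb_dim))

def contrastive_adapter_loss(emb, labels, anchor_idx, k=5, tau=0.1):
    """emb: (N, D) frozen embeddings; assumes >= k same-class samples."""
    z = F.normalize(adapter(emb), dim=-1)         # adapted, unit-norm
    a = z[anchor_idx]
    # mine pairs by cosine similarity in the *frozen* embedding space
    sim = F.cosine_similarity(emb, emb[anchor_idx].unsqueeze(0), dim=1)
    is_anchor = torch.arange(len(emb)) == anchor_idx
    same = (labels == labels[anchor_idx]) & ~is_anchor
    # hard negatives: nearest neighbors from *other* classes
    hard_neg = sim.masked_fill(same | is_anchor, -1e9).topk(k).indices
    # hard positives: farthest samples from the *same* class
    hard_pos = sim.masked_fill(~same, 1e9).topk(k, largest=False).indices
    pos = z[hard_pos] @ a / tau
    neg = z[hard_neg] @ a / tau
    # SupCon-style: pull hard positives in, push hard negatives away
    return -(pos - torch.logsumexp(torch.cat([pos, neg]), dim=0)).mean()

emb = torch.randn(64, emb_dim)                    # frozen, no grad
labels = torch.randint(0, 4, (64,))
loss = contrastive_adapter_loss(emb, labels, anchor_idx=0)
loss.backward()                                   # gradients hit the adapter only
```

Because only the adapter is trained, this stays cheap enough to run on cached embeddings without touching the foundation model.
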
[A Fast Post-Training Pruning Framework for Transformers](https://openreview.net/forum?id=0GRBKLBjJE)
[Optimizing Relevance Maps of Vision Transformers Improves Robustness](https://openreview.net/forum?id=upuYKQiyxa_)
## Satellite imagery
[Open High-Resolution Satellite Imagery: The WorldStrat Dataset – With Application to Super-Resolution](https://openreview.net/forum?id=DEigo9L8xZA)
## RegAg