# Transformers: Attention is all you need!

_by Florian Lemarchand (IETR - Vaader) - 2021.01.21_

<!--[Zoom](https://zoom.us/j/93796302731?pwd=ODZMQTJMMFdNNndYREU5bjZtSEhyUT09)-->

###### tags: `VAADER` `Reading Group`

## Abstract

<div style="text-align: justify">
While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. It has been shown that this reliance on CNNs is not necessary and that a pure transformer applied directly to sequences of image patches can perform very well on multiple image tasks.
</div>

### Slides

{%pdf https://florianlemarchand.github.io/ressources/pdfs/VAADER_Reading_Group/2021-21-01-Lemarchand-Transformers.pdf %}

## Related Material

* [Transformers in Vision: A Survey](https://arxiv.org/pdf/2101.01169.pdf)
* [Pre-Trained Image Processing Transformer](https://arxiv.org/pdf/2012.00364.pdf)
* [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/pdf/2010.11929.pdf)

## Miscellaneous

* Implementation of a transformer using PyTorch and PyTorch Lightning (a minimal patch-embedding sketch follows below):
  * https://colab.research.google.com/drive/1swXWW5sOLW8zSZBaQBYcGQkQ_Bje_bmI
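
To illustrate the "sequences of image patches" idea from the abstract, here is a minimal ViT-style sketch in plain PyTorch (not the code from the Colab notebook above). All hyperparameters (patch size, embedding dimension, depth, number of heads, class count) are illustrative choices, not the configuration used in the papers; it simply shows patches being turned into tokens, prepended with a `[CLS]` token, and fed to a standard transformer encoder.

```python
import torch
import torch.nn as nn

class MiniViT(nn.Module):
    """Minimal ViT-style classifier: image -> patch tokens -> transformer encoder -> logits."""

    def __init__(self, image_size=224, patch_size=16, in_channels=3,
                 embed_dim=192, depth=4, num_heads=4, num_classes=1000):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Patch embedding: a strided convolution cuts the image into non-overlapping
        # patches and projects each patch to a token of dimension embed_dim.
        self.patch_embed = nn.Conv2d(in_channels, embed_dim,
                                     kernel_size=patch_size, stride=patch_size)
        # Learnable [CLS] token and position embeddings (one per patch, plus one for [CLS]).
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, embed_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads,
            dim_feedforward=4 * embed_dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.norm = nn.LayerNorm(embed_dim)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        # x: (B, C, H, W) -> (B, embed_dim, H/ps, W/ps) -> (B, num_patches, embed_dim)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)
        cls = self.cls_token.expand(x.shape[0], -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        tokens = self.encoder(tokens)
        # Classify from the [CLS] token representation.
        return self.head(self.norm(tokens[:, 0]))

if __name__ == "__main__":
    # Toy usage example with small, made-up sizes.
    model = MiniViT(image_size=32, patch_size=8, embed_dim=64,
                    depth=2, num_heads=4, num_classes=10)
    logits = model(torch.randn(2, 3, 32, 32))
    print(logits.shape)  # torch.Size([2, 10])
```

The key point of the sketch is that nothing convolutional remains beyond the patch projection: once the image is tokenized, the rest is the same encoder stack used for text, which is exactly the argument made in the ViT paper listed under Related Material.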