# Maybe BERT is all you need?

_by Paul Peyramaure (IETR - Vaader) - 2021.02.04_

<!--[Zoom](https://us04web.zoom.us/j/74260167620?pwd=eWRPMnlZeWRDNUdiMVRvNDdLWi9OUT09)-->

###### tags: `VAADER` `Reading Group`

## Abstract

<div style="text-align: justify">
BERT, which stands for Bidirectional Encoder Representations from Transformers, was published by a Google AI team in 2018 and presented as a new state-of-the-art model for Natural Language Processing (NLP). Built on the Transformer architecture, it is designed to learn bidirectional representations by conditioning on both the left and right contexts in all of its layers. While initially introduced for NLP tasks, it has recently been used to model other tasks such as action recognition.
</div>

![](https://i.imgur.com/YqbgcrJ.jpg)

## Slides

{%pdf https://florianlemarchand.github.io/ressources/pdfs/VAADER_Reading_Group/2021-04-02-Peyramaure-BERT.pdf %}

## Related Material

* [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/pdf/1810.04805.pdf)
* [Late Temporal Modeling in 3D CNN Architectures with BERT for Action Recognition](https://arxiv.org/pdf/2008.01232.pdf)
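
## Quick Illustration

As a quick illustration of the bidirectional modelling described in the abstract, here is a minimal sketch of masked-token prediction, the pre-training task at the heart of the BERT paper. It assumes the Hugging Face `transformers` library and the public `bert-base-uncased` checkpoint, which are not part of the talk material; the point is only that BERT uses context from both sides of the `[MASK]` token to fill it in.

```python
# Minimal sketch (assumes `pip install torch transformers`): BERT fills in a
# masked word using context on BOTH sides of the [MASK] position.
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Words to the left ("The capital of France") and to the right
# ("is a beautiful city") both constrain the masked position.
text = "The capital of France, [MASK], is a beautiful city."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and take the most likely token there.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
predicted_id = logits[0, mask_index].argmax(dim=-1).item()
print(tokenizer.decode([predicted_id]))  # expected output: "paris"
```

A unidirectional (left-to-right) language model would only see "The capital of France," when predicting the masked word; BERT's encoder attends to the full sentence in every layer, which is the bidirectionality the abstract refers to.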