VAADER - Image & IA - Lecture Group

@imageia


Public team

Joined on Oct 12, 2021

  • Organization contact [name= F. Lemarchand (flemarch at insa-rennes.fr)]
    Next Meeting(s)
    🌴 The reading group is on holiday. 🌴 See you in September!
    Past Meetings
    2021
    Karol Desnos: Paper Digest: "Dynamic Neural Networks: A Survey"
  • by Meriem Bayou-Outtas (IETR-VAADER) - 2021.06.16
    Abstract
    The ability of an observer to perform a specific task on images produced by a given medical imaging system defines an objective measure of image quality. If the observer is "numerical", can deep learning methods do the job? What did we find in the literature? Some papers raise this issue and propose to approximate the Ideal Observer for detection and localization tasks.
    Related Material
    [1] Approximating the Ideal Observer and Hotelling Observer for Binary Signal Detection Tasks by Use of Supervised Learning Methods, IEEE TMI 2019
    [2] Approximating the Ideal Observer for Joint Signal Detection and Localization Tasks by Use of Supervised Learning Methods, IEEE TMI 2020
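To make the idea concrete, here is a minimal sketch (not from the papers) of the supervised-learning approach: a small CNN trained with binary cross-entropy on signal-present/signal-absent images, whose output approximates the posterior probability of signal presence, a monotonic transform of the Ideal Observer's likelihood ratio. The `ObserverNet` architecture and the toy disk-signal data are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ObserverNet(nn.Module):
    """Small CNN whose logit approximates log-odds of signal presence."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))  # logit of p(signal | x)

def make_batch(n=64, size=32, amplitude=0.5):
    # Hypothetical signal-known-exactly toy data: Gaussian background,
    # plus a faint central disk signal when the label is 1.
    x = torch.randn(n, 1, size, size)
    y = torch.randint(0, 2, (n,)).float()
    yy, xx = torch.meshgrid(torch.arange(size), torch.arange(size), indexing="ij")
    disk = ((yy - size // 2) ** 2 + (xx - size // 2) ** 2 < 16).float()
    x += amplitude * y.view(-1, 1, 1, 1) * disk
    return x, y

model = ObserverNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # binary cross-entropy on the logit
for step in range(100):
    x, y = make_batch()
    loss = loss_fn(model(x).squeeze(1), y)
    opt.zero_grad(); loss.backward(); opt.step()
```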
  • by Karol DESNOS (IETR-VAADER) - 2021.07.01 02:00 PM
    Abstract
    Dynamic neural networks are an emerging research topic in deep learning. Compared to static models, which have fixed computational graphs and parameters at the inference stage, dynamic networks can adapt their structures or parameters to different inputs, leading to notable advantages in terms of accuracy, computational efficiency, adaptiveness, etc. In this survey, we comprehensively review this rapidly developing area by dividing dynamic networks into three main categories: 1) instance-wise dynamic models that process each instance with data-dependent architectures or parameters; 2) spatial-wise dynamic networks that conduct adaptive computation with respect to different spatial locations of image data; and 3) temporal-wise dynamic models that perform adaptive inference along the temporal dimension for sequential data such as videos and texts. The important research problems of dynamic networks, e.g., architecture design, decision making scheme, optimization technique and applications, are reviewed systematically. Finally, we discuss the open problems in this field together with interesting future research directions.
    Related Material
    Dynamic Neural Networks: A Survey
    Slides
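As a concrete illustration of the survey's first category, here is a minimal early-exit sketch: an instance-wise dynamic network that skips its deeper stage whenever an early classifier is already confident. The layer sizes, the `threshold` value, and the batch-size-1 inference assumption are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitNet(nn.Module):
    def __init__(self, num_classes=10, threshold=0.9):  # threshold is illustrative
        super().__init__()
        self.threshold = threshold
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(4))
        self.exit1 = nn.Linear(16 * 4 * 4, num_classes)   # cheap early head
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(1))
        self.exit2 = nn.Linear(32, num_classes)           # full-depth head

    def forward(self, x):
        h = self.stage1(x)
        logits1 = self.exit1(h.flatten(1))
        # At inference (batch size 1 assumed), confident samples exit early
        # and skip the rest of the network; joint training of both exits
        # is omitted for brevity.
        if not self.training and F.softmax(logits1, dim=1).max() >= self.threshold:
            return logits1
        return self.exit2(self.stage2(h).flatten(1))

model = EarlyExitNet().eval()
with torch.no_grad():
    out = model(torch.randn(1, 3, 32, 32))  # per-sample adaptive depth
```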
  • by Mickaël Dardaillon (IETR - Vaader) - 2021.05.06 - 2:00 PM
    Abstract
    "The synergy between the large datasets in the cloud and the numerous computers that power it has enabled remarkable advancements in machine learning, especially in DNNs. […] That changed in 2013 when a projection in which Google users searched by voice for three minutes per day using speech recognition DNNs would double Google datacenters' computation demands."
    This presentation will introduce the concepts behind the hardware architectures used to support the current growth in machine learning, including GPUs and TPUs.
    Related Material
    A Domain-Specific Architecture for Deep Neural Networks
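A minimal sketch of the arithmetic that motivates such domain-specific designs: the TPU executes 8-bit integer multiplies feeding wide accumulators instead of floating point. The NumPy code below mimics that scheme (symmetric linear quantization, int32 accumulation, rescaling); the shapes and the quantization scheme are illustrative assumptions, not the TPU's exact pipeline.

```python
import numpy as np

def quantize(x, num_bits=8):
    # Symmetric linear quantization to signed integers.
    qmax = 2 ** (num_bits - 1) - 1
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 64)).astype(np.float32)   # activations (toy shapes)
w = rng.standard_normal((64, 8)).astype(np.float32)   # weights

qa, sa = quantize(a)
qw, sw = quantize(w)

# int8 x int8 products accumulated in int32, then rescaled to float,
# mirroring a systolic matrix unit feeding wider accumulators.
acc = qa.astype(np.int32) @ qw.astype(np.int32)
approx = acc.astype(np.float32) * (sa * sw)

print(np.max(np.abs(approx - a @ w)))  # small quantization error
```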
  • by Mouath Aouayeb (IETR - Vaader) - 2021.04.22
    Abstract
    Traveling in the time of coronavirus is difficult with the restrictions set by governments all around the world, which is why most international meetings and conferences are held online. On the other hand, deep learning has grown significantly in the past few years, especially for vision applications. Different architectures and models, from CNNs to Transformers, have been proposed. In this talk, we will not present another model; instead, we will list different techniques, layers, loss functions, and optimizers that can improve the performance of your model. An analogy between travel and deep learning is presented at the beginning.
    Slides
    {%pdf https://florianlemarchand.github.io/ressources/pdfs/VAADER_Reading_Group/2021-22-04-Aouayeb-Travel.pdf %}
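As one concrete example of such a loss-side trick (whether this particular one appeared in the talk is an assumption), here is label smoothing, which often improves generalization with a one-line change:

```python
import torch
import torch.nn as nn

logits = torch.randn(8, 10)            # hypothetical model outputs
targets = torch.randint(0, 10, (8,))   # hypothetical class labels

hard_loss = nn.CrossEntropyLoss()(logits, targets)
# PyTorch >= 1.10 supports label smoothing directly in the loss:
# a fraction of the target mass is spread over all classes.
smooth_loss = nn.CrossEntropyLoss(label_smoothing=0.1)(logits, targets)
print(hard_loss.item(), smooth_loss.item())
```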
  • by Alexandre Honorat (IETR - Vaader) - 2021.04.08
    Abstract
    CNNs are now widely used, so it is necessary to implement them efficiently. To do so, CNNs are most commonly implemented on GPUs, and to a lesser extent on FPGAs. In this talk, without going into the details, we will list some problems that arise when implementing CNN inference, especially on FPGAs. We will also link these problems to the CNN models themselves and highlight a few general recommendations extracted from the following papers.
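To fix ideas, here is the convolution written as the explicit loop nest that FPGA implementations must tile and buffer; the ordering exposes the data reuse (weights, input windows, partial sums) that drives on-chip memory design. Shapes are illustrative, and this is a reference model, not an FPGA design.

```python
import numpy as np

def conv2d_loops(x, w):
    # x: (C_in, H, W), w: (C_out, C_in, K, K); stride 1, no padding.
    c_in, h, wd = x.shape
    c_out, _, k, _ = w.shape
    out = np.zeros((c_out, h - k + 1, wd - k + 1), dtype=x.dtype)
    for co in range(c_out):               # output channels
        for oy in range(h - k + 1):       # output rows
            for ox in range(wd - k + 1):  # output cols
                acc = 0.0                 # partial sum: a register/BRAM candidate
                for ci in range(c_in):
                    for ky in range(k):
                        for kx in range(k):
                            acc += x[ci, oy + ky, ox + kx] * w[co, ci, ky, kx]
                out[co, oy, ox] = acc
    return out

x = np.random.rand(3, 8, 8).astype(np.float32)
w = np.random.rand(4, 3, 3, 3).astype(np.float32)
print(conv2d_loops(x, w).shape)  # (4, 6, 6)
```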
  • by Joseph Faye (IETR - Vaader) - 2021.03.18
    Abstract
    From ResNet [1] and Highway Networks [2] to DenseNet [3], adding more inter-layer connections besides the direct connections between adjacent layers has emerged as a popular approach to strengthen feature propagation among different layers. However, dense connections cause much redundancy, especially in the case of DenseNet. Another aspect is that, with many dense connections coming from previous layers, the role played by the mainstream module is unclear. To address these issues, the authors introduce a gating mechanism, inspired by SENet [4], to model the layer relationships in densely connected blocks.
    A Dense-Gated U-Net for Brain Lesion Segmentation
    Slides
    {%pdf https://florianlemarchand.github.io/ressources/pdfs/VAADER_Reading_Group/2021-18-03-Faye-DenseGated.pdf %}
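A minimal sketch of an SENet-style gate [4], the building block the authors adapt to densely connected blocks: global average pooling squeezes each channel to a scalar, a small bottleneck MLP produces per-channel weights, and the input features are rescaled. The `reduction` ratio is an illustrative choice.

```python
import torch
import torch.nn as nn

class SEGate(nn.Module):
    def __init__(self, channels, reduction=4):  # reduction ratio is illustrative
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                           # x: (N, C, H, W)
        s = x.mean(dim=(2, 3))                      # squeeze: global average pool
        g = self.fc(s).unsqueeze(-1).unsqueeze(-1)  # excite: per-channel gate in [0, 1]
        return x * g                                # reweight the features

x = torch.randn(2, 32, 16, 16)
print(SEGate(32)(x).shape)  # torch.Size([2, 32, 16, 16])
```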
  • by Alban Marie (IETR - Vaader) - 2021.03.04
    Abstract
    Nowadays, it is well established that ConvNets are able to achieve incredible performance on complex vision tasks such as classification, object recognition, or semantic segmentation. A common thought is that humans and ConvNets are able to solve these tasks by learning increasingly complex representations of object shapes. However, recent studies show that humans and ConvNets in fact follow very different strategies, as they are not biased towards the same information in images. To this end, the authors propose a stylized version of ImageNet, making it easier for ConvNets to learn the image representations used by humans.
    Slides
    {%pdf https://florianlemarchand.github.io/ressources/pdfs/VAADER_Reading_Group/2021-04-03-Marie-StylizedImageNet.pdf %}
    Presentation and Discussions
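For reference, here is the core AdaIN operation behind the style-transfer method commonly used to build Stylized-ImageNet (that this exact recipe matches the talk is an assumption): content features keep their spatial layout but adopt the style features' channel statistics. The full encoder/decoder stylization pipeline is omitted.

```python
import torch

def adain(content, style, eps=1e-5):
    # content, style: (N, C, H, W) feature maps.
    # Normalize content per channel, then re-apply the style's statistics.
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True)
    return s_std * (content - c_mean) / c_std + s_mean

content = torch.randn(1, 64, 32, 32)
style = torch.randn(1, 64, 32, 32)
print(adain(content, style).shape)  # layout preserved, statistics swapped
```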
  • by Nicolas Beuve (IETR - Vaader) - 2021.02.18
    Abstract
    Image-to-image translation is a field aiming at transposing images from one representation to another, like generating an aerial map of a region based on a photograph. Results in this field have greatly improved since the arrival of GAN models in 2014. GANs (Generative Adversarial Nets) are neural networks specialized in sample generation. When applied to an image, […]
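A minimal GAN sketch on toy 1-D data, illustrating the adversarial training principle (image-to-image models such as pix2pix additionally condition the generator on an input image): the generator maps noise to samples, the discriminator scores real versus generated, and the two are trained against each other. All sizes and the toy data distribution are illustrative.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, 1) * 0.5 + 2.0      # toy "real" distribution
    fake = G(torch.randn(64, 8))               # generated samples from noise
    # Discriminator step: push real -> 1, fake -> 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator step: fool the discriminator (fake -> 1).
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(256, 8)).mean().item())  # drifts toward the real mean, 2.0
```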
  • by Paul Peyramaure (IETR - Vaader) - 2021.02.04
    Abstract
    BERT, which stands for Bidirectional Encoder Representations from Transformers, was published by a Google AI team in 2018. It was presented as a new cutting-edge model for Natural Language Processing (NLP). Based on the Transformer architecture, it is designed to learn bidirectional representations by considering both the left and right contexts in all its layers. While initially introduced for NLP tasks, it has recently been used to model other tasks such as action recognition.
    Slides
    {%pdf https://florianlemarchand.github.io/ressources/pdfs/VAADER_Reading_Group/2021-04-02-Peyramaure-BERT.pdf %}
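A minimal sketch of what "bidirectional" means here: a Transformer encoder layer applied without a causal mask lets every token attend to both its left and right context, unlike GPT-style decoders. Dimensions are illustrative, and BERT's masked-token pre-training objective is omitted.

```python
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
tokens = torch.randn(1, 10, 64)   # hypothetical embedded token sequence

# No mask: each position attends to the full left AND right context,
# which is what makes the learned representations bidirectional.
bidirectional = layer(tokens)

# With a causal mask, each position sees only its left context,
# as in autoregressive (GPT-style) decoders.
causal_mask = torch.triu(torch.full((10, 10), float("-inf")), diagonal=1)
causal = layer(tokens, src_mask=causal_mask)
print(bidirectional.shape, causal.shape)
```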
  • by Florian Lemarchand (IETR - Vaader) - 2021.01.21
    Abstract
    While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. It has been shown that this reliance on CNNs is not necessary and that a pure transformer applied directly to sequences of image patches can perform very well on multiple image tasks.
    Slides
    {%pdf https://florianlemarchand.github.io/ressources/pdfs/VAADER_Reading_Group/2021-21-01-Lemarchand-Transformers.pdf %}
    Related Material:
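A minimal sketch of the ViT front end described above: the image is split into fixed-size patches, each patch is linearly projected (implemented as a strided convolution), and a class token plus position embeddings complete the sequence fed to a standard Transformer encoder. All sizes are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    def __init__(self, img_size=32, patch=8, dim=64):  # toy sizes
        super().__init__()
        self.num_patches = (img_size // patch) ** 2
        # A strided conv implements "split into patches + linear projection".
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))                    # class token
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches + 1, dim))  # position embeddings

    def forward(self, x):                            # x: (N, 3, H, W)
        p = self.proj(x).flatten(2).transpose(1, 2)  # (N, num_patches, dim)
        cls = self.cls.expand(x.shape[0], -1, -1)
        return torch.cat([cls, p], dim=1) + self.pos

tokens = PatchEmbed()(torch.randn(2, 3, 32, 32))
print(tokens.shape)  # torch.Size([2, 17, 64]) -> ready for a Transformer encoder
```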