# Helmholtz AI FFT seminar series #2: <br> Eric Upschulte
###### tags: `HelmholtzAI`,`FFT`
[ToC]
## :memo: Seminar details
**6 May 2021, 11:00 - 12:00**
- Speaker: **Eric Upschulte**, Helmholtz AI research group - PhD student @ Forschungszentrum Jülich (FZJ)
- Title: **Recent paradigms for deep generative modeling**
- Chair: **Markus Götz**, KIT (Head of Helmholtz AI Consultant Team)
## :memo: Notes
:::info
:bulb: Write down notes and/or interesting information from the seminar
:::
- Generative adversarial networks (GANs)
- field with increasing interest, exponential growth since 2014/2015
- Vanilla framework
- noise into generator, produces a generated example
- generated example and real data are fed into a discriminator
- discriminator is a classifier attempting to distinguish fake from real data
- joint optimization to improve the fake generation performance
- major challenge: mode collapse
- generator focuses solely on a small subset of the training distribution
- one solution: unrolled GANs
- GAN considerations
- discriminator can behave strangely, particularly if not calibrated
- generator only learns indirectly through discriminator
- consequence: locally sensible, but globally nonsensical artifacts possible
- Style-based GANs (2020)
- architectural improvement over traditional GANs
- different (hierarchical) noise input in specialized synthesis network
- allows generation of personalized local features
- probability map prediction avoids 'existence check' discriminators
- discriminator may focus on a small subset of output
- if it can consistently decide local information is fake, overall generator does not improve
- counter strategy: output a whole probability map for each generated pixel
- normalizing flow models
- Allow smooth transitions between generated examples
- input x is fed into the flow network f to obtain latent representation z
- z is fed into the (mathematically invertible) inverse network f^(-1) to obtain x'
- x and x' should match
- makes the latent space z smooth/walkable
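The vanilla GAN framework above (noise into the generator, real and generated examples into the discriminator, joint optimization) can be sketched in a few lines of numpy. This is a minimal toy illustration, not the speaker's implementation: the 1-D data distribution, the single affine generator, and the logistic-regression discriminator are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy setup: real data ~ N(3, 1); generator and discriminator are affine maps.
w_g, b_g = 1.0, 0.0          # generator parameters (illustrative)
w_d, b_d = 0.5, 0.0          # discriminator parameters (illustrative)

z = rng.standard_normal(64)          # noise into the generator
x_fake = w_g * z + b_g               # generated examples
x_real = rng.normal(3.0, 1.0, 64)    # real training data

# Discriminator: classifier distinguishing real (label 1) from fake (label 0).
p_real = sigmoid(w_d * x_real + b_d)
p_fake = sigmoid(w_d * x_fake + b_d)

# Standard GAN objectives in binary cross-entropy form: the discriminator
# minimizes d_loss, the generator minimizes the non-saturating g_loss.
d_loss = -np.mean(np.log(p_real) + np.log(1.0 - p_fake))
g_loss = -np.mean(np.log(p_fake))

print(d_loss, g_loss)
```

Joint optimization alternates gradient steps on `d_loss` and `g_loss`; mode collapse corresponds to the generator placing all of `x_fake` on a small subset of the data distribution while still fooling the discriminator.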
## :question: Questions for the speaker
:::info
:bulb: Write down any questions or topics you wish to discuss during the seminar
:::
:arrow_right: Q1: Invertible means bijective?
- Vanilla flows are strictly bijective
- There are relaxed versions; a recent paper proposes embeddings of low-resolution images that are not invertible
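The strict bijectivity discussed here can be illustrated with a toy invertible flow in numpy: x is mapped forward to a latent z, and the exact inverse recovers x' = x, which is what makes the latent space walkable. The affine transform below is an assumption chosen for illustration, not a model from the talk.

```python
import numpy as np

# Toy strictly bijective flow: f(x) = a * x + b with a != 0.
a, b = 2.0, 0.5

def f(x):
    """Forward pass: data x -> latent representation z."""
    return a * x + b

def f_inv(z):
    """Exact inverse: latent z -> reconstruction x'."""
    return (z - b) / a

x = np.array([0.0, 1.0, -3.2])
z = f(x)
x_prime = f_inv(z)   # x and x' match (up to floating-point error)

# The latent space is 'walkable': interpolating two latents and inverting
# smoothly transitions between the corresponding examples.
z_mid = 0.5 * (f(-1.0) + f(1.0))
x_mid = f_inv(z_mid)
```

Relaxed, non-invertible variants (as mentioned for low-resolution embeddings) give up the exact `f_inv`, trading reconstruction guarantees for flexibility.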
:arrow_right: Q2: How are GANs used in your neuro-medical application?
- Mostly for self-supervised learning, since annotations are scarce
- Simulation is another application field
:arrow_right: Q3: How do GANs compare to other representation learners?
- GANs hard to train, but if successful usually high quality
:arrow_right: Q4: To what extent are you doing high-performance computing?
- Inference is the major use case for parallelization
- Mostly an embarrassingly parallel problem; however, there can be difficult postprocessing stages
- Own mpi4py implementations
:arrow_right: Q5: What are future directions in GAN research?
- Scaling of GANs to large image dimensions
- Both parallelism but also smarter architecture/methods will enable this
- Transformer-based generators and discriminators
## :question: Your Feedback
:::info
:bulb: Write down your feedback about the seminar
:::
### Share something that you learned or liked :+1:
- ...
### Share something that you didn’t like or would like us to improve :-1:
- ...
:::info
:pushpin: Want to learn more? ➜ [HackMD Tutorials](https://hackmd.io/c/tutorials)
:::