---
tags: object
---
# Geometric Deep Learning
## Papers
- [Geometric Deep Learning](https://arxiv.org/pdf/2104.13478.pdf) - the GDL proto-book (the "bible")
- [website](https://geometricdeeplearning.com/)
- [lecture recordings](https://geometricdeeplearning.com/lectures/)
- [Thesis: E(2)-Equivariant Steerable CNNs](https://gabri95.github.io/Thesis/thesis.pdf)
## Resources for group theory
- [Visual Group Theory Lecture](http://www.math.clemson.edu/~macaule/classes/m19_math4120/)
- [Visual tools](https://nathancarter.github.io/group-explorer/GroupExplorer.html)
- [Introduction to Group Theory with problems and solutions](https://mathdoctorbob.org/UReddit.html)
## People
- [Taco Cohen](https://tacocohen.wordpress.com/)
	- papers on medical imaging
- [Rotation Equivariant CNNs for Digital Pathology](https://arxiv.org/pdf/1806.03962.pdf)
- [Pulmonary Nodule Detection in CT Scans with Equivariant CNNs](https://marysia.nl/assets/MIA.pdf)
- [3D G-CNNs for Pulmonary Nodule Detection](https://arxiv.org/pdf/1804.04656.pdf)
- [Max Welling](https://staff.fnwi.uva.nl/m.welling/)
- Geometric deep learning
	- Applications in biology, chemistry, physics, and medicine.
### talk
Geometry is the study of invariants or symmetries: the properties that remain unchanged under some class of transformations (a group).

This viewpoint provides a constructive procedure for building neural network architectures in a principled way.
## Lecture 5: Graphs and Sets I
### Permutation invariance and equivariance over sets
Invariant: $f(\textbf{PX})=f(\textbf{X})$ for any permutation matrix $\textbf{P}$.
A general invariant form: $f(\textbf{X})=\phi\left(\bigoplus_{i\in V}\psi(\textbf{x}_i)\right)$, where $\oplus$ is a permutation-invariant aggregator such as sum, max, or average.
Equivariant: $\textbf{F}(\textbf{PX})=\textbf{P}\textbf{F}(\textbf{X})$, where $\textbf{F}$ takes a feature matrix as input and outputs a matrix of node-wise features.
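A minimal sketch (my own NumPy toy example, with hypothetical choices for $\psi$ and $\phi$) verifying the invariance $f(\textbf{PX})=f(\textbf{X})$ and the equivariance of the node-wise map:

```python
# Toy Deep-Sets-style function f(X) = phi(sum_i psi(x_i)), checked against
# a random permutation matrix P. psi/phi are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def psi(x):                      # per-element feature map (hypothetical choice)
    return np.tanh(x)

def phi(z):                      # readout applied to the aggregated vector
    return z ** 2

def f(X):                        # invariant: sum-aggregate over the set dimension
    return phi(psi(X).sum(axis=0))

X = rng.normal(size=(5, 3))      # a set of 5 elements with 3 features each
P = np.eye(5)[rng.permutation(5)]  # random permutation matrix

assert np.allclose(f(P @ X), f(X))          # invariance: f(PX) = f(X)
assert np.allclose(psi(P @ X), P @ psi(X))  # the node-wise map is equivariant
```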
### Permutation invariance and equivariance over graphs
Invariant: $f(\textbf{PX}, \textbf{PAP}^T)=f(\textbf{X}, \textbf{A})$.
Equivariant: $\textbf{F}(\textbf{PX}, \textbf{PAP}^T)=\textbf{P}\textbf{F}(\textbf{X}, \textbf{A})$, where $\textbf{F}$ now takes both the feature matrix $\textbf{X}$ and the adjacency matrix $\textbf{A}$ as input and outputs a matrix of node-wise features.
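A minimal sketch along the same lines, assuming a simple sum-aggregation layer $\textbf{F}(\textbf{X},\textbf{A})=\tanh(\textbf{AX}+\textbf{X})$ (my own choice, not from the lecture), checking both properties numerically:

```python
# Check F(PX, PAP^T) = P F(X, A) (equivariance) and that a pooled readout
# f = sum over nodes of F is invariant, for a toy sum-aggregation GNN layer.
import numpy as np

rng = np.random.default_rng(1)

def F(X, A):
    # one message-passing step: each node sums its neighbours' features,
    # followed by an element-wise nonlinearity shared across nodes
    return np.tanh(A @ X + X)

def f(X, A):
    # permutation-invariant readout: pool the equivariant node features
    return F(X, A).sum(axis=0)

n = 6
X = rng.normal(size=(n, 4))
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.maximum(A, A.T)                     # symmetric adjacency matrix
P = np.eye(n)[rng.permutation(n)]          # permutation matrix

assert np.allclose(F(P @ X, P @ A @ P.T), P @ F(X, A))   # equivariance
assert np.allclose(f(P @ X, P @ A @ P.T), f(X, A))       # invariance
```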
The three GNN flavours form a hierarchy:
$\text{convolutional} \subseteq \text{attentional} \subseteq \text{message-passing}$.
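A brief sketch of why the hierarchy holds, written in the proto-book's node-update form (reproduced here as a reasoning aid):
- Convolutional: $\textbf{h}_i=\phi\left(\textbf{x}_i, \bigoplus_{j\in\mathcal{N}_i} c_{ij}\,\psi(\textbf{x}_j)\right)$, with fixed weights $c_{ij}$.
- Attentional: $\textbf{h}_i=\phi\left(\textbf{x}_i, \bigoplus_{j\in\mathcal{N}_i} a(\textbf{x}_i,\textbf{x}_j)\,\psi(\textbf{x}_j)\right)$, with learned coefficients $a(\textbf{x}_i,\textbf{x}_j)$.
- Message-passing: $\textbf{h}_i=\phi\left(\textbf{x}_i, \bigoplus_{j\in\mathcal{N}_i} \psi(\textbf{x}_i,\textbf{x}_j)\right)$, with an arbitrary message function $\psi$.

Fixed coefficients $c_{ij}$ are a special case of learned coefficients $a(\textbf{x}_i,\textbf{x}_j)$, and $a(\textbf{x}_i,\textbf{x}_j)\,\psi(\textbf{x}_j)$ is a special case of a general message $\psi(\textbf{x}_i,\textbf{x}_j)$, which gives the two inclusions.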
### Transformers are GNNs
- A Transformer is a GAT (attentional GNN) operating on a fully connected graph.
- Why are Transformers sequence-aware? Because positional embeddings are injected into the features: the sine and cosine encodings tell each node its position in the sequence.
- If we drop those positional embeddings, we recover exactly a GAT on the fully connected graph, as the sketch below illustrates.
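A minimal sketch (toy single-head dot-product self-attention, my own example) showing that attention without positional embeddings is permutation-equivariant, exactly like an attentional GNN on a fully connected graph:

```python
# Self-attention without positional embeddings: permuting the input tokens
# permutes the output rows the same way (permutation equivariance).
import numpy as np

rng = np.random.default_rng(2)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[1]))   # attention over all tokens (fully connected graph)
    return A @ V

n, d = 7, 4
X = rng.normal(size=(n, d))                      # n tokens, no positional encoding
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
P = np.eye(n)[rng.permutation(n)]                # permutation matrix

assert np.allclose(attention(P @ X, Wq, Wk, Wv), P @ attention(X, Wq, Wk, Wv))
```

Adding positional embeddings to `X` before the attention call breaks this symmetry, which is exactly what makes the Transformer sequence-aware.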