# How powerful are Graph Neural Networks?
---
## 1. What are Graphs?
Given a graph $G = (V, E)$, where $V=\{v_1, v_2, \dots, v_n\}$ is the set of nodes and $E=\{(v_i, v_j) \mid v_i, v_j \in V\}$ is the set of edges.
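As a tiny concrete example (the representation here is one common choice, not something prescribed by the definition above), the path graph on three nodes can be written as edge list plus adjacency list:

```python
# Path graph 0 - 1 - 2: V = {0, 1, 2}, E = {(0, 1), (1, 2)}
V = [0, 1, 2]
E = [(0, 1), (1, 2)]

# Adjacency-list view, convenient for neighborhood lookups N(v)
adj = {v: [] for v in V}
for (u, w) in E:   # treating edges as undirected: record both directions
    adj[u].append(w)
    adj[w].append(u)
# adj is now {0: [1], 1: [0, 2], 2: [1]}
```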

----
## 1-1. Why do we need graphs?
<font size="6">Imagine that you are a Netflix engineer <br/>trying to recommend movies to users... </font><br/>

----
## 1-2. What can we do with graphs?
Let $G = (V, E)$ denote a graph with <br/>node feature vectors $X_v$ for $v \in V$.
With these node features, we can <br/>pose two basic tasks:
----
1. Node Classification

2. Graph Classification

---
## 2. Graph Neural Networks:
Simply put, GNNs are neural networks that operate on graphs; more detail follows below.
----
## 2-1. How does it actually work?
Given the graph structure and node features $X_v$, <br/>GNNs learn a hidden representation <br/>of each node, $h_v$, or of the entire graph, $h_G$.
----
### 2-2. Formulate GNN Models
<font size="6">Considering the importance of neighborhood, one can learn the feature of each node by:</font><br/>
$a^{(k)}_v = AGGREGATE^{(k)}(\{h^{(k-1)}_u \mid u \in \mathcal{N}(v)\})$
$h^{(k)}_v = COMBINE^{(k)}(h^{(k-1)}_v, a^{(k)}_v)$
<font size="6">where $h^{(k)}_{v}$ is the feature vector of node $v$ at the $k$-th iteration/layer, $\mathcal{N}(v)$ is the set of nodes adjacent to $v$, and $h^{(0)}_{v}$ can be initialized with $X_v$.</font><br/>
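The two-step update above can be sketched in plain NumPy. The adjacency-list format and the `aggregate`/`combine` callables are illustrative choices of mine, not a fixed API:

```python
import numpy as np

def gnn_layer(H, adj, aggregate, combine):
    """One generic GNN layer: for each node v,
    a_v = AGGREGATE({h_u : u in N(v)}), then h_v = COMBINE(h_v, a_v)."""
    H_new = []
    for v in range(len(H)):
        neighbor_feats = [H[u] for u in adj[v]]   # {h_u^(k-1) : u in N(v)}
        a_v = aggregate(neighbor_feats)           # a_v^(k)
        H_new.append(combine(H[v], a_v))          # h_v^(k)
    return np.stack(H_new)

# Toy path graph 0 - 1 - 2 with 2-dimensional initial features h^(0) = X_v
H0 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
adj = [[1], [0, 2], [1]]
H1 = gnn_layer(H0, adj,
               aggregate=lambda feats: np.sum(feats, axis=0),  # sum aggregator
               combine=lambda h, a: h + a)                     # additive combine
```

With this sum aggregator and additive combine, node 1 ends up with $h^{(1)}_1 = h^{(0)}_1 + h^{(0)}_0 + h^{(0)}_2$; swapping in other `aggregate`/`combine` functions recovers the variants on the next slide.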
----
## 2-3. Choosing the Aggregator and Combinator:
<font size="6">Two examples are given here:</font>
GraphSAGE:
Aggregator: $\tiny a^{(k)}_v = MAX(\{ReLU(W_A \cdot h^{(k-1)}_u), \forall u \in \mathcal{N}(v)\})$
Combinator: $\small h^{(k)}_v = W_C \cdot [h^{(k-1)}_v, a^{(k)}_v]$
Graph Convolutional Networks:
Aggregator+Combinator:
$h^{(k)}_v = W \cdot MEAN(\{h_u^{(k-1)}, \forall u \in \mathcal{N}(v)\cup \{v\}\})$
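Both variants above can be written out directly. A minimal NumPy sketch follows; the function names are mine and the weight matrices are random toy parameters, not trained values from either paper:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 2))  # toy weight matrix: output dim 4, input dim 2

def gcn_layer(H, adj, W):
    """GCN layer: h_v = W · MEAN({h_u : u in N(v) ∪ {v}})."""
    return np.stack([W @ np.mean([H[u] for u in adj[v]] + [H[v]], axis=0)
                     for v in range(len(H))])

def sage_aggregate(H, adj, W_A):
    """GraphSAGE max-pooling aggregator:
    a_v = elementwise MAX over {ReLU(W_A · h_u) : u in N(v)}."""
    return np.stack([np.max([np.maximum(W_A @ H[u], 0.0) for u in adj[v]], axis=0)
                     for v in range(len(H))])

# Toy path graph 0 - 1 - 2
H0 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
adj = [[1], [0, 2], [1]]
H_gcn = gcn_layer(H0, adj, W)        # one GCN layer, shape (3, 4)
A_sage = sage_aggregate(H0, adj, W)  # GraphSAGE aggregation step, shape (3, 4)
```

Note that GCN folds aggregation and combination into one step by including $v$ itself in the mean, while GraphSAGE keeps them separate.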
---
## 3. Other Papers:
1. GCN:
1-1. [Author's Blog Post](https://tkipf.github.io/graph-convolutional-networks/)
2. GraphSAGE
3. GAT (Graph Attention Network)
### Math:
1. Function (injective & surjective)
2. [Universal Approximation Theorem](https://en.wikipedia.org/wiki/Universal_approximation_theorem)
3. [WL-Test](https://davidbieber.com/post/2019-05-10-weisfeiler-lehman-isomorphism-test/)
4. [Understanding Isomorphism Bias in Graph Data Sets](https://arxiv.org/pdf/1910.12091.pdf)