## Continue Graph Thinking

https://hackmd.io/@teacher144123/r1-V5Bn-5

---

## Graph Representation Learning ch5-6

- Presenter: Dragon Chen
- Date: 2022/03/25

---

### Recap: Graph Data

![](https://i.imgur.com/Bvcz2u1.png)

- Zachary's karate club dataset

----

Basic types:

- undirected
- directed
- weighted

----

### Recap: Adjacency Matrix

![](https://i.imgur.com/1Df57Ew.png)

An undirected graph is **symmetric**.

A weighted graph has **different values**.

----

Advanced kinds of graphs:

- multi-relational
- heterogeneous
- multiplex
- ...

---

### Recap: Multi-Relation Graph

![](https://i.imgur.com/UCpa3Zf.png =1000x)

Edges have different types.

----

### Recap: Decoder

![](https://i.imgur.com/2EYoPF6.png =800x)

We make our decoder **multi-relational**.

Without multi-relation:

$$
DEC(ENC(u), ENC(v)) = DEC(z_u, z_v) = S[u, v]
$$

----

Break

![](https://i.imgur.com/M6Dhhm0.png)

----

## Intro

So, how do we build a model?

---

**Graph Neural Networks** are coming

----

## Message Passing

![](https://i.imgur.com/QZwgHOh.png)

----

$$
h_u^{k + 1} = \text{UPDATE}^k \left( h_u^k, \text{AGGREGATE}^k \left( \{ h_v^k, \forall v \in N(u) \} \right) \right) \\
= \text{UPDATE}^k \left( h_u^k, m_{N(u)}^k \right)
$$

UPDATE and AGGREGATE are differentiable functions (e.g., neural networks).

$m_{N(u)}^k$ is the message aggregated from $u$'s neighborhood $N(u)$.

----

### Then, a basic GNN (skippable)

![](https://i.imgur.com/SC3hqmO.png)

----

### What's next: self-loops

![](https://i.imgur.com/diQTgCv.png)

They are common in modern GNNs.

----

### Normalization is a powerful trick of NNs

The issue with the most basic sum aggregation is that it is **unstable** and **highly sensitive to node degrees**.

The simple fix: change the sum to an average.

----

## GCN (Graph Convolutional Network)

Averaging is not enough. GCN (2016) is coming, and it is still a standard baseline nowadays.

![](https://i.imgur.com/BPW6CVJ.png)

----

Take a break

![](https://i.imgur.com/oX6cS5C.png =500x500)

----

![](https://i.imgur.com/hMVKfTl.png =500x500)

---

Let's go complex

----

### Set Pooling

![](https://i.imgur.com/pLXFNM0.png)

A small improvement, but it adds the chance of overfitting (because of the MLP). E.g., GraphSAGE-pool.

----

### Janossy Pooling (Permutation-Sensitive)

Instead of being **permutation-invariant** (order doesn't matter), we can use a permutation-sensitive approach.

![](https://i.imgur.com/SBv5vAt.png)

Sum over all possible orderings: $\Pi$ is the set of all permutations, and $\rho_\phi$ is a **permutation-sensitive** function. Some researchers use an LSTM.

----

### Node Attention - GAT (Graph Attention Network)

![](https://i.imgur.com/xCopKO2.png =400x)

![](https://i.imgur.com/bO8Zbzy.png)

![](https://i.imgur.com/RjShzOb.png =450x)

----

### Transformer

You can also use multi-head attention.

----

![](https://i.imgur.com/8H6Zpvb.png)

---

### What have we got so far?

- Basic GNN
- Add self-loops
- Normalize with the degrees of nodes $u$ and $v$
- Set Pooling (use a more complex MLP)
- Janossy Pooling (use a permutation-sensitive model)
- Node Attention (use an MLP to compute attention scores)
- Multi-Head Transformer

---

We know there are two key operations: **Aggregation and Update**. The update stage also has choices.

----

### The Biggest Problem - Over-Smoothing

![](https://i.imgur.com/AQIyY2X.png)

When $K \rightarrow \infty$, the embeddings converge toward a distribution determined by the whole graph (based on random-walk theory), and the local information from each node's neighborhood is lost.
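----

### Sketch: one message-passing layer

Before the fixes on the next slides, here is a minimal NumPy sketch of the layer defined above, with self-loops and GCN-style symmetric normalization. The function name `gcn_layer` and the toy graph are illustrative assumptions, not the book's reference implementation.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN-style message-passing layer (sketch).

    A: (n, n) adjacency matrix, H: (n, d_in) node features,
    W: (d_in, d_out) trainable weight matrix.
    """
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    deg = A_hat.sum(axis=1)                    # node degrees after self-loops
    D_inv_sqrt = np.diag(deg ** -0.5)          # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric normalization
    return np.maximum(0.0, A_norm @ H @ W)     # AGGREGATE, then UPDATE (ReLU)

# Toy usage: a 4-node undirected graph with random (untrained) weights
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
H = np.random.randn(4, 8)
W = np.random.randn(8, 4)
print(gcn_layer(A, H, W).shape)  # (4, 4)
```

Stacking $K$ of these layers gives $K$ rounds of message passing, which is exactly where the over-smoothing above comes from when $K$ grows too large.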
----

### Concatenation and Skip-Connections

![](https://i.imgur.com/a5ILV4V.png)

----

### Gated Update

Use a GRU to update $h$.

![](https://i.imgur.com/mDu3j6i.png)

----

### Jumping Knowledge Connections

![](https://i.imgur.com/fVWOTBu.png)

To improve quality, simply build the final representation from **the representations at every layer of message passing**, not just the last one.

![](https://i.imgur.com/w7Jb6Ua.png)

---

## Edge Features and Multi-Relation

----

### Relational GNN

![](https://i.imgur.com/xuocGbk.png)

The number of parameters increases, as we now have one trainable matrix per relation type.

----

### Attention and Feature Concatenation

![](https://i.imgur.com/M41LLrb.png)

----

### Graph Pooling
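A minimal sketch of the simplest form of graph pooling: a set-pooling readout that turns the node embeddings of a whole graph into one graph-level embedding. The function name and the sum/mean options are illustrative assumptions, not the book's notation.

```python
import numpy as np

def graph_readout(H, mode="mean"):
    """Pool node embeddings H (n, d) into one graph embedding (d,).

    A sum readout keeps graph-size information; a mean readout is
    less sensitive to the number of nodes. Both are permutation-invariant.
    """
    if mode == "sum":
        return H.sum(axis=0)
    return H.mean(axis=0)

# Toy usage: embeddings for a 5-node graph
H = np.random.randn(5, 4)
print(graph_readout(H, mode="sum").shape)  # (4,)
```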
{"metaMigratedAt":"2023-06-16T21:17:34.650Z","metaMigratedFrom":"YAML","title":"Graph Representation Learning ch5~6","breaks":true,"slideOptions":"{\"theme\":\"serif\",\"transition\":\"slide\"}","contributors":"[{\"id\":\"1155b8f4-1581-4900-adcf-1b4abbb2c7fd\",\"add\":9017,\"del\":4716}]"}