# Progress Update Till 14 Oct

## Checkpoints:
* Mainly reading papers and trying to understand them to finish the literature review.
* Finished the random walks implementation. Stas's prediction was correct :D

## Line Graph Model:
https://arxiv.org/pdf/1705.08415.pdf

This paper introduces two elegant ideas: the first captures higher-order dependencies, the second captures directed relationships.

### Higher order interactions approach:

The adjacency matrix of a graph, $A^1$, is a binary matrix. What happens when we multiply it by itself?

$$A^2_{i,j} = \sum_{k \in V} A^1_{i,k} A^1_{k,j}$$

$A^2_{i,j}$ corresponds to the number of paths of length exactly 2 between $i$ and $j$. We can also treat it as a binary matrix and use OR operations instead of sums, so it simply marks whether such a path exists. Now let's calculate these matrices for each hop in $\{2^0, 2^1, 2^2, \dots, 2^{\log V}\}$. Finding a path of whatever length now becomes trivial, since any length can be composed from these powers of two (a small sketch is at the end of this note).

![](https://i.imgur.com/gyrxI4m.png)

$F=(A^1,A^2,A^4,\dots)$

Applying each operator to the node features, $[O_i \cdot x]$, is basically message passing.

### Full Chinese Mode (Same paper):

Maintain 2 graphs: the original graph and the edges graph, where each edge is converted to a node. Edges $(i \to j)$ and $(i' \to j')$ are connected if and only if $i' = j$ and $j' \neq i$ (non-backtracking).

First of all, each undirected edge is duplicated into two directed edges so that direction information can be captured.

![](https://i.imgur.com/FVjAr3a.png)

$F=(A^1,A^2,A^4,\dots)$

$F'=(B^1,B^2,B^4,\dots)$ where $B$ is the adjacency matrix of the edges graph.

$F''=(P,\hat{P})$ where

$P_{u,(u,v)} = 1$
$P_{u,(v,u)} = 1$
$\hat{P}_{u,(u,v)} = 1$
$\hat{P}_{u,(v,u)} = -1$

$z$ denotes the activations for the normal graph, $w$ those for the edges graph.

First equation: the first term is the same as in the previous model. The second term adds, for each node, the activations of its incident edges (both the oriented and the unoriented cases are considered).

While I consider it Chinese mode, the ideas used are each very effective and could be used on their own.

## IGMC
https://arxiv.org/pdf/1905.08108.pdf

* Inductive
* No node features whatsoever

Main approach:
* Consider each edge $(u,v)$
* Sample an $h$-hop enclosing subgraph around it using BFS
* Label users at hop $d$ as $2d$ and items at hop $d$ as $2d+1$ (see the sketch at the end of this note)
* Replace node features with one-hot encodings of these labels
* Aggregate using R-GCN

![](https://i.imgur.com/WqJHQE1.png)

* Decode the result into a rating prediction
* Apply regularization on the probabilities.
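A minimal NumPy sketch of the higher-order operator family $F=(A^1,A^2,A^4,\dots)$ and the $[O_i \cdot x]$ aggregation from the Line Graph Model section above. The helper names (`adjacency_powers`, `message_passing`) and the tiny example graph are my own assumptions, not code from the paper:

```python
import numpy as np

def adjacency_powers(A, num_powers):
    """Operator family F = (A^1, A^2, A^4, ...) built by repeated squaring.

    Each power is re-binarized (OR instead of sum), so the k-th matrix marks
    whether a walk of length exactly 2^k exists between two nodes.
    """
    powers = [A.copy()]
    for _ in range(num_powers - 1):
        nxt = (powers[-1] @ powers[-1] > 0).astype(A.dtype)
        powers.append(nxt)
    return powers

def message_passing(powers, x):
    """One aggregation step: apply every operator to the node features x
    and concatenate the results, i.e. the [O_i . x] terms mentioned above."""
    return np.concatenate([P @ x for P in powers], axis=1)

# Tiny example: a directed 4-cycle with random node features (assumed data).
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)
x = np.random.randn(4, 3)
F = adjacency_powers(A, num_powers=3)   # A^1, A^2, A^4
h = message_passing(F, x)               # shape (4, 9)
```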
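Likewise, a rough sketch of the "Full Chinese Mode" construction: the non-backtracking edges graph $B$ and the incidence matrices $P$, $\hat{P}$. Dense matrices and the function name `build_line_graph` are assumptions made for readability, not how one would implement it at scale:

```python
import numpy as np

def build_line_graph(edges, num_nodes):
    """Edges graph plus incidence matrices, as described above.

    `edges` are directed pairs (i, j); every undirected edge is assumed to
    appear in both directions.  Line-graph node (i -> j) connects to
    (i' -> j') iff i' == j and j' != i (non-backtracking).
    """
    m = len(edges)
    B = np.zeros((m, m))              # adjacency of the edges graph
    for a, (i, j) in enumerate(edges):
        for b, (i2, j2) in enumerate(edges):
            if i2 == j and j2 != i:
                B[a, b] = 1

    P = np.zeros((num_nodes, m))      # unoriented incidence: P_{u,(u,v)} = P_{u,(v,u)} = 1
    P_hat = np.zeros((num_nodes, m))  # oriented incidence: +1 if the edge leaves u, -1 if it enters u
    for e, (u, v) in enumerate(edges):
        P[u, e] = 1
        P[v, e] = 1
        P_hat[u, e] = 1
        P_hat[v, e] = -1
    return B, P, P_hat

# Tiny example: an undirected triangle, each edge stored in both directions.
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (0, 2), (2, 0)]
B, P, P_hat = build_line_graph(edges, num_nodes=3)
```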
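Finally, a small sketch of the IGMC-style subgraph labeling (BFS around the target pair, users at hop $d$ labeled $2d$, items labeled $2d+1$). The adjacency-list dicts `user_items` / `item_users` and the helper name are hypothetical; the real method also removes the target edge before prediction, which this sketch skips:

```python
from collections import deque

def igmc_labels(u, v, user_items, item_users, h=1):
    """BFS the h-hop enclosing subgraph around the target pair (u, v) and
    assign labels: target user 0, target item 1, then users at hop d get 2d
    and items at hop d get 2d+1."""
    labels = {('u', u): 0, ('i', v): 1}
    frontier = deque([(('u', u), 0), (('i', v), 0)])
    while frontier:
        (kind, node), d = frontier.popleft()
        if d == h:
            continue  # do not expand beyond h hops
        neighbors = user_items[node] if kind == 'u' else item_users[node]
        nkind = 'i' if kind == 'u' else 'u'
        for nb in neighbors:
            key = (nkind, nb)
            if key not in labels:
                labels[key] = 2 * (d + 1) + (1 if nkind == 'i' else 0)
                frontier.append((key, d + 1))
    return labels  # one-hot these labels to get the R-GCN input features

# Tiny example: 2 users x 3 items rating graph (assumed data).
user_items = {0: [0, 1], 1: [1, 2]}
item_users = {0: [0], 1: [0, 1], 2: [1]}
print(igmc_labels(0, 1, user_items, item_users, h=1))
```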