<style>
img {
display: block;
margin-left: auto;
margin-right: auto;
}
</style>
> [Paper link](https://arxiv.org/abs/1609.02907) | [Note link](https://zhuanlan.zhihu.com/p/451108201) | [Code link](https://github.com/tkipf/gcn) | ICLR 2017
:::success
**Thoughts**
Using graph-structured data with a GCN, we can extract more information from the data: not only the node features themselves but also the relations between nodes.
A limitation of working with graph-structured data, however, is the memory usage and the choice of features for the graph.
:::
## Abstract
They present a scalable approach for semi-supervised learning on graph-structured data, based on an efficient variant of convolutional neural networks that operates directly on graphs.
They motivate the choice of their convolutional architecture via a localized first-order approximation of spectral graph convolutions.
## Introduction
They consider the problem of classifying nodes (such as documents) in a graph (such as a citation network), where labels are only available for a small subset of nodes.
This problem can be framed as graph-based semi-supervised learning, where label information is smoothed over the graph via some form of explicit graph-based regularization:
$$
\tag{1} \mathcal{L}=\mathcal{L}_0+\lambda \mathcal{L}_{\text {reg }}, \quad \text { with } \quad \mathcal{L}_{\text {reg }}=\sum_{i, j} A_{i j}\left\|f\left(X_i\right)-f\left(X_j\right)\right\|^2=f(X)^{\top} \Delta f(X)
$$
- $\mathcal{L}_0$ denotes the supervised loss w.r.t. the labeled part of the graph
- $f(\cdot)$ can be a neural-network-like differentiable function
- $\lambda$ is a weighting factor
- $X$ is a matrix of node feature vectors
- $\Delta = D - A$ denotes the unnormalized graph Laplacian of an undirected graph $\mathcal{G}$
- $A$ is an adjacency matrix
- $D$ is a degree matrix
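As a quick illustration, here is a minimal NumPy sketch (toy 3-node path graph and hypothetical values for $f(X)$, not taken from the paper) of the regularizer in $(1)$; the pairwise sum and the quadratic form $f(X)^\top \Delta f(X)$ agree up to a constant factor that depends on whether the sum runs over ordered pairs or over edges.
```python
# Minimal sketch of the graph Laplacian regularizer in Eq. (1).
# Assumptions: a toy 3-node path graph and a hypothetical scalar embedding f(X).
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # adjacency matrix of a 3-node path graph
D = np.diag(A.sum(axis=1))               # degree matrix
Delta = D - A                            # unnormalized graph Laplacian

f_X = np.array([0.2, 0.5, 0.9])          # hypothetical f(X_i) for each node

# Pairwise form: sum over ordered pairs (i, j) of A_ij * ||f(X_i) - f(X_j)||^2
pairwise = sum(A[i, j] * (f_X[i] - f_X[j]) ** 2
               for i in range(3) for j in range(3))

# Quadratic form: f(X)^T Delta f(X); the ordered-pair sum is twice this value
quadratic = f_X @ Delta @ f_X
print(pairwise, 2 * quadratic)           # both values agree (≈ 0.5)
```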
Equation $(1)$ relies on the assumption that connected nodes in the graph are likely to share the same label. This assumption, however, might restrict modeling capacity, as graph edges need not necessarily encode node similarity, but could contain additional information.
Conditioning $f(\cdot)$ on the adjacency matrix of the graph will allow the model to distribute gradient information from the supervised loss $\mathcal{L}_0$ and will enable it to learn representations of nodes both with and without labels.
:::info
**Contribution**
They introduce a simple and well-behaved layer-wise propagation rule for neural network models which operate directly on graphs and show how it can be motivated from a first-order approximation of spectral graph convolutions.
:::
## Fast Approximate Convolutions On Graphs
They consider a multi-layer Graph Convolutional Network (GCN) with the following layer-wise propagation rule:
$$
\tag{2} H^{(l+1)}=\sigma\left(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)}\right)
$$
- $\tilde{A} = A + I_N$ is the adjacency matrix of the undirected graph $\mathcal{G}$ with added self-connections
- $W^{(l)}$ is a layer-specific trainable weight matrix
- $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$
- $\sigma(\cdot)$ denotes an activation function
- $H^{(l)}$ is the matrix of activations in the $l^{\text{th}}$ layer, where $H^{(0)} = X$
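Below is a minimal dense NumPy sketch of a single propagation step from $(2)$ on a toy graph with random weights; the authors' actual implementation is the sparse TensorFlow code in the linked repository.
```python
# Minimal dense sketch of one GCN propagation step, Eq. (2).
# Assumptions: toy 4-node graph, random weights, ReLU as the activation sigma.
import numpy as np

def gcn_layer(A, H, W):
    """One layer: H_next = ReLU(D_tilde^{-1/2} A_tilde D_tilde^{-1/2} H W)."""
    A_tilde = A + np.eye(A.shape[0])                    # A_tilde = A + I_N (self-connections)
    D_inv_sqrt = np.diag(A_tilde.sum(axis=1) ** -0.5)   # D_tilde^{-1/2}
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt           # symmetric normalization
    return np.maximum(A_hat @ H @ W, 0.0)               # point-wise non-linearity

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)               # symmetric adjacency matrix
H0 = np.random.randn(4, 3)                              # H^(0) = X, 3 input features
W0 = np.random.randn(3, 2)                              # layer-specific weight matrix
print(gcn_layer(A, H0, W0).shape)                       # (4, 2)
```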
In the following, they show that this propagation rule can be motivated via a first-order approximation of localized spectral filters on graphs.
### Spectral Graph Convolutions
They consider spectral convolutions on graphs defined as the multiplication of a signal $x \in \mathbb{R}^N$ with a filter $g_\theta = \text{diag}(\theta)$ parameterized by $\theta \in \mathbb{R}^N$ in the Fourier domain.
$$
\tag{3} g_\theta \star x = U g_\theta U^\top x
$$
where $U$ is the matrix of eigenvectors of the normalized graph Laplacian $L = I_N - D^{-\frac{1}{2}}AD^{-\frac{1}{2}} = U \Lambda U^\top$, with a diagonal matrix of its eigenvalues $\Lambda$ and $U^\top x$ being the graph Fourier transform of $x$, and $g_\theta$ as a function of the eigenvalues of $L$.
Because evaluating $(3)$ is computationally expensive (multiplication with the eigenvector matrix $U$ is $\mathcal{O}(N^2)$), they approximate $g_\theta(\Lambda)$ by a truncated expansion in terms of Chebyshev polynomials $T_k(x)$ up to $K^{\text{th}}$ order:
$$
\tag{4} g_\theta^\prime (\Lambda) \approx \sum_{k=0}^K \theta_k^\prime T_k (\tilde{\Lambda})
$$
where $\tilde{\Lambda} = \frac{2}{\lambda_{\max}} \Lambda - I_N$, with $\lambda_{\max}$ denoting the largest eigenvalue of $L$. $\theta^\prime \in \mathbb{R}^K$ is now a vector of Chebyshev coefficients.
---
Going back to the definition of a convolution of a signal $x$ with a filter $g_\theta^\prime$, we now have:
$$
\tag{5} g_\theta^\prime \star x \approx \sum_{k=0}^K \theta^\prime_k T_k (\tilde{L}) x
$$
where $\tilde{L} = \frac{2}{\lambda_{\max}} L - I_N$; as can easily be verified by noticing that $(U \Lambda U^\top)^k = U \Lambda^k U^\top$.
This expression is now $K$-localized since it is a $K^{\text{th}}$-order polynomial in the Laplacian, and the cost of evaluating $(5)$ is $\mathcal{O}(|\mathcal{E}|)$, i.e. linear in the number of edges, rather than the $\mathcal{O}(N^2)$ of $(3)$.
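To make the $\mathcal{O}(|\mathcal{E}|)$ claim concrete, here is a SciPy sketch (toy 4-node cycle graph, arbitrary coefficients; not from the paper's code) that evaluates $(5)$ with the Chebyshev recurrence $T_k(\tilde{L})x = 2\tilde{L}\,T_{k-1}(\tilde{L})x - T_{k-2}(\tilde{L})x$, so only sparse matrix-vector products are needed.
```python
# Sketch of evaluating the K-th order Chebyshev filter in Eq. (5).
# Assumptions: toy 4-node cycle graph, arbitrary coefficients theta', lambda_max = 2.
import numpy as np
import scipy.sparse as sp

def chebyshev_filter(L, x, theta, lmax=2.0):
    """Approximate g_theta' * x with sum_k theta'_k T_k(L_tilde) x."""
    L_tilde = (2.0 / lmax) * L - sp.eye(L.shape[0])   # rescale spectrum to [-1, 1]
    T_prev, T_curr = x, L_tilde @ x                   # T_0(L_tilde)x and T_1(L_tilde)x
    out = theta[0] * T_prev + theta[1] * T_curr
    for k in range(2, len(theta)):
        T_next = 2.0 * (L_tilde @ T_curr) - T_prev    # Chebyshev recurrence
        out += theta[k] * T_next
        T_prev, T_curr = T_curr, T_next
    return out

A = sp.csr_matrix(np.array([[0, 1, 0, 1],
                            [1, 0, 1, 0],
                            [0, 1, 0, 1],
                            [1, 0, 1, 0]], dtype=float))  # 4-node cycle graph
d = np.asarray(A.sum(axis=1)).ravel()
D_inv_sqrt = sp.diags(d ** -0.5)
L = sp.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt           # normalized graph Laplacian
x = np.random.randn(4)                                # a signal on the nodes
print(chebyshev_filter(L, x, theta=np.array([0.5, 0.3, 0.2])))  # K = 2
```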
### Layer-wise Linear Model
A neural network model based on graph convolutions can therefore be built by stacking multiple convolutional layers of the form of $(5)$, each layer followed by a point-wise non-linearity.
**They intuitively expect that such a model can alleviate the problem of overfitting on local neighborhood structures for graphs with very wide node degree distributions**, such as social networks, citation networks, knowledge graphs and many other real-world graph datasets.
They limit the layer-wise convolution to $K = 1$, i.e. a function that is linear w.r.t. $L$. In this linear formulation of a GCN they further approximate $\lambda_{\max} \approx 2$, as they expect that the neural network parameters will adapt to this change in scale during training:
$$
\tag{6} g_{\theta^{\prime}} \star x \approx \theta_0^{\prime} x+\theta_1^{\prime}\left(L-I_N\right) x=\theta_0^{\prime} x-\theta_1^{\prime} D^{-\frac{1}{2}} A D^{-\frac{1}{2}} x
$$
The filter parameters can be shared over the whole graph. Successive application of filters of this form then effectively convolves the $k^{\text{th}}$-order neighborhood of a node, where $k$ is the number of successive filtering operations or convolutional layers in the neural network model.
In practice, it can be beneficial to constrain the number of parameters further to address overfitting and to minimize the number of operations (such as matrix multiplications) per layer. Setting a single parameter $\theta = \theta_0^{\prime} = -\theta_1^{\prime}$ gives:
$$
\tag{7} g_\theta \star x \approx \theta\left(I_N+D^{-\frac{1}{2}} A D^{-\frac{1}{2}}\right) x
$$
Note that $I_N + D^{-\frac{1}{2}} A D^{-\frac{1}{2}}$ now has eigenvalues in the range $[0, 2]$. Repeated application of this operator can therefore lead to numerical instabilities and exploding/vanishing gradients when used in a deep neural network model.
To alleviate this problem, they introduce the following renormalization trick: $I_N + D^{-\frac{1}{2}} A D^{-\frac{1}{2}} \rightarrow \tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}$, with $\tilde{A} = A + I_N$ and $\tilde{D}_{ii} = \sum_j \tilde{A}_{ij}$.
Finally, they generalize this definition to a signal $X \in \mathbb{R}^{N \times C}$ with $C$ input channels and $F$ filters or feature maps:
$$
\tag{8} Z=\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} X \Theta
$$
- $\Theta \in \mathbb{R}^{C \times F}$ is now a matrix of filter parameters
- $Z \in \mathbb{R}^{N \times F}$ is the convolved signal matrix
This filtering operation has complexity $\mathcal{O}(|\mathcal{E}|FC)$, **since $\tilde{A}X$ can be efficiently implemented as a product of a sparse matrix with a dense matrix.**
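A SciPy sketch (toy graph, random $X$ and $\Theta$; not the authors' TensorFlow implementation) of the renormalization trick and the filtering step in $(8)$: the renormalized adjacency $\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}$ is built once as a sparse matrix, so multiplying it with the dense feature matrix stays cheap.
```python
# Sketch of the renormalization trick and Eq. (8) using sparse matrices.
# Assumptions: toy 4-node graph, random features X and filter parameters Theta.
import numpy as np
import scipy.sparse as sp

def renormalized_adjacency(A):
    """D_tilde^{-1/2} A_tilde D_tilde^{-1/2} with A_tilde = A + I_N."""
    A_tilde = A + sp.eye(A.shape[0])
    d_tilde = np.asarray(A_tilde.sum(axis=1)).ravel()   # D_tilde_ii = sum_j A_tilde_ij
    D_inv_sqrt = sp.diags(d_tilde ** -0.5)
    return (D_inv_sqrt @ A_tilde @ D_inv_sqrt).tocsr()

A = sp.csr_matrix(np.array([[0, 1, 0, 0],
                            [1, 0, 1, 1],
                            [0, 1, 0, 1],
                            [0, 1, 1, 0]], dtype=float))
X = np.random.randn(4, 3)                  # N = 4 nodes, C = 3 input channels
Theta = np.random.randn(3, 2)              # F = 2 filters
Z = renormalized_adjacency(A) @ X @ Theta  # sparse-dense product, then dense-dense
print(Z.shape)                             # (4, 2)
```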
## Semi-supervised Node Classification

Having introduced a model $f(X, A)$ for efficient information propagation on graphs, they return to the problem of semi-supervised node classification.
They expect the model to learn from both the node features $X$ and the adjacency matrix $A$ of the underlying graph structure.
### Example
Below is a two-layer GCN for semi-supervised node classification on a graph with a symmetric adjacency matrix $A$.
First, they calculate $\hat{A} = \tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}$ in a pre-processing step.
The forward model then takes the simple form:
$$
\tag{9} Z=f(X, A)=\operatorname{softmax}\left(\hat{A} \operatorname{ReLU}\left(\hat{A} X W^{(0)}\right) W^{(1)}\right)
$$
- $W^{(0)} \in \mathbb{R}^{C \times H}$ is an input-to-hidden weight matrix for a hidden layer with $H$ feature maps
- $W^{(1)} \in \mathbb{R}^{H \times F}$ is a hidden-to-output weight matrix
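A self-contained NumPy sketch of the forward pass in $(9)$ on a toy graph with random, untrained weights (the dimensions $C=3$, $H=5$, $F=2$ are hypothetical):
```python
# Sketch of the two-layer forward model of Eq. (9).
# Assumptions: toy 4-node graph, random (untrained) weights, C = 3, H = 5, F = 2.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))   # row-wise, numerically stable
    return e / e.sum(axis=1, keepdims=True)

def forward(A_hat, X, W0, W1):
    """Z = softmax(A_hat ReLU(A_hat X W0) W1)."""
    hidden = np.maximum(A_hat @ X @ W0, 0.0)       # hidden layer with ReLU
    return softmax(A_hat @ hidden @ W1)

A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)          # symmetric adjacency matrix
A_tilde = A + np.eye(4)
D_inv_sqrt = np.diag(A_tilde.sum(axis=1) ** -0.5)
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt          # pre-processing step
W0, W1 = np.random.randn(3, 5), np.random.randn(5, 2)
Z = forward(A_hat, np.random.randn(4, 3), W0, W1)
print(Z.sum(axis=1))                               # each row of Z sums to 1
```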
For semi-supervised multi-class classification, we then evaluate the cross-entropy error over all labeled examples:
$$
\tag{10} \mathcal{L} = -\sum_{l \in \mathcal{Y}_L} \sum_{f=1}^F Y_{lf} \ln Z_{lf}
$$
where $\mathcal{Y}_L$ is the set of node indices that have labels.
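A small sketch of the masked cross-entropy in $(10)$ with hypothetical predictions $Z$ and one-hot labels $Y$; only rows indexed by $\mathcal{Y}_L$ contribute to the loss.
```python
# Sketch of the masked cross-entropy loss in Eq. (10).
# Assumptions: hypothetical predictions Z and one-hot labels Y for 4 nodes, 3 classes.
import numpy as np

Z = np.array([[0.7, 0.2, 0.1],     # predicted class distributions, one row per node
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4],
              [0.2, 0.5, 0.3]])
Y = np.array([[1, 0, 0],           # one-hot ground-truth labels
              [0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])          # the last node is unlabeled; its row is ignored
labeled = [0, 1, 2]                # Y_L: indices of nodes that have labels

loss = -np.sum(Y[labeled] * np.log(Z[labeled]))
print(loss)
```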
### Implementation
They implement the forward model of equation $(9)$ using sparse-dense matrix multiplications; its time complexity is $\mathcal{O}(|\mathcal{E}|CHF)$, i.e. linear in the number of graph edges.
## Experiments
- Semi-supervised document classification in citation networks
- Semi-supervised entity classification in a bipartite graph extracted from a knowledge graph
### Datasets

### Experimental Set-up
They train a two-layer GCN and evaluate prediction accuracy on a test set of 1,000 labeled examples.
For the citation network datasets, they optimize hyperparameters on Cora only and use the same set of parameters for Citeseer and Pubmed.
They train all models for a maximum of 200 epochs (training iterations) using Adam with a learning rate of 0.01 and early stopping with a window size of 10, i.e. they stop training if the validation loss does not decrease for 10 consecutive epochs.
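A sketch of the training-loop logic described above (maximum of 200 epochs, early stopping with a window of 10); `train_epoch` and `validation_loss` are hypothetical placeholders standing in for the full-batch Adam update and the validation evaluation.
```python
# Sketch of the early-stopping rule: stop if the validation loss has not
# decreased for 10 consecutive epochs. The two helpers below are placeholders.
import random

def train_epoch():                        # stands in for one full-batch Adam update
    pass

def validation_loss():                    # stands in for evaluating the loss on the validation set
    return random.random()

best_val_loss = float("inf")
epochs_without_improvement = 0
for epoch in range(200):                  # maximum of 200 epochs
    train_epoch()
    val_loss = validation_loss()
    if val_loss < best_val_loss:
        best_val_loss, epochs_without_improvement = val_loss, 0
    else:
        epochs_without_improvement += 1
    if epochs_without_improvement >= 10:  # early-stopping window
        break
```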
### Baselines
- Label propagation
- Semi-supervised embedding (SemiEmb)
- Manifold regularization (ManiReg)
- Skip-gram based graph embeddings (DeepWalk)
- Iterative classification algorithm (ICA)
- Planetoid, where they always choose their best-performing model variant (transductive vs. inductive) as a baseline
## Results


## Discussion
- **Memory requirement**
In the current setup with full-batch gradient descent, memory requirement grows linearly in the size of the dataset.
- **Directed edges and edge features**
Their framework currently does not naturally support edge features and is limited to undirected graphs (weighted or unweighted).
- **Limiting assumptions**
The model implicitly assumes that self-connections are equally important as edges to neighboring nodes. For some datasets, however, it might be beneficial to introduce a trade-off parameter $\lambda$ in the definition of $\tilde{A}$:
$$
\tag{11} \tilde{A} = A + \lambda I_N
$$
## Conclusion
This paper introduced a novel approach for semi-supervised classification on graph-structured data. It uses an efficient layer-wise propagation rule that is based on a first-order approximation of spectral convolutions on graphs.