Abstract: Graph Neural Networks (GNNs) are the current state-of-the-art models for learning node representations for many predictive tasks on graphs. Typically, GNNs reuse the same set of model parameters across all nodes in the graph to improve training efficiency and exploit the translationally invariant properties of many datasets. However, this parameter sharing scheme prevents GNNs from distinguishing two nodes with the same local structure, and the translation invariance property may not hold in real-world graphs. In this paper, we present Graph Adaptive Mixtures (GraphAdaMix), a novel approach for learning node representations in a graph by introducing multiple independent GNN models and a trainable mixture distribution for each node. GraphAdaMix can adapt to tasks with different settings. Specifically, for semi-supervised tasks, we optimize GraphAdaMix using the Expectation-Maximization (EM) algorithm, while in unsupervised settings, GraphAdaMix is trained following the paradigm of contrastive learning. We evaluate GraphAdaMix with extensive experiments on ten benchmark datasets. GraphAdaMix is shown to consistently boost state-of-the-art GNN variants on semi-supervised and unsupervised node classification tasks. The code of GraphAdaMix is available online.
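To make the core idea concrete, the following is a minimal sketch, not the authors' implementation, of a per-node mixture over K independent GNN models in PyTorch. The SimpleGCN backbone, the dense normalized adjacency input, and names such as num_models and mixture_logits are illustrative assumptions; the EM and contrastive training procedures described above are not shown here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCN(nn.Module):
    """A bare two-layer GCN over a dense, symmetrically normalized adjacency."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, out_dim)

    def forward(self, x, adj):
        # x: (N, in_dim), adj: (N, N) normalized adjacency
        h = F.relu(adj @ self.lin1(x))
        return adj @ self.lin2(h)

class GraphAdaMixSketch(nn.Module):
    """K independent GNNs plus a trainable mixture distribution per node."""
    def __init__(self, num_nodes, in_dim, hid_dim, out_dim, num_models=4):
        super().__init__()
        self.models = nn.ModuleList(
            [SimpleGCN(in_dim, hid_dim, out_dim) for _ in range(num_models)]
        )
        # One trainable set of mixture logits per node: (N, K).
        self.mixture_logits = nn.Parameter(torch.zeros(num_nodes, num_models))

    def forward(self, x, adj):
        # Stack the K models' node outputs: (N, K, out_dim).
        outs = torch.stack([m(x, adj) for m in self.models], dim=1)
        # Per-node mixture weights via softmax over the K models.
        weights = F.softmax(self.mixture_logits, dim=-1)  # (N, K)
        # Per-node convex combination of the K models' representations.
        return (weights.unsqueeze(-1) * outs).sum(dim=1)
```

Because the mixture weights are indexed by node, two nodes with identical local structure can still receive different representations, which is the distinguishing ability the abstract attributes to GraphAdaMix.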