Abstract: Graph networks have recently attracted considerable interest, particularly in the context of semi-supervised learning. These methods typically work by generating node representations that are propagated throughout a given weighted graph.
Here we argue that for semi-supervised learning, it is more natural to propagate labels through the graph instead. To this end, we propose a differentiable neural version of the classic Label Propagation (LP) algorithm. This formulation can be used to learn edge weights, unlike other methods where weights are set heuristically. Starting from a layer implementing a single iteration of LP, we add several important non-linear steps that significantly enhance the label-propagation mechanism.
Experiments in two distinct settings demonstrate the utility of our approach.
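To make the idea concrete, below is a minimal sketch of one differentiable LP iteration as a neural layer. It is an illustration under assumptions, not the authors' implementation: the edge weights are parameterized here as a dense learnable matrix normalized with a softmax, a ReLU stands in for the paper's non-linear steps, and the names LPLayer, edge_logits, and alpha are hypothetical.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LPLayer(nn.Module):
        """One differentiable iteration of classic Label Propagation.

        Illustrative sketch only: the paper's actual parameterization of
        the learned edge weights and non-linear steps may differ.
        """
        def __init__(self, num_nodes: int, alpha: float = 0.9):
            super().__init__()
            # Learnable edge logits: weights are learned rather than
            # set heuristically (hypothetical dense parameterization).
            self.edge_logits = nn.Parameter(torch.zeros(num_nodes, num_nodes))
            self.alpha = alpha

        def forward(self, y: torch.Tensor, y0: torch.Tensor) -> torch.Tensor:
            # Row-normalize so each node averages its neighbors' labels.
            w = F.softmax(self.edge_logits, dim=1)
            # Classic LP update: propagate current labels y, then mix back
            # the initial labels y0 of the seed nodes.
            y_next = self.alpha * (w @ y) + (1.0 - self.alpha) * y0
            # A simple non-linear step on top of the linear LP update.
            return F.relu(y_next)

Stacking several such layers corresponds to running multiple LP iterations, with gradients flowing into the edge weights through the label updates.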
Keywords: semi-supervised learning, graph networks, deep learning architectures
TL;DR: Neural net for graph-based semi-supervised learning; revisits the classics and propagates *labels* rather than feature representations