Invertible Neural Networks for Graph Prediction

24 Sept 2022, 19:02 (modified: 22 Nov 2022, 06:46) · NeurIPS 2022 GLFrontiers Workshop · Readers: Everyone
Keywords: invertible neural networks, normalizing flow, graph neural network
TL;DR: We propose a new framework for conditional generation, which is scalable to data on large graphs, using deep invertible residual networks through normalizing flow.
Abstract: Graph prediction problems prevail in data analysis and machine learning. The inverse prediction problem, namely inferring input data from given output labels, is of emerging interest in various applications. In this work, we develop the \textit{invertible graph neural network} (iGNN), a deep generative model that tackles the inverse prediction problem on graphs by casting it as a conditional generative task. The proposed model consists of an invertible sub-network that maps one-to-one from data to an intermediate encoded feature, which allows forward prediction by a linear classification sub-network as well as efficient generation from output labels via a parametric mixture model. The invertibility of the encoding sub-network is ensured by a Wasserstein-2 regularization, which allows free-form layers in the residual blocks. The model scales to large graphs through a factorized parametric mixture model of the encoded feature and is computationally efficient through the use of GNN layers. We study the invertibility of the flow mapping based on theories of optimal transport and diffusion processes. The proposed iGNN model is evaluated on synthetic data, including an example on large graphs, and its empirical advantage is further demonstrated on real-world datasets of solar ramping events and traffic flow anomaly detection.
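The abstract's encoding sub-network is built from invertible residual blocks of the form x → x + g(x). As a minimal, self-contained sketch (not the authors' code): the paper enforces invertibility via a Wasserstein-2 regularization that permits free-form layers, but the simpler classical condition, used here for illustration, is to make the residual branch g a contraction (Lipschitz constant below 1), in which case the block is bijective and its inverse can be computed by fixed-point iteration. All weight shapes and the two-layer form of g below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-layer residual branch g; weights are spectrally rescaled
# so that Lip(g) <= 0.4 * 0.4 < 1 (tanh is 1-Lipschitz). This rescaling is
# a stand-in for the paper's Wasserstein-2 regularization, not its method.
W1 = rng.normal(size=(4, 4))
W1 *= 0.4 / max(np.linalg.norm(W1, 2), 1e-12)
W2 = rng.normal(size=(4, 4))
W2 *= 0.4 / max(np.linalg.norm(W2, 2), 1e-12)

def g(x):
    # small free-form residual branch
    return np.tanh(x @ W1) @ W2

def forward(x):
    # one invertible residual block: x -> x + g(x)
    return x + g(x)

def inverse(y, n_iter=50):
    # fixed-point iteration x_{k+1} = y - g(x_k); converges because
    # g is a contraction, by the Banach fixed-point theorem
    x = y.copy()
    for _ in range(n_iter):
        x = y - g(x)
    return x

x = rng.normal(size=(3, 4))
y = forward(x)
x_rec = inverse(y)
print(np.max(np.abs(x - x_rec)))  # reconstruction error is tiny
```

Stacking many such blocks gives a normalizing flow whose inverse is available block by block, which is what makes generation from the encoded feature back to the data space tractable.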