Scale Mixtures of Neural Network Gaussian Processes

29 Sept 2021, 00:33 (modified: 07 Mar 2022, 11:30) · ICLR 2022 Poster
Keywords: Neural Network Gaussian Processes, Infinitely-wide Neural Networks, Scale Mixtures of Gaussians, Heavy-tailed Stochastic Processes
Abstract: Recent works have revealed that infinitely-wide feed-forward or recurrent neural networks of any architecture correspond to Gaussian processes, referred to as neural network Gaussian processes (NNGPs). While these works have significantly extended the class of neural networks converging to Gaussian processes, there has been little focus on broadening the class of stochastic processes to which such neural networks converge. In this work, inspired by the scale mixture of Gaussian random variables, we propose the scale mixture of NNGPs, for which we introduce a prior distribution on the scale of the last-layer parameters. We show that simply introducing a scale prior on the last-layer parameters can turn infinitely-wide neural networks of any architecture into a richer class of stochastic processes. With certain scale priors, we obtain heavy-tailed stochastic processes, and in the case of inverse gamma priors, we recover Student's $t$ processes. We further analyze the distributions of the neural networks initialized with our prior setting and trained with gradient descent, and obtain results similar to those for NNGPs. We present a practical posterior-inference algorithm for the scale mixture of NNGPs and empirically demonstrate its usefulness on regression and classification tasks. In particular, we show that in both tasks, the heavy-tailed stochastic processes obtained from our framework are robust to out-of-distribution data.
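The abstract's central mechanism, a scale mixture of Gaussians with an inverse-gamma prior yielding Student's $t$ marginals, can be illustrated with a minimal sketch. The snippet below is not the paper's inference algorithm; it only demonstrates the underlying fact that drawing a variance $\sigma^2 \sim \mathrm{InvGamma}(a, b)$ and then $f \sim \mathcal{N}(0, \sigma^2)$ gives a Student's $t$ marginal with $2a$ degrees of freedom and variance $b/(a-1)$. The prior parameters `a` and `b` here are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inverse-gamma prior InvGamma(a, b) on the scale (variance)
# of the last-layer parameters; a and b are illustrative values only.
a, b = 3.0, 2.0
n_samples = 200_000

# Sample the variance, then sample Gaussians conditioned on it.
# If Y ~ Gamma(shape=a, scale=1/b), then 1/Y ~ InvGamma(a, b).
sigma2 = 1.0 / rng.gamma(a, 1.0 / b, size=n_samples)
f = rng.normal(0.0, np.sqrt(sigma2))

# The marginal of f is Student's t with 2a degrees of freedom;
# its variance is b / (a - 1), here 1.0. Compare empirically:
print(f.var(), b / (a - 1))
```

The same two-stage sampling scheme, first the scale, then the Gaussian conditioned on it, is what distinguishes the scale mixture of NNGPs from a plain NNGP with a fixed last-layer variance.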
One-sentence Summary: Infinitely-wide neural networks can be equivalent to scale mixtures of Gaussian processes.
Supplementary Material: zip