Neural Networks as Inter-Domain Inducing Points

Published: 21 Dec 2020, Last Modified: 05 May 2023, AABI 2020
Keywords: Neural Networks, Gaussian Process, Inducing points
TL;DR: We formulate the hidden units of a neural network as the inter-domain inducing points of a kernel.
Abstract: Equivalences between infinite neural networks and Gaussian processes have been established to explain the functional prior and training dynamics of deep learning models. In this paper we cast the hidden units of finite-width neural networks as the inter-domain inducing points of a kernel, so that a one-hidden-layer network becomes a kernel regression model. For dot-product kernels on both $\mathbb{R}^d$ and $\mathbb{S}^{d-1}$, we derive the corresponding inter-domain kernel functions. Empirically, we validate the proposed approach with toy experiments.
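The correspondence in the abstract can be illustrated with a minimal sketch (not the paper's code): a one-hidden-layer network's output is a weighted sum of hidden-unit features, which can equally be read as kernel regression where the hidden-unit weights play the role of inter-domain inducing points and the activation serves as the cross-covariance. All variable names and the ReLU/no-bias setup below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n = 3, 5, 4             # input dim, hidden units (= inducing points), inputs

X = rng.normal(size=(n, d))   # inputs
Z = rng.normal(size=(m, d))   # hidden-unit weights, viewed as inducing points
alpha = rng.normal(size=m)    # output-layer weights, viewed as regression coefficients

relu = lambda t: np.maximum(t, 0.0)

# Network view: forward pass of a one-hidden-layer ReLU network (no biases).
net_out = relu(X @ Z.T) @ alpha

# Kernel view: treat relu(x . z) as the cross-covariance k(x, z) between a
# function value at x and an inter-domain inducing variable indexed by z;
# the prediction is then a kernel regression over the inducing points.
K_xz = relu(X @ Z.T)
kernel_out = K_xz @ alpha

# The two views compute the same function.
assert np.allclose(net_out, kernel_out)
```

The point of the sketch is purely structural: the same matrix `relu(X @ Z.T)` is a hidden-layer activation in one reading and a cross-covariance matrix in the other.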