Epistemic Uncertainty and Observation Noise with the Neural Tangent Kernel

NeurIPS 2024 Workshop BDU Submission 88 Authors

06 Sept 2024 (modified: 10 Oct 2024) · Submitted to NeurIPS BDU Workshop 2024 · CC BY 4.0
Keywords: Neural Tangent Kernel, Bayesian Inference, Epistemic Uncertainty, Aleatoric Uncertainty, Observation Noise, Gaussian Process
TL;DR: We propose a gradient descent method for estimating the posterior mean and covariance of a Neural Tangent Kernel Gaussian Process with non-zero aleatoric noise.
Abstract: Recent work has shown that training wide neural networks with gradient descent is formally equivalent to computing the mean of the posterior distribution in a Gaussian Process (GP) with the Neural Tangent Kernel (NTK) as the prior covariance and zero aleatoric noise (Jacot et al., 2018). In this paper, we extend this framework in two ways. First, we show how to handle non-zero aleatoric noise. Second, we derive an estimator for the posterior covariance, giving us a handle on epistemic uncertainty. Our proposed approach integrates seamlessly with standard training pipelines, as it involves training a small number of additional predictors using gradient descent on a mean squared error loss. We demonstrate a proof of concept of our method through empirical evaluation on synthetic regression tasks.
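
For reference, below is a minimal NumPy sketch of the closed-form GP posterior mean and covariance with non-zero observation (aleatoric) noise, i.e., the quantities the paper's gradient-descent-trained predictors are designed to approximate. This is an illustration under stated assumptions, not the paper's method: the RBF kernel here is a hypothetical stand-in for the NTK, and the names (gp_posterior, noise_var) are placeholders chosen for this sketch.

    import numpy as np

    def rbf_kernel(X1, X2, lengthscale=1.0):
        # Stand-in prior covariance; the paper uses the NTK in this role.
        d2 = (np.sum(X1**2, axis=1)[:, None]
              + np.sum(X2**2, axis=1)[None, :]
              - 2.0 * X1 @ X2.T)
        return np.exp(-0.5 * d2 / lengthscale**2)

    def gp_posterior(X_train, y_train, X_test, kernel=rbf_kernel, noise_var=0.1):
        """Closed-form GP posterior with aleatoric noise variance noise_var.

        mean = K_* (K + sigma^2 I)^{-1} y
        cov  = K_** - K_* (K + sigma^2 I)^{-1} K_*^T
        """
        n = len(X_train)
        K = kernel(X_train, X_train) + noise_var * np.eye(n)  # noisy Gram matrix
        K_s = kernel(X_test, X_train)
        K_ss = kernel(X_test, X_test)
        mean = K_s @ np.linalg.solve(K, y_train)              # posterior mean
        cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)          # epistemic covariance
        return mean, cov

In the zero-noise limit (noise_var = 0) the posterior mean recovers the quantity computed by standard gradient-descent training of a wide network (Jacot et al., 2018); the diagonal of cov gives pointwise epistemic uncertainty.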
Submission Number: 88