Linearization Turns Neural Operators into Function-Valued Gaussian Processes

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 Spotlight Poster · License: CC BY 4.0
TL;DR: Efficient, theoretically sound, and structured predictive uncertainty quantification for arbitrary neural operators based on function-valued GPs.
Abstract: Neural operators generalize neural networks to learn mappings between function spaces from data. They are commonly used to learn solution operators of parametric partial differential equations (PDEs) or propagators of time-dependent PDEs. However, to make them useful in high-stakes simulation scenarios, their inherent predictive error must be quantified reliably. We introduce LUNO, a novel framework for approximate Bayesian uncertainty quantification in trained neural operators. Our approach leverages model linearization to push (Gaussian) weight-space uncertainty forward to the neural operator's predictions. We show that this can be interpreted as a probabilistic version of the concept of currying from functional programming, yielding a function-valued (Gaussian) random process belief. Our framework provides a practical yet theoretically sound way to apply existing Bayesian deep learning methods such as the linearized Laplace approximation to neural operators. Like the underlying neural operator, our approach is resolution-agnostic by design. The method adds minimal prediction overhead, can be applied post-hoc without retraining the network, and scales to large models and datasets. We evaluate these aspects in a case study on Fourier neural operators.
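To make the linearized pushforward concrete, here is a minimal JAX sketch (not the authors' LUNO implementation; see the linked repository for that). It assumes a hypothetical trained operator `apply_fn(params, u)` mapping a discretized input function `u` to a discretized output, and a diagonal Gaussian weight posterior with per-weight standard deviations `posterior_std`, e.g. from a diagonal Laplace approximation. Weight perturbations are pushed through the Jacobian at the trained weights via JVPs, giving Monte Carlo samples from the linearized Gaussian predictive.

```python
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

def linearized_predictive(apply_fn, params, posterior_std, u, key, num_samples=64):
    """Monte Carlo estimate of the linearized Gaussian predictive at input u.

    apply_fn      -- hypothetical trained operator, (params, u) -> output function values
    posterior_std -- flat vector of per-weight posterior std devs (diagonal covariance)
    """
    mean = apply_fn(params, u)            # prediction at the trained (MAP) weights
    flat, unravel = ravel_pytree(params)  # flatten the weight pytree to a vector

    def one_sample(k):
        # Draw a weight perturbation dtheta ~ N(0, diag(posterior_std**2)).
        dtheta = unravel(jax.random.normal(k, flat.shape) * posterior_std)
        # Push dtheta through the Jacobian at the trained weights via a JVP:
        # f_lin(theta* + dtheta) - f(theta*) = J_{theta*} dtheta.
        _, delta = jax.jvp(lambda p: apply_fn(p, u), (params,), (dtheta,))
        return delta

    deltas = jax.vmap(one_sample)(jax.random.split(key, num_samples))
    return mean, deltas.std(axis=0)       # pointwise predictive standard deviation
```

In the paper's framework the linearized predictive is a function-valued Gaussian process whose covariance (of the form J Σ Jᵀ) is available in structured closed form; the sampling above is only the simplest way to visualize the same pushforward, and it inherits the operator's resolution-agnosticism because `u` can be discretized on any grid `apply_fn` accepts.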
Lay Summary: Neural operators are a powerful class of AI models that can learn to predict the solutions of complex physical systems, like how fluids move or how temperatures evolve over time. They are especially useful for scientific applications because they can handle entire families of equations rather than just individual problems. However, these models currently offer no indication of how confident they are in their predictions. This is a serious issue when they are used in safety-critical areas such as climate modeling or engineering design. To address this, we introduce LUNO, a method that equips neural operators with uncertainty estimates. The key idea is to mathematically linearize the model around its learned parameters, allowing us to translate uncertainty in the model's internal settings into a structured, probabilistic description of its predictions. This turns the neural operator into a kind of function-valued Gaussian process — a well-established tool for modeling uncertainty — but extended to the setting of operators. Our experiments show that LUNO delivers reliable uncertainty estimates with minimal computational overhead. This enables safer decision-making and more efficient data collection, and builds trust in AI tools used for scientific discovery.
Link To Code: https://github.com/2bys/luno-experiments
Primary Area: Probabilistic Methods->Bayesian Models and Methods
Keywords: Neural Operators, Uncertainty Quantification, Bayesian Deep Learning, Gaussian Processes, Partial Differential Equations
Submission Number: 10861