Calibrated Uncertainty Quantification for Operator Learning via Conformal Prediction

Published: 21 Sept 2024, Last Modified: 21 Sept 2024. Accepted by TMLR. License: CC BY 4.0
Abstract: Operator learning has been increasingly adopted in scientific and engineering applications, many of which require calibrated uncertainty quantification. Since the output of operator learning is a continuous function, quantifying uncertainty simultaneously at all points in the domain is challenging. Current methods consider calibration at a single point or over a single scalar function, or make strong assumptions such as Gaussianity. We propose a risk-controlling quantile neural operator, a distribution-free, finite-sample conformal prediction method for functional calibration. We provide a theoretical calibration guarantee on the coverage rate, defined as the expected percentage of points on the function domain whose true value lies within the predicted uncertainty ball. Empirical results on a 2D Darcy flow task and a 3D car surface pressure prediction task validate our theoretical results, demonstrating calibrated coverage and efficient uncertainty bands that outperform baseline methods. In particular, on the 3D problem, our method is the only one that meets the target calibration percentage (the percentage of test samples for which the uncertainty estimates are calibrated) of 98%. Code is available at https://github.com/neuraloperator/neuraloperator/blob/main/scripts/train_uqno_darcy.py.
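The sketch below illustrates the kind of functional conformal calibration the abstract describes: on a held-out calibration set, pointwise nonconformity scores are reduced to one score per sample via a quantile over domain points, and a conformal quantile over samples then scales the predicted uncertainty band. This is a minimal illustration under assumed inputs, not the authors' implementation; all names (`pred`, `band`, `true`, `alpha`, `delta`) are hypothetical, and the authors' training script is linked above.

```python
# Minimal sketch of split-conformal calibration for function-valued outputs,
# assuming a base prediction `pred` and a predicted pointwise band half-width
# `band` (e.g. from a quantile model). Hypothetical names; see the linked
# train_uqno_darcy.py for the authors' actual method.
import numpy as np

def calibrate_band_scale(pred, band, true, alpha=0.02, delta=0.02):
    """Return a scale s so that, over calibration samples, at least a
    (1 - delta) fraction have >= (1 - alpha) of domain points inside
    the band pred +/- s * band.

    pred, band, true: arrays of shape (n_calib, n_points).
    alpha: allowed fraction of uncovered domain points per function.
    delta: allowed miscoverage rate over samples.
    """
    # Pointwise nonconformity: distance from truth in units of band width.
    scores = np.abs(true - pred) / band                  # (n_calib, n_points)
    # Per-sample score: scale needed to cover (1 - alpha) of domain points.
    per_sample = np.quantile(scores, 1.0 - alpha, axis=1)  # (n_calib,)
    # Conformal quantile over samples with finite-sample correction.
    n = per_sample.shape[0]
    q = min((1.0 - delta) * (n + 1) / n, 1.0)
    return np.quantile(per_sample, q)

# Usage: the calibrated uncertainty band for a test input is
#   s = calibrate_band_scale(pred_calib, band_calib, true_calib)
#   lower, upper = pred_test - s * band_test, pred_test + s * band_test
```

The two-level quantile mirrors the paper's two guarantees: the inner quantile controls the per-function coverage rate over domain points, while the outer conformal quantile controls the calibration percentage over test samples.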
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/neuraloperator/neuraloperator/blob/main/scripts/train_uqno_darcy.py
Assigned Action Editor: ~Pablo_Samuel_Castro1
Submission Number: 2635