Split Conformal Prediction in the Function Space via Neural Operator Learning

Published: 01 Mar 2026, Last Modified: 10 Mar 2026 · AI&PDE Poster · CC BY 4.0
Keywords: neural operators, conformal prediction, uncertainty quantification
TL;DR: This paper extends split conformal prediction to function-valued outputs in the context of smooth PDEs and neural operators.
Abstract: Uncertainty quantification for neural operators remains an open problem in the infinite-dimensional setting due to the lack of finite-sample coverage guarantees over functional outputs. While conformal prediction offers finite-sample guarantees in finite-dimensional spaces, it does not directly extend to function-valued outputs; existing approaches require strong distributional assumptions or yield conservative coverage. This work extends split conformal prediction to function spaces via a discretize-then-lift construction. We first establish finite-sample coverage guarantees in a finite-dimensional space using a discretization map on the output function space. These guarantees are then lifted to the function space under a bi-Lipschitz discretization assumption linking the discrete and continuous norms. To characterize the effect of resolution, we decompose the conformal radius into discretization, calibration, and misspecification components. This decomposition motivates a regression-based correction that transfers calibration across resolutions. We also propose two diagnostic metrics, the conformal ensemble score and internal agreement, to quantify forecast degradation in autoregressive settings. Empirical results show that our method maintains calibrated coverage with less variation under resolution shifts and achieves improved coverage in super-resolution tasks.
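To make the discretize-then-lift idea concrete, the sketch below shows standard split conformal prediction applied to function-valued outputs sampled on a fixed grid, which plays the role of the discretization map. This is an illustrative minimal example, not the authors' implementation; the choice of the discrete sup-norm as the nonconformity score, and the function/array names, are assumptions for exposition.

```python
import numpy as np

def split_conformal_band(pred_cal, true_cal, pred_test, alpha=0.1):
    """Split conformal prediction for function-valued outputs on a grid.

    pred_cal, true_cal: (n_cal, m) arrays of calibration predictions and
    targets evaluated at m grid points (the discretization map).
    Returns a radius r such that the band pred_test +/- r covers the
    discretized truth with probability >= 1 - alpha (in the discrete norm);
    the paper's bi-Lipschitz assumption is what lifts this guarantee to
    the continuous function space.
    """
    # Nonconformity score: discrete sup-norm residual per calibration sample.
    scores = np.max(np.abs(pred_cal - true_cal), axis=1)
    n = len(scores)
    # Finite-sample-valid quantile level ceil((n + 1)(1 - alpha)) / n.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    radius = np.quantile(scores, q_level, method="higher")
    # Constant-width pointwise band around each test prediction.
    return radius, pred_test - radius, pred_test + radius
```

At finer resolutions the discrete sup-norm approximates the continuous one more closely, which is where the paper's decomposition of the radius into discretization, calibration, and misspecification components becomes relevant.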
Journal Opt In: No, I do not wish to participate
Submission Number: 23