Leave one Expert Out: Robust Uncertainty Quantification via Intrinsic Cross-Validation

ICLR 2026 Conference Submission 13842 Authors

18 Sept 2025 (modified: 08 Oct 2025), CC BY 4.0
Keywords: deep learning, uncertainty quantification, mixture of experts
Abstract: Estimating epistemic uncertainty remains an important challenge in modern Deep Learning (DL). We propose a novel architecture, called Leave one Expert Out (LEO), a form of mixture-of-experts model with a latent-space-distance-aware router and a null expert representing prior belief, to which the model's output collapses when a test datapoint is too different from any of the datapoints the experts were trained on. This architecture allows experts to be temporarily dropped from the model, and we utilise this property to train the router to leverage the predictions of the remaining experts for the datapoints normally assigned to the expert currently removed. We coin this mechanism \textit{intrinsic cross-validation} and show that a router trained in this way excels at estimating epistemic uncertainty for both in-distribution and out-of-distribution inputs. We demonstrate state-of-the-art performance on uncertainty quantification in regression benchmarks, such as the UCI problems and age prediction on UTK-Face, as well as on the CIFAR10 classification benchmark. We also show that the proposed method can achieve superior performance in surrogate-based black-box optimization.
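Since the abstract describes the mechanism only at a high level, the sketch below gives one possible reading of it in PyTorch. It is not the authors' implementation: every name, shape, and hyperparameter here (LEOSketch, prior_dist, the encoder, the random-drop schedule) is an illustrative assumption. The idea shown is a softmax router over negative latent-space distances to per-expert centroids, a null expert pinned at a fixed "prior" distance so it dominates when all real experts are far away, and a training step that drops one expert so the router must cover its datapoints with the rest.

```python
# Illustrative sketch of the LEO mechanism as described in the abstract;
# NOT the authors' implementation. All names and hyperparameters are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LEOSketch(nn.Module):
    def __init__(self, in_dim, latent_dim, n_experts, out_dim, prior_dist=3.0):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.Tanh())
        self.experts = nn.ModuleList(
            nn.Linear(latent_dim, out_dim) for _ in range(n_experts)
        )
        # One learnable centroid per expert in latent space.
        self.centroids = nn.Parameter(torch.randn(n_experts, latent_dim))
        # Null expert: a constant prior prediction, assigned a fixed
        # "distance" so it wins whenever every real expert is far away.
        self.null_pred = nn.Parameter(torch.zeros(out_dim))
        self.prior_dist = prior_dist

    def forward(self, x, drop_expert=None):
        z = self.encoder(x)                                    # (B, D)
        dist = torch.cdist(z, self.centroids)                  # (B, E)
        if drop_expert is not None:
            # Temporarily remove one expert: infinite distance -> zero weight.
            dist = dist.clone()
            dist[:, drop_expert] = float("inf")
        # Append the null expert at a fixed prior distance.
        null_col = torch.full((x.shape[0], 1), self.prior_dist, device=x.device)
        weights = F.softmax(-torch.cat([dist, null_col], dim=1), dim=1)
        preds = torch.stack([e(z) for e in self.experts], dim=1)  # (B, E, O)
        preds = torch.cat(
            [preds, self.null_pred.expand(x.shape[0], 1, -1)], dim=1
        )
        return (weights.unsqueeze(-1) * preds).sum(dim=1), weights

def leo_training_step(model, x, y, n_experts):
    # "Intrinsic cross-validation": drop one expert at random and train the
    # router to cover its datapoints with the remaining experts (assumed
    # random schedule; the paper may iterate over experts differently).
    k = torch.randint(n_experts, (1,)).item()
    y_hat, _ = model(x, drop_expert=k)
    return F.mse_loss(y_hat, y)

# Usage (hypothetical shapes, regression setting):
model = LEOSketch(in_dim=8, latent_dim=16, n_experts=4, out_dim=1)
x, y = torch.randn(32, 8), torch.randn(32, 1)
loss = leo_training_step(model, x, y, n_experts=4)
loss.backward()
```

Under this reading, the router's weight on the null expert is a natural epistemic-uncertainty signal: far from every centroid, all real-expert logits fall below the fixed prior logit and the output collapses to the prior prediction.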
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Submission Number: 13842