EviMix: Evidential Deep Learning with Latent-Space Mixing for Uncertainty Quantification and OOD Detection

19 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: uncertainty quantification, evidential deep learning, feature space mixing, reliability, robustness, supervised learning, representation learning, predictive uncertainty, out-of-distribution detection, aleatoric uncertainty, epistemic uncertainty, data augmentation, stress testing, calibration, trustworthiness, autonomous systems
TL;DR: We introduce EviMix, a feature-space augmentation strategy that disentangles aleatoric and epistemic uncertainty in Evidential Deep Learning, improving calibration, robustness, and OOD detection beyond pixel-space methods.
Abstract: Reliable uncertainty quantification (UQ) is essential for deploying deep neural networks in safety-critical domains such as autonomous driving and medical imaging. Evidential Deep Learning (EDL) provides a computationally efficient framework for estimating epistemic and aleatoric uncertainty through Dirichlet evidence assignment, enabling real-time uncertainty estimation. However, recent studies raise concerns about robustness, including conflation of uncertainty types, persistent epistemic uncertainty under abundant data, and sensitivity to training dynamics. The interaction between EDL and modern augmentation strategies remains poorly understood. This work introduces three contributions: (1) analysis of how pixel-space mix-based augmentations affect EDL uncertainty and OOD metrics, (2) EviMix, a feature-space evidence-mixing framework performing cross-depth latent interpolation with layer-wise severities that decay with depth (early layers receive stronger low-level perturbations, while late layers perform milder semantic interpolation), and (3) coupling these severities into the EDL objective to regulate aleatoric and epistemic evidence during training. Experiments demonstrate that EviMix improves OOD detection, enhances functional specialization among uncertainty components, and yields stronger calibration gains relative to pixel-space and single-layer feature mixing baselines.
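To make the abstract's two core ingredients concrete, the sketch below illustrates (a) depth-decaying mixing severities applied to latent features and (b) a standard EDL-style decomposition of Dirichlet evidence into epistemic and aleatoric proxies. This is a minimal illustration under stated assumptions, not the paper's implementation: the linear decay schedule, the Beta-sampled mixing coefficient, and the function names (`depth_severity`, `latent_mix`, `edl_uncertainties`) are all hypothetical choices made here for clarity; the paper's exact schedule, sampling distribution, and loss coupling are not specified in the abstract.

```python
import numpy as np


def depth_severity(layer_idx: int, num_layers: int, max_sev: float = 0.4) -> float:
    """Mixing severity that decays with depth: strong low-level perturbation
    at early layers, mild semantic interpolation at late layers.
    (Illustrative linear schedule; the paper's actual decay is unspecified.)"""
    frac = layer_idx / max(num_layers - 1, 1)
    return max_sev * (1.0 - frac)


def latent_mix(h_a: np.ndarray, h_b: np.ndarray, severity: float, rng=None):
    """Interpolate two latent feature tensors; severity bounds how far the
    mixed feature can drift from h_a (lam close to 1 means a mild mix)."""
    rng = rng or np.random.default_rng(0)
    lam = 1.0 - severity * rng.beta(1.0, 1.0)  # assumed Beta(1,1) sampling
    return lam * h_a + (1.0 - lam) * h_b, lam


def edl_uncertainties(evidence: np.ndarray):
    """Common EDL quantities from non-negative class evidence:
    Dirichlet vacuity K/S as an epistemic proxy, and the entropy of the
    mean prediction as a simple aleatoric proxy."""
    alpha = evidence + 1.0          # Dirichlet concentration parameters
    S = alpha.sum()                 # total Dirichlet strength
    K = alpha.size                  # number of classes
    epistemic = K / S               # vacuity: high when evidence is scarce
    p = alpha / S                   # expected class probabilities
    aleatoric = float(-(p * np.log(p + 1e-12)).sum())
    return float(epistemic), aleatoric
```

With zero evidence the vacuity is maximal (1.0) and the mean prediction is uniform; concentrating evidence on one class drives the epistemic proxy down, which is the qualitative behavior the abstract's evidence-regulation objective relies on.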
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 17733