Robust Uncertainty-Aware Learning via Boltzmann-weighted NLL

ICLR 2026 Conference Submission 7389 Authors

16 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: robust estimation, uncertainty estimation
TL;DR: We introduce Robust-NLL for modeling uncertainty in the presence of outliers.
Abstract: Uncertainty estimation is critical for deploying deep learning models in high-stakes applications such as autonomy and decision-making. While prior works on data uncertainty modeling estimate aleatoric uncertainty by minimizing the negative log-likelihood (NLL) loss, they often fail in the presence of outliers. To address this limitation, we introduce Robust-NLL, a drop-in replacement for vanilla NLL that down-weights noisy or adversarial samples. Robust-NLL learns robust uncertainty estimates in neural networks through a Boltzmann-weighted NLL loss that requires no architectural changes, additional parameters, or iterative procedures; it is a plug-and-play loss function that preserves full differentiability and mini-batch compatibility. We evaluate our approach on synthetic regression tasks and real-world visual localization benchmarks with injected outliers. Experimental results demonstrate that simply replacing NLL with Robust-NLL consistently improves both prediction accuracy and the reliability of uncertainty estimates, achieving substantial performance gains across diverse tasks and architectures.
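The abstract specifies only the ingredients (a heteroscedastic NLL loss combined with Boltzmann weighting over the mini-batch), not the exact formulation. The sketch below is one plausible reading under those assumptions: per-sample Gaussian NLL terms are reweighted by normalized Boltzmann factors `w_i ∝ exp(-nll_i / T)`, so high-loss samples (likely outliers) contribute exponentially less. The function names, the Gaussian likelihood choice, and the `temperature` parameter are illustrative, not taken from the paper.

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    # Per-sample NLL of a heteroscedastic Gaussian (constant term dropped).
    return 0.5 * (log_var + (y - mu) ** 2 / np.exp(log_var))

def robust_nll(y, mu, log_var, temperature=1.0):
    """Hypothetical Boltzmann-weighted NLL sketch (not the authors' code).

    Each sample's NLL is reweighted by a softmax over -nll / T computed
    within the mini-batch, so outliers with large NLL are down-weighted.
    """
    nll = gaussian_nll(y, mu, log_var)
    # Boltzmann weights; subtract the max logit for numerical stability.
    logits = -nll / temperature
    w = np.exp(logits - logits.max())
    w = w / w.sum()
    return np.sum(w * nll)
```

On a toy batch with one gross outlier, the weighted loss stays near the inlier NLL instead of being dominated by the outlier; in an actual training loop one would also have to decide whether gradients flow through the weights, a detail the abstract leaves open.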
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Submission Number: 7389