ReconstructionNet: A Neural Network Architecture for Uncertainty-Aware Predictions with Explainability
Keywords: Uncertainty Estimation, Uncertainty Attribution, Neural Networks
TL;DR: This paper introduces ReconstructionNet, an uncertainty-aware model that classifies, estimates uncertainty, and provides feature-level explanations in a single forward pass.
Abstract: Uncertainty estimation quantifies a model’s confidence in its predictions, fostering calibrated trust among users. Existing approaches face two key limitations: (1) most capture only a single type of uncertainty, and (2) they incur additional training or inference overhead. We propose ReconstructionNet, a neural network that addresses both limitations by modeling the joint input–output distribution with class-specific autoencoders. This design enables simultaneous prediction and estimation of both aleatoric and distributional uncertainty in a single forward pass. Across five real-world datasets, ReconstructionNet matches or surpasses baseline classifiers while producing uncertainty estimates that are more reliable, more selective, more robust to false negatives, and effective for out-of-distribution detection. Furthermore, ReconstructionNet’s architecture naturally supports uncertainty explanations, revealing how individual features contribute to prediction uncertainty without extra computation. Experiments demonstrate that these explanations highlight misclassified regions consistent with human intuition. Together, these contributions establish ReconstructionNet as a unified framework for trustworthy and interpretable artificial intelligence.
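Based on the abstract alone, a minimal PyTorch sketch of how class-specific autoencoders could yield a prediction, both uncertainty types, and feature-level attributions in one pass might look as follows. The class names (`ClassAutoencoder`, `ReconstructionNetSketch`), the network sizes, and the exact scoring rules (softmax over negative reconstruction errors, entropy as the aleatoric proxy, minimum reconstruction error as the distributional proxy) are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch, assuming one autoencoder per class and reconstruction
# error as the shared signal for prediction and uncertainty. All names,
# dimensions, and scoring rules below are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassAutoencoder(nn.Module):
    """A small autoencoder intended to be trained on inputs of one class."""
    def __init__(self, in_dim, hidden_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, in_dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

class ReconstructionNetSketch(nn.Module):
    def __init__(self, in_dim, num_classes, hidden_dim=32):
        super().__init__()
        self.autoencoders = nn.ModuleList(
            [ClassAutoencoder(in_dim, hidden_dim) for _ in range(num_classes)]
        )

    def forward(self, x):
        # Per-class, per-feature squared reconstruction errors: (B, C, D).
        errors = torch.stack(
            [(ae(x) - x) ** 2 for ae in self.autoencoders], dim=1
        )
        per_class = errors.mean(dim=-1)        # class-wise error, shape (B, C)
        probs = F.softmax(-per_class, dim=-1)  # predictive distribution
        pred = probs.argmax(dim=-1)            # class with best reconstruction
        # Aleatoric proxy: entropy of the class-wise reconstruction scores
        # (high when several class autoencoders reconstruct x equally well).
        aleatoric = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
        # Distributional proxy: even the best autoencoder reconstructs
        # out-of-distribution inputs poorly, so the minimum error grows.
        distributional = per_class.min(dim=-1).values
        # Feature-level attribution: per-feature error under the predicted
        # class, indicating which features drive the uncertainty.
        attribution = errors[torch.arange(x.size(0)), pred]  # (B, D)
        return pred, probs, aleatoric, distributional, attribution

# Usage example on random data (shapes only; not trained):
model = ReconstructionNetSketch(in_dim=20, num_classes=5)
x = torch.randn(8, 20)
pred, probs, aleatoric, distributional, attribution = model(x)
```

Under these assumptions, a single forward pass through all autoencoders produces every output at once, which is consistent with the abstract's claim that prediction, both uncertainty estimates, and feature-level explanations incur no extra computation beyond the base model.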
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 7261