Abstract: Bayesian deep learning, with its emphasis on uncertainty quantification, has received growing interest for building reliable models. Nonetheless, interpreting and explaining the origins of that uncertainty remains a significant challenge. In this paper, we present semantic uncertainty attribution as a tool for pinpointing the primary factors contributing to uncertainty. This approach allows us to explain why a particular image carries high uncertainty, thereby making our models more interpretable. Specifically, we utilize a variational autoencoder to disentangle semantic factors within the latent space and link the uncertainty to the corresponding factors for an explanation. The proposed technique also enables explainable out-of-distribution (OOD) detection: we can not only identify OOD samples via their uncertainty, but also provide reasoning rooted in semantic concepts.
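The attribution idea sketched in the abstract can be illustrated with a toy example: encode an input into per-factor latent means and variances (as a VAE encoder would), then report each disentangled factor's share of the total posterior variance. The encoder weights, the linear form of the encoder, and the variance-share attribution rule below are all illustrative assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pretrained" VAE encoder weights (random here, purely
# for illustration): 4 disentangled latent factors, 16-dim input.
W_mu = rng.normal(size=(4, 16))
W_logvar = rng.normal(size=(4, 16)) * 0.1

def encode(x):
    """Map an input to per-factor latent mean and variance."""
    mu = W_mu @ x
    var = np.exp(W_logvar @ x)
    return mu, var

def attribute_uncertainty(x):
    """Attribute total latent uncertainty across the factors.

    Each factor's contribution is its share of the total posterior
    variance; the factor with the largest share 'explains' why this
    input is uncertain.
    """
    _, var = encode(x)
    return var / var.sum()

x = rng.normal(size=16)
shares = attribute_uncertainty(x)
top_factor = int(np.argmax(shares))
print("dominant factor:", top_factor)
```

With a disentangled latent space, `top_factor` maps to a human-interpretable concept (e.g. pose or lighting), which is what makes the attribution a semantic explanation rather than a pixel-level one.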