Semantic Attribution For Explainable Uncertainty Quantification

22 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Primary Area: probabilistic methods (Bayesian methods, variational inference, sampling, UQ, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Uncertainty Quantification, Model Explainability
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Bayesian deep learning, with an emphasis on uncertainty quantification, is receiving growing interest for building reliable models. Nonetheless, interpreting and explaining the origins of uncertainty remains a significant challenge. In this paper, we present semantic uncertainty attribution as a tool for pinpointing the primary factors contributing to a model's uncertainty. This approach allows us to explain why a particular image carries high uncertainty, thereby making our models more interpretable. Specifically, we utilize a variational autoencoder to disentangle semantic factors within the latent space and link the uncertainty to the corresponding factors as an explanation. The proposed technique also enhances explainable out-of-distribution (OOD) detection: we can not only identify OOD samples via their uncertainty, but also explain why a sample is OOD in terms of a semantic concept.
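To make the idea in the abstract concrete, below is a minimal PyTorch sketch of one plausible way to attribute a scalar uncertainty score to individual VAE latent dimensions via gradients. The names `encoder`, `decoder`, and `mc_predict` are hypothetical stand-ins (the paper's actual architecture and attribution rule may differ), and gradient-magnitude attribution is an assumption, not necessarily the authors' method.

```python
import torch

# Hypothetical pretrained components (not from the paper's code):
#   encoder(x) -> (mu, logvar): a disentangling VAE encoder whose latent
#       dimensions align with semantic factors
#   decoder(z) -> x_hat: the matching VAE decoder
#   mc_predict(x, n_samples) -> (n_samples, n_classes): stochastic forward
#       passes of a Bayesian classifier (e.g., MC dropout)

def predictive_entropy(probs: torch.Tensor) -> torch.Tensor:
    """Entropy of the mean predictive distribution: a common scalar
    uncertainty score U(x)."""
    mean_p = probs.mean(dim=0)
    return -(mean_p * mean_p.clamp_min(1e-12).log()).sum()

def semantic_uncertainty_attribution(x, encoder, decoder, mc_predict,
                                     n_samples=32):
    """Attribute the uncertainty of input x to individual VAE latent
    dimensions via |dU/dz_i| (one plausible attribution rule)."""
    mu, _ = encoder(x)                  # use the posterior mean as z
    z = mu.detach().requires_grad_(True)
    x_hat = decoder(z)                  # route the input through the latents
    probs = mc_predict(x_hat, n_samples)
    u = predictive_entropy(probs)       # scalar uncertainty score
    u.backward()
    # Gradient magnitude ranks how strongly each disentangled factor
    # drives the uncertainty; the top factor serves as the explanation.
    return z.grad.abs().squeeze(0), u.item()
```

Under this sketch, an OOD sample would receive both a high score `u` and an attribution vector whose largest entries name the semantic factors responsible, matching the explainable OOD detection use case described above.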
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6366