Bridging the Gap Between Explainable AI and Uncertainty Quantification to Enhance Trustability

Published: 01 Jan 2021 · Last Modified: 13 May 2025 · CoRR 2021 · CC BY-SA 4.0
Abstract: After the tremendous advances of deep learning and other AI methods, attention is increasingly turning to further properties of modern approaches, such as interpretability and fairness, which are combined in frameworks like Responsible AI. Two research directions, namely Explainable AI and Uncertainty Quantification, are becoming more and more important, but have so far never been combined and jointly explored. In this paper, I show how both research areas offer potential for combination, why more research should be done in this direction, and how this would increase trustability in AI systems.