XC: Exploring Quantitative Use Cases for Explanations in 3D Object Detection

Published: 17 Oct 2021, Last Modified: 05 May 2023
XAI 4 Debugging Workshop @ NEURIPS 2021 Poster
Keywords: Explainable AI, XAI, 3D object detection, LiDAR data, false positive detection, Integrated Gradients
TL;DR: Using post-hoc explanations to help identify true positive vs. false positive detected objects from LiDAR data.
Abstract: Explainable AI (XAI) methods are frequently applied to obtain qualitative insights about deep models' predictions. However, such insights need to be interpreted by a human observer to be useful. In this paper, we aim to use explanations directly to make decisions without human observers. We adopt two gradient-based explanation methods, Integrated Gradients (IG) and backpropagation, for the task of 3D object detection. We then propose a set of quantitative measures, named Explanation Concentration (XC) scores, that can be used for downstream tasks. These scores quantify the concentration of attributions within the boundaries of detected objects. We evaluate the effectiveness of XC scores on the task of distinguishing true positive (TP) and false positive (FP) detected objects in the KITTI and Waymo datasets. The results demonstrate an improvement of more than 100% on both datasets compared to other heuristics such as random guessing and the number of LiDAR points in the bounding box, raising confidence in XC's potential for application in more use cases. Our results also indicate that computationally expensive XAI methods like IG may not be more valuable than simpler methods when used quantitatively.
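To make the idea concrete, below is a minimal sketch of what an XC-style score could look like, assuming XC is the fraction of total attribution magnitude that falls inside a detected bounding box. The function name, variable names, and normalization are illustrative assumptions, not the authors' released code; the paper's exact definitions and variants may differ.

```python
import numpy as np

def xc_score(attributions: np.ndarray, in_box_mask: np.ndarray, eps: float = 1e-12) -> float:
    """Hypothetical concentration of attribution mass inside a detected object's boundary.

    attributions: per-point (or per-pixel) attribution values, e.g. from
                  Integrated Gradients or plain backpropagated gradients.
    in_box_mask:  boolean mask, True where the input element lies inside the
                  detected bounding box.
    """
    mass = np.abs(attributions)          # use attribution magnitude as "evidence mass"
    inside = mass[in_box_mask].sum()     # evidence inside the detected box
    total = mass.sum()                   # evidence over the whole input
    return float(inside / (total + eps))

# Toy usage: a higher XC suggests the detector's evidence is concentrated on the
# object itself, which is the kind of signal the paper uses to separate TP from FP detections.
attr = np.random.randn(1000)             # attributions for 1000 LiDAR points (synthetic)
mask = np.zeros(1000, dtype=bool)
mask[:50] = True                         # pretend 50 points fall inside the detected box
print(f"XC = {xc_score(attr, mask):.3f}")
```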