Abstract: Explainable Artificial Intelligence (XAI) methods focus on helping human users better understand the decision-making of an AI agent. However, many modern XAI approaches are not actionable for end users, particularly those without prior AI or ML knowledge. In this paper, we generalize an XAI approach called Responsibility, which identifies the most responsible training instance for a particular model decision based on observing the model's training process. This instance can then be presented as an explanation: ``this is what the AI agent learned that led to that decision.'' We present experimental results across a number of domains and architectures, along with a user study. Our results demonstrate that Responsibility can help improve the performance of both human end users and secondary ML models.
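The abstract does not spell out how Responsibility tracks the training process, so the following is only a rough, hypothetical sketch of the general idea: observe training instance by instance and credit the instance whose update most influenced the decision on a given query. The perceptron learner, the flip-counting heuristic, and all names (`most_responsible_instance`, `x_query`, `counts`) are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def most_responsible_instance(X_train, y_train, x_query, epochs=10, lr=0.1):
    """Train a tiny perceptron one instance at a time and tally how often
    each training instance's update changes the model's decision on x_query.
    The instance with the highest tally is reported as most responsible.
    (Hypothetical illustration; not the paper's Responsibility algorithm.)"""
    w = np.zeros(X_train.shape[1])
    b = 0.0
    counts = np.zeros(len(X_train))  # per-instance responsibility tally

    def predict(x):
        return int(np.dot(w, x) + b > 0)

    for _ in range(epochs):
        for i, (x, y) in enumerate(zip(X_train, y_train)):
            before = predict(x_query)
            err = y - predict(x)            # perceptron error in {-1, 0, 1}
            w += lr * err * x               # single-instance weight update
            b += lr * err
            if predict(x_query) != before:  # decision on the query flipped
                counts[i] += 1              # credit this training instance

    return int(np.argmax(counts)), counts

# Usage on synthetic data:
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 0] > 0).astype(int)
idx, tallies = most_responsible_instance(X, y, x_query=X[0])
print(f"Most responsible training instance: #{idx}")
```

The returned instance can then be shown to the user as an example-based explanation, in the spirit of the abstract's ``this is what the AI agent learned that led to that decision.''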
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=w6F6YLaOXX
Changes Since Last Submission: The previous submission was desk rejected due to an incorrect font. We have corrected the font in this resubmission.
Assigned Action Editor: ~Simone_Scardapane1
Submission Number: 5159