Responsibility: A General Instance and Training Process-based Explainable AI Approach

TMLR Paper 5261 Authors

01 Jul 2025 (modified: 03 Sept 2025) · Rejected by TMLR · CC BY 4.0
Abstract: Explainable Artificial Intelligence (XAI) methods focus on helping human users better understand the decision-making of an AI agent. However, many modern XAI approaches are not actionable for end users, particularly those without prior AI or ML knowledge. In this paper, we formally define and extend an XAI approach called Responsibility, which identifies the training instance most responsible for a particular model decision by observing the model's training process. This instance can then be presented as an explanation: "this is what the AI agent learned that led to that decision." We present experimental results across a number of domains and architectures, along with the results of a user study. Our results demonstrate that Responsibility can help improve the performance of both human end users and secondary ML models.
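The abstract sketches the core mechanism: observe the training process and credit the training instance whose updates most influenced a given decision. As a rough illustration only (the paper's exact responsibility criterion is not stated here), the sketch below tallies, for a logistic-regression model trained with per-example SGD, how often each training instance's update flips the model's prediction on a query point. The function name `most_responsible_instance` and the flip-count criterion are assumptions made for this example, not the paper's definition.

```python
# Hypothetical sketch: per-example SGD on logistic regression, crediting a
# training instance with "responsibility" each time the step it induces
# flips the model's prediction on the query point.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def most_responsible_instance(X, y, x_query, epochs=5, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    responsibility = np.zeros(len(X))
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            pred_before = sigmoid(x_query @ w) >= 0.5
            # Single-instance SGD step: gradient of log loss for example i.
            grad = (sigmoid(X[i] @ w) - y[i]) * X[i]
            w -= lr * grad
            pred_after = sigmoid(x_query @ w) >= 0.5
            if pred_before != pred_after:
                responsibility[i] += 1.0  # instance i changed the decision
    return int(np.argmax(responsibility)), w

# Toy usage: which training point is most responsible for the query's label?
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, 0, 0])
idx, w = most_responsible_instance(X, y, x_query=np.array([1.5, 1.5]))
print("most responsible training instance:", idx)
```

The returned index could then be surfaced to the end user as the "this is what the AI agent learned" explanation described in the abstract.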
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=E1FHRlWvZQ
Changes Since Last Submission: As per the action editor's suggestion on the previous submission, which consisted of 16 pages of main content, we are resubmitting this work as a regular-length submission consisting of 12 pages of main content.
Assigned Action Editor: ~Simone_Scardapane1
Submission Number: 5261