Keywords: Explainable Artificial Intelligence, Counterfactuals, Algorithmic Recourse, Time series, Evaluation
TL;DR: TraCE leverages counterfactual explanations to assess progress in realised trajectories.
Abstract: Counterfactual explanations, and their associated algorithmic recourse, are typically leveraged to understand and explain predictions of individual instances produced by a black-box classifier. In this paper, we propose to extend the use of counterfactuals to evaluate progress in sequential decision-making tasks. To this end, we introduce a model-agnostic, modular framework, TraCE (Trajectory Counterfactual Explanation) scores, to distill and condense progress in highly complex scenarios into a single value. We demonstrate TraCE's utility by showcasing its main properties in two case studies spanning healthcare and climate change.
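The abstract does not specify how TraCE condenses progress into a single value; the linked repository contains the actual method. As a purely illustrative sketch of the general idea, one plausible way to score a realised trajectory against counterfactual targets is the cosine similarity between each observed step and the direction from the current state toward its counterfactual. All function names and the aggregation below are assumptions, not the paper's definition.

```python
# Hypothetical sketch, NOT the TraCE formula from the paper: score each
# realised step by its alignment with the direction toward a counterfactual.
import math


def step_score(current, nxt, counterfactual):
    """Cosine similarity between the realised step and the direction
    from `current` toward `counterfactual`.

    Returns a value in [-1, 1]: +1 if the step moves directly toward the
    counterfactual, -1 if directly away, 0 if orthogonal or degenerate.
    """
    step = [b - a for a, b in zip(current, nxt)]
    target = [c - a for a, c in zip(current, counterfactual)]
    norm_s = math.sqrt(sum(s * s for s in step))
    norm_t = math.sqrt(sum(t * t for t in target))
    if norm_s == 0.0 or norm_t == 0.0:
        return 0.0
    dot = sum(s * t for s, t in zip(step, target))
    return dot / (norm_s * norm_t)


def trajectory_score(trajectory, counterfactuals):
    """Condense a realised trajectory into a single value by averaging
    per-step scores (one assumed aggregation choice among many)."""
    scores = [
        step_score(x, x_next, cf)
        for x, x_next, cf in zip(trajectory, trajectory[1:], counterfactuals)
    ]
    return sum(scores) / len(scores)


# A trajectory moving straight toward a fixed counterfactual scores 1.0.
traj = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]
cfs = [[2.0, 0.0], [2.0, 0.0]]
print(trajectory_score(traj, cfs))
```

A step directly away from the counterfactual would score -1, so the average naturally rewards trajectories that consistently move toward the recourse target.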
Git: https://github.com/jeffnclark/TraCE
Submission Number: 58