The Utility of “Even if” Semi-Factual Explanation to Optimize Positive Outcomes

29 Sept 2023, OpenReview Archive Direct Upload
Abstract: When users receive either a positive or negative outcome from an automated system, eXplainable AI (XAI) has almost exclusively focused on how to mutate negative outcomes into positive ones by crossing a decision boundary using counterfactuals (e.g., "If you earn 2k more, we will accept your loan application"). In this work, we instead focus on positive outcomes, and take the novel step of using XAI to optimize them (e.g., "Even if you wish to halve your down-payment, we will still accept your loan application"). Explanations such as these, which employ "even if" reasoning and do not cross a decision boundary, are known as semi-factuals. To instantiate semi-factuals in this context, we introduce the concept of "gain" (i.e., how much a user stands to benefit from the proposed explanation), and consider the first causal formalization of semi-factuals. Tests on benchmark datasets show that our algorithms are better at maximizing gain compared to prior work, and that causality is especially important in the process. Most importantly, however, a user study supports our main hypothesis by showing that people clearly find semi-factual explanations more useful than counterfactuals when they receive the positive outcome of a loan acceptance.
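To make the core idea concrete, the following is a minimal toy sketch (not the paper's algorithm) of a semi-factual search on a hypothetical linear loan-approval model: it finds the largest reduction of one feature that still keeps the positive outcome, and reports that reduction as the user's "gain". All weights, features, and step sizes here are illustrative assumptions.

```python
import numpy as np

# Hypothetical linear "loan approval" model: approve if w.x + b >= 0.
# Features: [income, down_payment]. Weights chosen purely for illustration.
w = np.array([0.8, 0.5])
b = -1.93

def approved(x):
    """Positive outcome iff the score is non-negative."""
    return float(w @ x + b) >= 0.0

def semi_factual_downpayment(x, step=0.05):
    """Greedily lower the down-payment feature (index 1) as far as possible
    WITHOUT crossing the decision boundary -- an "even if" explanation.
    The 'gain' is how much down-payment the user could save."""
    x_sf = x.copy()
    while x_sf[1] - step >= 0.0 and approved(np.array([x_sf[0], x_sf[1] - step])):
        x_sf[1] -= step
    gain = x[1] - x_sf[1]
    return x_sf, gain

x = np.array([2.0, 1.0])              # an applicant who is already approved
x_sf, gain = semi_factual_downpayment(x)
# Reads as: "Even if you lowered your down-payment by `gain`,
# we would still accept your loan application."
```

The key contrast with a counterfactual is the stopping condition: a counterfactual search would continue until the prediction flips, whereas the semi-factual search stops just before the boundary, so the positive outcome is preserved throughout.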