Generating Counterfactual Explanations Using Cardinality Constraints

Published: 19 Mar 2024, Last Modified: 10 Apr 2024, Tiny Papers @ ICLR 2024, CC BY 4.0
Keywords: explainability, machine learning, genetic algorithms
TL;DR: We propose to generate interpretable counterfactuals as model-agnostic explanations for machine learning models.
Abstract: Providing explanations of how machine learning algorithms work and/or make particular predictions is one of the main tools for improving their trustworthiness, fairness and robustness. Among the most intuitive types of explanations are counterfactuals: examples that differ from a given point in the prediction target and some set of features, showing which features need to change in the original example to flip its prediction. However, such counterfactuals can differ from the original example in many features, which makes them difficult to interpret. In this paper, we propose to explicitly add a cardinality constraint to counterfactual generation, limiting how many features can differ from the original example and thus producing more interpretable and easily understandable counterfactuals.
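The abstract describes constraining how many features a counterfactual may change. The following is a minimal illustrative sketch of that idea, not the paper's method: it uses a plain random search over a scikit-learn classifier instead of the genetic algorithm suggested by the keywords, and the function name, the `max_changes` parameter, and the uniform resampling of feature values are all assumptions made for illustration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier


def cardinality_constrained_counterfactual(model, x, target_class, X_train,
                                            max_changes=2, n_iter=5000, seed=0):
    """Random search for a counterfactual of `x` that flips the model's
    prediction to `target_class` while changing at most `max_changes` features
    (the cardinality constraint). Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    n_features = x.shape[0]
    lo, hi = X_train.min(axis=0), X_train.max(axis=0)

    best, best_dist = None, np.inf
    for _ in range(n_iter):
        # Enforce the cardinality constraint up front: perturb at most
        # `max_changes` randomly chosen features of the original example.
        k = rng.integers(1, max_changes + 1)
        idx = rng.choice(n_features, size=k, replace=False)
        candidate = x.copy()
        # Resample the chosen features uniformly within the training range.
        candidate[idx] = rng.uniform(lo[idx], hi[idx])

        # Keep the closest (L1) candidate whose prediction is flipped.
        if model.predict(candidate.reshape(1, -1))[0] == target_class:
            dist = np.abs(candidate - x).sum()
            if dist < best_dist:
                best, best_dist = candidate, dist
    return best


if __name__ == "__main__":
    X, y = make_classification(n_samples=500, n_features=8, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    x = X[0]
    target = 1 - model.predict(x.reshape(1, -1))[0]  # binary task: flip the label
    cf = cardinality_constrained_counterfactual(model, x, target, X, max_changes=2)

    if cf is None:
        print("No counterfactual found within the search budget.")
    else:
        changed = np.flatnonzero(cf != x)
        print(f"Prediction flipped by changing features {changed.tolist()}")

Building the constraint into candidate generation (only ever perturbing at most `max_changes` features) guarantees every returned counterfactual satisfies the cardinality bound, rather than penalizing violations after the fact.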
Submission Number: 85