DiPACE: Diverse, Plausible and Actionable Counterfactual Explanations

Jacob Sanderson, Hua Mao, Wai Woo

Published: 25 Feb 2025 (last modified: 11 Nov 2025). In: Proceedings of the 17th International Conference on Agents and Artificial Intelligence. License: CC BY-SA 4.0.
Abstract: As Artificial Intelligence (AI) becomes integral to high-stakes applications, the need for interpretable and trustworthy decision-making tools is increasingly essential. Counterfactual Explanations (CFX) offer an effective approach, allowing users to explore “what if?” scenarios that highlight actionable changes for achieving more desirable outcomes. Existing CFX methods often prioritize a subset of qualities, such as diversity, plausibility, proximity, or sparsity, but few balance all four in a flexible way. This work introduces DiPACE, a practical CFX framework that balances these qualities while allowing users to adjust parameters according to specific application needs. DiPACE also incorporates a penalty-based adjustment to refine results toward user-defined thresholds. Experimental results on real-world datasets demonstrate that DiPACE consistently outperforms existing methods (Wachter, DiCE, and CARE) in generating diverse, realistic, and actionable CFs, with strong performance across all four characteristics. The findings confirm DiPACE’s utility as a user-adaptable, interpretable CFX tool suitable for diverse AI applications, with a robust balance of qualities that enhances both feasibility and trustworthiness in decision-making contexts.
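The abstract describes an objective that trades off proximity, sparsity, and diversity with user-adjustable weights, plus a penalty that pushes results toward user-defined thresholds. The following is a minimal illustrative sketch of such a weighted, penalty-adjusted counterfactual loss over a candidate set; it is a generic construction, not DiPACE's actual formulation, and all weight/threshold names here are hypothetical.

```python
import numpy as np

def cfx_set_loss(x, cfs, predict, target, weights, thresholds):
    """Illustrative weighted counterfactual objective (NOT DiPACE's
    published loss). Lower is better. Terms:
      - prediction loss: each counterfactual should reach the target output
      - proximity: counterfactuals should stay close to the original x
      - sparsity: counterfactuals should change few features
      - diversity: counterfactuals should differ from one another (rewarded)
      - penalty: hinge term activated when diversity falls below a
        user-defined threshold (a generic "penalty-based adjustment")
    """
    pred_loss = sum((predict(c) - target) ** 2 for c in cfs)
    proximity = sum(np.linalg.norm(c - x) for c in cfs)
    sparsity = sum(int(np.count_nonzero(np.abs(c - x) > 1e-6)) for c in cfs)

    # Mean pairwise distance between counterfactuals.
    pairs = [(i, j) for i in range(len(cfs)) for j in range(i + 1, len(cfs))]
    diversity = (
        sum(np.linalg.norm(cfs[i] - cfs[j]) for i, j in pairs) / len(pairs)
        if pairs else 0.0
    )

    # Penalize only the shortfall below the user's diversity threshold.
    penalty = max(0.0, thresholds["diversity"] - diversity)

    return (pred_loss
            + weights["proximity"] * proximity
            + weights["sparsity"] * sparsity
            - weights["diversity"] * diversity
            + weights["penalty"] * penalty)
```

Because each term carries its own weight, a practitioner can, for example, raise `weights["sparsity"]` in settings where users can realistically change only one or two features, or tighten `thresholds["diversity"]` when several distinct recourse options are required.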