Abstract: As Artificial Intelligence (AI) becomes integral to high-stakes applications, interpretable and trustworthy decision-making tools are increasingly essential. Counterfactual Explanations (CFX) offer an effective approach, allowing users to explore “what if?” scenarios that highlight actionable changes for achieving more desirable outcomes. Existing CFX methods often prioritize a subset of qualities, such as diversity, plausibility, proximity, or sparsity, but few balance all four in a flexible way. This work introduces DiPACE, a practical CFX framework that balances these qualities while allowing users to adjust parameters to the needs of a specific application. DiPACE also incorporates a penalty-based adjustment that refines results toward user-defined thresholds. Experimental results on real-world datasets demonstrate that DiPACE consistently outperforms the existing methods Wachter, DiCE, and CARE in producing diverse, realistic, and actionable counterfactuals.