On the Hardness of Computing Counterfactual and Semi-factual Explanations in XAI

TMLR Paper 4998 Authors

30 May 2025 (modified: 30 May 2025) · Under review for TMLR · CC BY 4.0
Abstract: Providing clear explanations for the decisions of machine learning models is essential if these models are to be deployed in critical applications. Counterfactual and semi-factual explanations have emerged as two mechanisms for giving users insight into model outputs. We survey the computational complexity results in the literature on generating these explanations, finding that in many cases generation is computationally hard. We further contribute our own inapproximability results, showing that not only are explanations often hard to generate, but under certain assumptions they are also hard to approximate. We discuss the implications of these complexity results for the XAI community and for policymakers seeking to regulate explanations in AI.
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Taylor_W._Killian1
Submission Number: 4998