Diffusion-Guided Counterfactual Generation for Model Explainability

Published: 27 Oct 2023, Last Modified: 01 Dec 2023, NeurIPS XAIA 2023
Abstract: Generating counterfactual explanations is one of the most effective approaches for uncovering the inner workings of black-box neural network models and building user trust. While remarkable strides have been made in generative modeling using diffusion models in domains like vision, their utility for generating counterfactual explanations in structured modalities remains unexplored. In this paper, we introduce the Structured Counterfactual Diffuser (SCD), the first plug-and-play framework leveraging diffusion for generating counterfactual explanations in structured data. SCD learns the underlying data distribution via a diffusion model, which is then guided at test time to generate counterfactuals for an arbitrary black-box model, input, and desired prediction. Our experiments show that our counterfactuals not only exhibit higher plausibility than the existing state of the art but also significantly better proximity and diversity.
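The abstract's recipe — learn the data distribution with a diffusion model, then steer sampling at test time toward a desired prediction for an arbitrary black-box model — can be illustrated with a toy sketch. This is not the authors' SCD code: the denoiser below is a hand-written stand-in for a learned diffusion model, the classifier is a hypothetical opaque scorer queried only through evaluations (guidance uses finite differences, so no gradients of the black box are assumed), and the step sizes and the proximity penalty are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(x, t, prior_mean):
    """Toy stand-in for a learned diffusion denoiser: pulls x toward the
    data prior (plausibility), with noise that shrinks as t -> 0."""
    return x + 0.1 * (prior_mean - x) + 0.1 * t * rng.normal(size=x.shape)

def blackbox_score(x, target):
    """Hypothetical opaque model: a probability-like score for the target
    class. Only evaluations are used downstream, never its gradients."""
    p = 1.0 / (1.0 + np.exp(-(x @ np.array([1.0, -1.0]))))
    return p if target == 1 else 1.0 - p

def fd_grad(f, x, eps=1e-3):
    """Finite-difference gradient, so guidance needs only black-box queries."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def generate_counterfactual(x_input, target, prior_mean, steps=50, lam=5.0):
    """Guided reverse diffusion: each denoising step is nudged toward the
    desired prediction while staying close to the original input."""
    x = x_input + rng.normal(size=x_input.shape)        # noised starting point
    for t in np.linspace(1.0, 0.0, steps):
        x = denoise_step(x, t, prior_mean)              # plausibility (prior)
        guide = fd_grad(lambda z: blackbox_score(z, target), x)
        x = x + lam * 0.1 * guide                       # steer toward target
        x = x - 0.05 * (x - x_input)                    # proximity to input
    return x

prior_mean = np.array([0.0, 0.0])
x0 = np.array([-1.0, 1.0])                              # scored low for class 1
cf = generate_counterfactual(x0, target=1, prior_mean=prior_mean)
```

Because every component of the guidance is computed from black-box evaluations alone, the same loop can wrap any classifier, which mirrors the plug-and-play property the paper claims; diversity would come from repeating the procedure with different noise seeds.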
Submission Track: Full Paper Track
Application Domain: Natural Language Processing
Survey Question 1: Our work focuses on generating plausible counterfactual explanations for black-box models.
Survey Question 2: It is hard to interpret the behaviour of black-box models. Hence, our work focuses on generating plausible counterfactual explanations to understand the behaviour of such models.
Survey Question 3: In this work, we generate counterfactual explanations to achieve explainability.
Submission Number: 50