Abstract: Counterfactual Explanation (CE) methods have gained traction as a means of providing recourse to users of AI systems. While CE has been widely explored in domains such as medical imaging and self-driving cars, Graph Counterfactual Explanation (GCE) methods have received less attention. A GCE explainer generates a new graph that is similar to the original but receives a different outcome from the underlying prediction model. Notably, generative machine learning has achieved remarkable success in tasks such as image style transfer and natural language processing. In this study, we thoroughly examine the capabilities of generative GCE methods. Specifically, we analyse G-CounteRGAN, a graph-specific adaptation of the CounteRGAN method, and compare its performance against other generative explainers and a selection of search- and heuristic-based explainers from the literature. In contrast to heuristic-based methods, we find that generative approaches are particularly useful for producing multiple counterfactuals by sampling the latent space learned from the training data.
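The abstract's final claim, that generative explainers can yield multiple counterfactuals by sampling a learned latent space, can be illustrated with a minimal sketch. This is not the G-CounteRGAN method itself: the `decode` and `predict` functions below are hypothetical toy stand-ins for a trained graph decoder and the prediction model, used only to show the sample-decode-filter loop.

```python
import numpy as np

rng = np.random.default_rng(0)


def decode(z):
    """Toy stand-in for a learned graph decoder: maps a latent vector
    to a symmetric adjacency matrix by thresholding an outer product."""
    adj = (np.outer(z, z) > 0.5).astype(int)
    np.fill_diagonal(adj, 0)
    return adj


def predict(adj):
    """Toy stand-in for the underlying prediction model:
    the class depends only on the number of edges."""
    return int(adj.sum() > 6)


def sample_counterfactuals(z_orig, n_samples=200, sigma=0.5):
    """Sample the latent neighborhood of z_orig, decode each sample,
    and keep graphs whose predicted class differs from the original's."""
    y_orig = predict(decode(z_orig))
    counterfactuals = []
    for _ in range(n_samples):
        z = z_orig + sigma * rng.standard_normal(z_orig.shape)
        graph = decode(z)
        if predict(graph) != y_orig:
            counterfactuals.append(graph)
    return y_orig, counterfactuals


y, cfs = sample_counterfactuals(np.array([0.9, 0.8, 0.1, 0.2]))
print(f"original class: {y}, counterfactuals found: {len(cfs)}")
```

Because every kept graph is checked against the model, each returned candidate flips the prediction by construction; a search- or heuristic-based explainer would instead have to re-run its search to obtain each additional counterfactual.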