Abstract: Temporal Graph Neural Networks (TGNNs) are widely used to model dynamic systems in which relationships and features evolve over time. Although TGNNs demonstrate strong predictive capabilities in these domains, their complex architectures pose significant challenges for explainability. Counterfactual explanation methods offer a promising solution by illustrating how modifications to the input graph influence model predictions. Building on this idea, we present CoDy (Counterfactual Explainer for Dynamic Graphs), a model-agnostic, instance-level explanation approach that identifies counterfactual subgraphs to interpret TGNN predictions. CoDy employs a search algorithm that combines Monte Carlo Tree Search with heuristic selection policies, efficiently exploring the vast search space of potential explanatory subgraphs by leveraging spatial, temporal, and local event impact information. Extensive experiments against state-of-the-art factual and counterfactual baselines demonstrate CoDy's effectiveness, with a 16% improvement in AUFSC+ over the strongest baseline. Our code is available at: https://github.com/daniel-gomm/CoDy
Lay Summary: Many real-world systems, such as social media activity, traffic flows, or patient health conditions, change constantly over time. To understand and predict these changes, researchers use powerful models such as Temporal Graph Neural Networks (TGNNs). While TGNNs make highly accurate predictions, they are often complex and difficult to interpret, making it hard to understand why a particular decision was made. To address this, we introduce CoDy, a tool designed to explain the reasoning behind a model’s prediction by identifying the most important events that influenced the outcome. CoDy does this by exploring different “what-if” scenarios, such as removing key events, and observing how the model’s prediction changes in response. CoDy uses a smart search strategy that draws on spatial, temporal, and event-impact cues, allowing it to efficiently pinpoint the events that matter most. Our experiments show that CoDy provides clearer and more accurate explanations than other leading methods. This makes it a powerful tool for anyone seeking to understand the behavior of dynamic, evolving systems.
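The sketch below illustrates the counterfactual “what-if” idea described above: remove past events from the input history and check whether the model’s prediction for a target interaction flips. It is a minimal, hypothetical example, not the authors’ implementation; the `predict(events)` interface is assumed, and a simple greedy loop stands in for CoDy’s actual Monte Carlo Tree Search with spatial, temporal, and local event impact selection policies.

```python
from dataclasses import dataclass
from typing import Callable, List, Set


@dataclass(frozen=True)
class Event:
    """A single interaction in the temporal graph."""
    src: int    # source node
    dst: int    # destination node
    t: float    # timestamp


def find_counterfactual(
    predict: Callable[[List[Event]], float],  # hypothetical: P(target link) given a history
    history: List[Event],                     # past events preceding the target prediction
    threshold: float = 0.5,                   # model's decision boundary
    max_size: int = 10,                       # budget on the explanation size
) -> Set[Event]:
    """Greedily remove past events until the model's prediction flips.

    Assumes the target link is originally predicted positive; a counterfactual
    explanation is a small event set whose removal pushes the score below the
    threshold. This greedy loop is a stand-in for CoDy's MCTS-based search.
    """
    removed: Set[Event] = set()
    for _ in range(max_size):
        remaining = [e for e in history if e not in removed]
        # One "what-if" query per candidate: remove it (together with the events
        # already selected) and see how far the prediction drops.
        best_event, best_score = None, float("inf")
        for e in remaining:
            score = predict([x for x in remaining if x is not e])
            if score < best_score:
                best_event, best_score = e, score
        if best_event is None:
            break
        removed.add(best_event)
        if best_score < threshold:
            return removed  # prediction flipped: counterfactual found
    return set()            # no counterfactual found within the budget
```

In CoDy itself, the exhaustive greedy scoring above is replaced by Monte Carlo Tree Search guided by spatial, temporal, and local event impact heuristics, which keeps the number of “what-if” model queries tractable on large event histories.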
Link To Code: https://github.com/daniel-gomm/CoDy
Primary Area: Social Aspects->Accountability, Transparency, and Interpretability
Keywords: Explainability, Temporal Graphs, Graph Neural Networks
Submission Number: 8452