Keywords: GNN, Model Evaluation, Error Pattern
TL;DR: This paper studies the structural error distribution of GNNs
Abstract: Graph Neural Networks (GNNs) are a specialized family of neural networks designed to handle graph-structured data, enabling the modeling of complex relationships within graphs. Despite significant algorithmic improvements, the issue of performance evaluation for GNNs has largely been overlooked in the literature. A crucial but underexplored aspect of GNN evaluation is understanding how errors are distributed across the graph structure, which we refer to as the "structural error pattern." To the best of our knowledge, this paper is among the first to highlight the importance of paying attention to these error patterns, which are essential not only for model selection—especially in spatial applications where localized or clustered errors can signal critical issues—but also for providing algorithmic insights into the model’s performance. In this work, we introduce a novel mathematical framework that analyzes and differentiates evaluation metrics based on their sensitivity to structural error patterns. Through a thorough theoretical analysis, we identify the limitations of traditional metrics—such as accuracy and mean squared error—that fail to capture the complexity of these error distributions. To address these shortcomings, we propose a new evaluation metric explicitly designed to detect and quantify structural error patterns, offering deeper insights into GNN performance. Our extensive empirical experiments demonstrate that this metric enhances model selection and improves robustness. Furthermore, we show that it can be incorporated as a regularization method during training, leading to more reliable GNN predictions in real-world applications.
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 3761