Rethinking Graph Attention Networks: A New Robust Approach

ICLR 2026 Conference Submission 21423 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: graph neural networks, graph representation learning, oversmoothing, graph attention network
Abstract: Graph Attention Networks (GATs) have achieved remarkable success in representation learning on graphs. However, their performance degrades significantly due to oversmoothing, in which deep GATs produce homogenized node representations. In this paper, we introduce a new quantitative measure of oversmoothing based on the Mahalanobis distance, which provides a more robust assessment than conventional Euclidean metrics. Building on this insight, we propose the Mahalanobis Graph Attention Network (MGAT) to alleviate the oversmoothing issue. MGAT adds a Mahalanobis regularizer that reduces representation collapse and preserves inter-class separability. Extensive experiments on common benchmark datasets demonstrate the efficiency and superiority of our proposed model over baseline GATs.
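The abstract does not specify the measure's exact form. As a minimal sketch of one plausible formulation, assuming the score averages pairwise Mahalanobis distances between node representations, with the covariance estimated from the representations themselves and a small ridge term for invertibility (the function name, the `eps` term, and the averaging scheme below are all illustrative assumptions, not the authors' formulation):

```python
import numpy as np

def mahalanobis_oversmoothing(H, eps=1e-6):
    """Average pairwise Mahalanobis distance between node representations.

    H: (n_nodes, dim) matrix of node representations from one layer.
    Assumption: a lower score indicates more homogenized (oversmoothed)
    embeddings; the exact measure in the paper may differ.
    """
    # Covariance of the representations; the ridge term keeps it invertible.
    cov = np.cov(H, rowvar=False) + eps * np.eye(H.shape[1])
    cov_inv = np.linalg.inv(cov)

    # Pairwise differences between all node representations: (n, n, dim).
    diffs = H[:, None, :] - H[None, :, :]
    # Squared Mahalanobis distance for every pair of nodes.
    sq = np.einsum('ijk,kl,ijl->ij', diffs, cov_inv, diffs)

    n = H.shape[0]
    # Average over distinct ordered pairs (the diagonal contributes zero).
    return np.sqrt(np.maximum(sq, 0.0)).sum() / (n * (n - 1))
```

Under this reading, the covariance weighting is what distinguishes the score from a plain Euclidean average: directions in which the representations already vary widely contribute less, so collapse along discriminative directions is penalized more sharply.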
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 21423