SCALER: Fast and Effective Graph Anomaly Detection via Dual-Level Synergistic Contrastive Learning

03 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Attributed Networks, Anomaly Detection, Contrastive Learning, Self-supervised
Abstract: Unsupervised graph anomaly detection (UGAD) is crucial for identifying anomalous behavior in graph-structured data. However, recent deep learning-based UGAD methods, while effective, suffer from long inference times due to neighborhood aggregation, which limits their applicability in real-world scenarios. Moreover, current contrastive approaches are constrained by the weaknesses of node–subgraph contrast and make only limited use of edge-level contrastive signals. To address these issues, we propose SCALER, a self-supervised MLP-GNN learning framework that trains a structure-aware multilayer perceptron (MLP) for UGAD without requiring costly graph neural network (GNN) aggregation during inference. SCALER introduces a dual-level contrastive learning network that combines node-level and edge-level contrast to effectively guide MLP training. The edge-level contrastive strategy leverages the rich relational information embedded in edges to enhance node representations and improve anomaly detection. Furthermore, a neighborhood entropy-guided anomaly score correction module is incorporated to improve robustness against anomalous nodes with low neighborhood entropy. Extensive experiments on eight real-world benchmark datasets, including a large-scale OGB dataset, against thirteen state-of-the-art baselines demonstrate that SCALER significantly improves detection performance across three metrics, achieving an average gain of 19.6% in AUPRC while reducing inference time to the order of seconds. These results validate the effectiveness and efficiency of SCALER.
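The abstract's dual-level objective (node-level contrast between an MLP view and a GNN-aggregated view, plus edge-level contrast over connected pairs) can be illustrated with a minimal NumPy sketch. This is not the paper's actual loss; the mean aggregation, InfoNCE form, temperature `tau`, and mixing weight `lam` are all assumptions chosen for illustration.

```python
import numpy as np

def mean_aggregate(X, A):
    # One-hop GNN-style neighborhood mean aggregation (illustrative stand-in
    # for the GNN branch; the paper's aggregator may differ).
    deg = A.sum(axis=1, keepdims=True)
    return (A @ X) / np.maximum(deg, 1.0)

def cosine_sim(U, V):
    # Pairwise cosine similarity between row vectors of U and V.
    U = U / np.linalg.norm(U, axis=1, keepdims=True)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    return U @ V.T

def dual_level_contrastive_loss(Z_mlp, X, A, tau=0.5, lam=0.5):
    """Hypothetical dual-level loss combining:
    - node level: each node's MLP embedding is contrasted against its own
      aggregated (GNN-style) view, with all other nodes as negatives;
    - edge level: connected node pairs are treated as positives relative
      to non-adjacent pairs, exploiting edge relational signals.
    """
    # Node-level InfoNCE between the MLP view and the aggregated view.
    Z_gnn = mean_aggregate(X, A)
    S = np.exp(cosine_sim(Z_mlp, Z_gnn) / tau)
    node_loss = -np.mean(np.log(np.diag(S) / S.sum(axis=1)))

    # Edge-level contrast: pull embeddings of adjacent nodes together.
    P = np.exp(cosine_sim(Z_mlp, Z_mlp) / tau)
    np.fill_diagonal(P, 0.0)  # exclude self-pairs
    pos = (P * A).sum(axis=1)
    edge_loss = -np.mean(np.log(np.maximum(pos, 1e-12) /
                                np.maximum(P.sum(axis=1), 1e-12)))
    return lam * node_loss + (1.0 - lam) * edge_loss
```

At inference, only `Z_mlp` (the MLP output) would be needed, which is what lets this family of methods avoid neighborhood aggregation and run in seconds.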
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 1223