Keywords: Graph Anomaly Detection, Few-Shot Learning, Adversarial Training
Abstract: Graph anomaly detection faces the twin challenges of scarce labeled samples and concealed anomalous features. Although recent graph-based models have shown promise, their reliance on extensive supervisory signals limits their effectiveness in real-world scenarios. To address this issue, we propose the GradConf framework, which enables robust anomaly detection under extremely low supervision. The framework constructs a graph in which nodes represent entities and edges denote associative relationships, and it improves model robustness through view augmentation and consistency learning. Building on this, our key contributions are as follows: (1) a Gradient-Confidence Aware Loss that dynamically balances positive and negative samples by combining global training gradients with instance-level confidence; (2) a Pseudo-label Clustering Self-Correction module that iteratively improves pseudo-label quality via learnable clustering centers and a structure-aware self-correction mechanism; (3) a Logits Adversarial Perturbation strategy that injects perturbations in the logit space to sharpen the model's sensitivity to anomalies and improve its generalization under low supervision. Experiments on five real-world datasets demonstrate that GradConf, using only a single pair of labeled samples, can match or even outperform fully supervised methods, verifying its effectiveness and practicality for graph anomaly detection.
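Of the three contributions, the logit-space perturbation is the most self-contained to illustrate. The abstract does not specify the exact formulation, so the sketch below is a hypothetical FGSM-style variant applied to logits rather than inputs: for cross-entropy loss the gradient with respect to the logits has the closed form softmax(logits) − onehot(labels), so the adversarial direction can be computed without autograd. All function names and the `eps` parameter are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def logit_adversarial_perturbation(logits, labels, eps=0.5):
    """Hypothetical FGSM-style perturbation in logit space.

    For cross-entropy, dL/dlogits = softmax(logits) - onehot(labels),
    so we step along the sign of that gradient to increase the loss.
    """
    probs = softmax(logits)
    onehot = np.eye(logits.shape[-1])[labels]
    grad = probs - onehot                  # closed-form dL/dlogits
    return logits + eps * np.sign(grad)    # ascend the loss in logit space

# Usage: perturb binary (normal/anomaly) logits for two nodes
logits = np.array([[2.0, -1.0], [0.5, 0.3]])
labels = np.array([0, 1])
perturbed = logit_adversarial_perturbation(logits, labels)
```

Training the model to stay consistent between clean and perturbed logits would then act as the regularizer the abstract describes, making decision boundaries harder to cross under weak supervision.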
Supplementary Material: zip
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 23948