Abstract: Graph Neural Networks (GNNs) are widely used as the
engine for various graph-related tasks, owing to their effectiveness
in analyzing graph-structured data. However, training robust GNNs often
demands abundant labeled data, which is a critical bottleneck in
real-world applications. This limitation severely impedes progress
in Graph Anomaly Detection (GAD), where anomalies are inherently
rare, costly to label, and may actively camouflage their patterns
to evade detection. To address these problems, we propose
Context Refactoring Contrast (CRoC), a simple yet effective framework
that trains GNNs for GAD by jointly leveraging limited labeled
and abundant unlabeled data. Unlike previous works, CRoC
exploits the class imbalance inherent in GAD to refactor the context
of each node, building augmented graphs that recompose node
attributes while preserving their interaction patterns. Furthermore,
CRoC encodes heterogeneous relations separately and integrates
them into the message-passing process, enhancing the model’s
capacity to capture complex interaction semantics. These operations
preserve node semantics while encouraging robustness to adversarial
camouflage, enabling GNNs to uncover intricate anomalous cases.
In the training stage, CRoC is further integrated with the contrastive
learning paradigm, allowing GNNs to effectively harness unlabeled
data during joint training and produce richer, more discriminative
node embeddings. CRoC is evaluated on seven real-world GAD
datasets with varying scales. Extensive experiments demonstrate that
CRoC achieves up to 14% AUC improvement over baseline GNNs
and outperforms state-of-the-art GAD methods under limited-label
settings.