Abstract: Graph anomaly detection, which aims to identify nodes whose patterns deviate significantly from the majority, has drawn widespread attention in recent years. Due to the complex topological structures and attribute information inherent in graphs, conventional methods often struggle to identify anomalies effectively. Deep anomaly detection methods based on Graph Neural Networks (GNNs) have achieved significant success; however, they face two challenges: they capture only limited neighborhood information, and they suffer from over-smoothing, the phenomenon in which node representations gradually become similar and flattened across multiple convolutional layers, thereby limiting the comprehensive learning of neighborhood information. To address these challenges, we propose a novel anomaly detection framework, TransGAD. Inspired by the Graph Transformer, we introduce a Transformer-based autoencoder that treats each node as a sequence and its neighborhood as the tokens of that sequence, capturing both local and global information. We further incorporate cosine positional encoding and a masking strategy to obtain more informative node representations, and we leverage reconstruction error for improved anomaly detection. Experimental results on seven datasets demonstrate that our approach outperforms state-of-the-art methods.
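The reconstruction-error scoring mentioned in the abstract can be illustrated with a minimal sketch. Here a simple linear (PCA-style) autoencoder stands in for the Transformer-based autoencoder of TransGAD, and the node attribute matrix, the injected anomalies, and the latent dimension of 2 are all illustrative assumptions, not details from the paper: nodes whose attributes the autoencoder cannot reconstruct well receive high anomaly scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy node attribute matrix: 100 "normal" nodes plus 5 injected anomalies
# with a large attribute offset (hypothetical data for illustration).
X_normal = rng.normal(0.0, 1.0, size=(100, 8))
X_anom = rng.normal(5.0, 1.0, size=(5, 8))
X = np.vstack([X_normal, X_anom])

# Fit a tiny linear autoencoder on the presumed-normal nodes:
# encode into the top-2 principal directions, then decode back.
mu = X_normal.mean(axis=0)
_, _, Vt = np.linalg.svd(X_normal - mu, full_matrices=False)
W = Vt[:2]                          # encoder weights (2 x 8)
X_hat = (X - mu) @ W.T @ W + mu     # reconstruction of every node

# Anomaly score = per-node L2 reconstruction error.
scores = np.linalg.norm(X - X_hat, axis=1)

# The injected anomalies (indices 100..104) should rank highest.
top5 = np.argsort(scores)[-5:]
print(sorted(int(i) for i in top5))
```

The design point carried over from the abstract is only the scoring rule: train a reconstruction model on the graph's attributes and rank nodes by reconstruction error; TransGAD replaces this linear encoder with a Transformer over each node's neighborhood tokens.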