Abstract: Graph-Level Anomaly Detection has received less attention than its node-level counterpart, especially in supervised settings, owing to the assumption that anomalies are rarely observable. Recent studies have explored supervised Graph-Level Anomaly Detection by leveraging a limited number of anomalous samples. While these methods achieve remarkable performance, most address the inherent class imbalance with a customized, weight-adjusted loss function and over-emphasize anomaly-specific traits, overlooking the incomplete and biased information that scarce anomalies provide. To overcome these limitations, we introduce a novel training framework that requires no sophisticated layers for capturing anomalous patterns. Our approach comprises two modules that use anomalies to counter the class imbalance and improve generalizability, enhancing overall learning while keeping the emphasis on normal samples. Comprehensive experiments show that our strategy outperforms state-of-the-art methods in detection performance.
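The abstract notes that most prior supervised methods handle class imbalance through a loss function with class-weight adjustments. As a point of reference, the sketch below illustrates that conventional baseline (not the framework proposed here) with a class-weighted binary cross-entropy; the labels, probabilities, and 9:1 imbalance ratio are illustrative assumptions.

```python
import math

def weighted_bce(y_true, y_prob, pos_weight):
    """Class-weighted binary cross-entropy: up-weights the rare
    anomalous (positive) class by `pos_weight`. Illustrative sketch
    of the common imbalance-handling baseline, not the paper's method."""
    eps = 1e-12
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
        total += -(pos_weight * y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# Hypothetical batch with a 9:1 normal-to-anomaly imbalance; setting
# pos_weight to the imbalance ratio equalizes each class's total
# contribution to the loss.
labels = [0] * 9 + [1]
probs = [0.1] * 9 + [0.6]
loss_unweighted = weighted_bce(labels, probs, pos_weight=1.0)
loss_weighted = weighted_bce(labels, probs, pos_weight=9.0)
```

Up-weighting the single anomaly increases its gradient influence, which is exactly the behavior the abstract argues can over-emphasize anomaly-specific traits when the observed anomalies are few and biased.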
External IDs: dblp:conf/pakdd/XiaoLVG25