Detecting anomalies in tabular data from various domains has become increasingly important in deep learning research. At the same time, generative models have advanced considerably, offering powerful mechanisms for detecting anomalies by modeling normal data. In this paper, we propose a novel method for anomaly detection in a one-class classification setting using a noise conditional score network (NCSN). NCSNs, which learn the gradients of log probability density functions across many noise-perturbed data distributions, are known for producing diverse samples even in low-density regions of the training data. We exploit this property to use the NCSN directly as an anomaly indicator, with an anomaly score derived from a simplified version of its training loss, and we analyze this effect in detail. Our method is trained on normal behavior data only, enabling it to differentiate between normal and anomalous behaviors at test time. To evaluate our approach extensively, we assembled the world's largest benchmark for anomaly detection in tabular data, combining the ADBench benchmark with several additional datasets from the literature and comparing against 49 baseline methods. Overall, our approach achieves state-of-the-art performance across this benchmark.
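As a sketch of the scoring idea (the exact simplification used in the paper may differ), the standard NCSN denoising score matching objective of Song and Ermon can be repurposed as a per-sample anomaly score by evaluating it at a test point $x$ and aggregating over the noise scales $\{\sigma_i\}_{i=1}^{L}$:
$$
\ell(\theta; x, \sigma) \;=\; \tfrac{1}{2}\,\mathbb{E}_{\tilde{x}\sim\mathcal{N}(x,\,\sigma^2 I)}\!\left[\left\| s_\theta(\tilde{x},\sigma) + \frac{\tilde{x}-x}{\sigma^2} \right\|_2^2\right],
\qquad
s(x) \;=\; \sum_{i=1}^{L} \lambda(\sigma_i)\,\ell(\theta; x, \sigma_i),
$$
where $s_\theta$ is the trained score network and $\lambda(\sigma_i)=\sigma_i^2$ is the usual weighting; a larger $s(x)$ indicates that $x$ is poorly explained by the learned distribution of normal data and is therefore flagged as anomalous.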