Abstract: In addition to its success in representation learning, contrastive learning is effective in image anomaly detection. Although contrastive learning depends heavily on data augmentation, time-series data augmentation for time-series anomaly detection has not been sufficiently investigated. Moreover, because time-series samples share a temporal context, the existing contrastive loss contrasts temporally related samples, which degrades anomaly detection performance on time-series data. Herein, we propose contrastive multivariate time-series anomaly detection (CTAD), a multivariate time-series anomaly detection framework that addresses these challenges by incorporating a one-class learning scheme into the contrastive loss based on carefully designed time-series data augmentations. Specifically, we propose seven types of general time-series data augmentations applied variable- and point-wise, and provide guidance on selecting augmentation methods for contrastive time-series anomaly detection. The one-class contrastive loss and the appropriate choice of time-series data augmentation allow CTAD to achieve outstanding performance on multiple datasets, even with a simple long short-term memory network. Furthermore, CTAD is robust to noise because it trains a noise-invariant network. As a result, CTAD delivers up to 47× faster and 20× more memory-efficient anomaly detection than existing methods while remaining robust, both of which are essential considerations in real-world applications.
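To make the described training scheme concrete, the following is a minimal sketch of a one-class contrastive training step on multivariate time-series windows. It assumes a PyTorch LSTM encoder, one illustrative point-wise augmentation (jitter) and one variable-wise augmentation (per-channel scaling), and a DeepSVDD-style hypersphere center; the names `point_wise_jitter`, `variable_wise_scale`, `LSTMEncoder`, and `one_class_contrastive_loss` are hypothetical, and the actual CTAD loss and seven augmentations may differ.

```python
# Hypothetical sketch of a one-class contrastive step for multivariate time series.
# Assumptions (not from the paper): jitter/scaling augmentations, cosine pair term,
# and a fixed hypersphere center as the one-class objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

def point_wise_jitter(x, sigma=0.01):
    """Point-wise augmentation: add small Gaussian noise to every time point."""
    return x + sigma * torch.randn_like(x)

def variable_wise_scale(x, low=0.9, high=1.1):
    """Variable-wise augmentation: rescale each variable (channel) independently."""
    scale = torch.empty(x.size(0), 1, x.size(2), device=x.device).uniform_(low, high)
    return x * scale

class LSTMEncoder(nn.Module):
    """Simple LSTM encoder mapping a window (B, T, D) to a unit-norm embedding (B, H)."""
    def __init__(self, n_vars, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_vars, hidden, batch_first=True)

    def forward(self, x):
        _, (h, _) = self.lstm(x)           # h: (num_layers, B, H)
        return F.normalize(h[-1], dim=-1)  # last layer, L2-normalized

def one_class_contrastive_loss(z1, z2, center, lam=1.0):
    """Pull the two augmented views together and both views toward a one-class center."""
    pair_term = (1.0 - F.cosine_similarity(z1, z2, dim=-1)).mean()
    oc_term = ((z1 - center) ** 2).sum(dim=-1).mean() + \
              ((z2 - center) ** 2).sum(dim=-1).mean()
    return pair_term + lam * oc_term

# Usage: one training step on a toy batch of normal windows x with shape (B, T, D).
encoder = LSTMEncoder(n_vars=8)
center = F.normalize(torch.ones(64), dim=-1)   # fixed center (assumption)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

x = torch.randn(32, 100, 8)                    # toy data standing in for training windows
z1 = encoder(point_wise_jitter(x))
z2 = encoder(variable_wise_scale(x))
loss = one_class_contrastive_loss(z1, z2, center)
opt.zero_grad(); loss.backward(); opt.step()

# At test time, the distance of a window's embedding to the center can serve
# as its anomaly score.
```

Because both augmented views are drawn from the same window and pulled toward a shared center, this formulation avoids contrasting temporally related samples against each other, which is the failure mode of the standard contrastive loss noted in the abstract.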