Abstract: Time series anomaly detection has received growing interest in both industrial and academic communities due to its substantial theoretical value and practical significance. Recent advanced methods for time series anomaly detection are based on deep learning techniques, which have shown their superiority in specific situations. However, most existing deep learning-based anomaly detection methods require predefined reconstruction or prediction tasks, necessitating task-specific loss functions. Designing such anomaly-aware loss functions poses a significant challenge due to the ambiguity in defining ground-truth anomalies. Moreover, these methods often rely on complex network architectures that tend to over-generalize, so that even abnormal data are well reconstructed or fitted. To mitigate this situation, grounded in activation learning theory, we propose a novel unsupervised time series anomaly detection paradigm termed ALAD. ALAD uses a straightforward fully connected network and measures the typicality of an input pattern by the sum of its squared outputs. Despite its simplicity, ALAD achieves competitive performance compared to state-of-the-art models trained with backpropagation. Experimental results on various real-world and synthetic datasets confirm the effectiveness and feasibility of the proposed paradigm. This work also demonstrates that biologically plausible local learning can sometimes outperform backpropagation in real-world scenarios.
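The scoring idea described above (typicality as the sum of squared network outputs) can be sketched as follows. This is a hypothetical illustration only: the paper's activation-learning training rule is not reproduced here, the ReLU activation and random weights are assumptions, and the threshold is arbitrary. The sketch shows only how a trained fully connected network could turn an input window into a typicality score, with low scores flagged as anomalous.

```python
import numpy as np

def typicality_score(x, weights):
    """Forward pass through a fully connected net; the typicality of the
    input is the sum of squared outputs (assumed ReLU activations)."""
    h = x
    for W in weights:
        h = np.maximum(W @ h, 0.0)
    return float(np.sum(h ** 2))

def detect_anomalies(windows, weights, threshold):
    """Flag windows whose typicality score falls below a threshold;
    atypical inputs are assumed to produce weaker activations."""
    scores = np.array([typicality_score(w, weights) for w in windows])
    return scores < threshold

# Hypothetical setup: random weights stand in for a network trained
# with a local (activation-learning) rule, which is not shown here.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((16, 8)), rng.standard_normal((4, 16))]
windows = rng.standard_normal((5, 8))  # 5 sliding windows of length 8
flags = detect_anomalies(windows, weights, threshold=1.0)
print(flags)  # one boolean flag per window
```

In a real pipeline the threshold would be calibrated on scores of held-out normal data rather than fixed by hand.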
External IDs: dblp:journals/tbd/DingLBZZ25