Deep Generative model with Hierarchical Latent Factors for Timeseries Anomaly Detection

27 Sept 2021, 17:50 (modified: 22 Nov 2021, 16:26) · DGMs and Applications @ NeurIPS 2021 Poster
Keywords: Anomaly detection, Latent Factors, Langevin Dynamics, Alternating Back-Propagation, Generative model, Time-series
TL;DR: We propose a Deep Generative model for multivariate time-series anomaly detection with a novel hierarchical latent factor space, trained with Alternating Back-Propagation where latent vectors are sampled with Langevin Dynamics.
Abstract: Multivariate time-series anomaly detection has become an active area of research in recent years, with Deep Learning models outperforming previous approaches on benchmark datasets. Among reconstruction-based models, almost all previous work has focused on Variational Autoencoders and Generative Adversarial Networks. This work presents DGHL, a new family of generative models for time-series anomaly detection, trained by maximizing the observed likelihood directly via posterior sampling and alternating gradient descent. A top-down Convolutional Network maps time-series windows to a novel hierarchical latent space, exploiting temporal dynamics to encode information efficiently. Despite relying on posterior sampling, DGHL is computationally more efficient than current approaches, with up to 10x shorter training times than RNN-based models. Our method outperformed other state-of-the-art models on four popular benchmark datasets. Finally, DGHL is robust to variable features between entities and remains accurate even with large proportions of missing values, settings of increasing relevance with the growth of IoT. We demonstrate the superior robustness of DGHL with occlusion experiments, which are novel in this literature.
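The training scheme described in the abstract and TL;DR, Alternating Back-Propagation with Langevin-sampled latents, can be illustrated on a toy problem. The sketch below is a minimal, hypothetical NumPy reconstruction of the general technique, not the paper's actual model: it uses a linear generator in place of DGHL's top-down convolutional network and hierarchical latent space, and all hyper-parameters and names (`langevin_posterior`, `sigma2`, etc.) are illustrative assumptions. The two alternating steps are (1) infer each window's latent vector by Langevin dynamics on the posterior energy, with no encoder network, and (2) take a gradient step on the generator parameters to maximize the observed likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "generator" x = W @ z, standing in for DGHL's convolutional
# decoder. Dimensions and hyper-parameters are illustrative assumptions.
x_dim, z_dim = 8, 2
W_true = rng.normal(size=(x_dim, z_dim))
X = (W_true @ rng.normal(size=(z_dim, 100))).T  # 100 observed "windows"

W = rng.normal(size=(x_dim, z_dim)) * 0.1       # generator parameters
sigma2, lgv_step, lr, K = 0.1, 0.1, 0.05, 20    # noise var, Langevin step, lr, steps


def langevin_posterior(X, Z, W, n_steps, step):
    """Sample latents from p(z|x) by Langevin dynamics.

    Energy per sample: ||x - W z||^2 / (2 sigma2) + ||z||^2 / 2
    (Gaussian observation model, standard-normal prior).
    """
    for _ in range(n_steps):
        grad = -((X - Z @ W.T) @ W) / sigma2 + Z      # d(energy)/dz, all samples
        Z = Z - 0.5 * step**2 * grad + step * rng.normal(size=Z.shape)
    return Z


Z = rng.normal(size=(len(X), z_dim))  # persistent latents, warm-started each epoch
losses = []
for epoch in range(300):
    # Step 1: posterior sampling of latents (no amortized encoder).
    Z = langevin_posterior(X, Z, W, K, lgv_step)
    # Step 2: gradient ascent on the log-likelihood w.r.t. generator params.
    resid = X - Z @ W.T
    W += lr * (resid.T @ Z) / len(X)
    losses.append(float(np.mean(resid**2)))
```

At test time, the same posterior-sampling step yields latents for an unseen window, and a large reconstruction error flags it as anomalous; here the mean squared residual (`losses`) plays that role and should shrink as `W` is learned.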