Anomaly Detection with Variational Autoencoders via Reconstruction Error of the Expected Latent Representation

Anonymous

27 Sept 2021 (modified: 05 May 2023) · NeurIPS 2021 Workshop DGMs Applications · Blind Submission · Readers: Everyone
Keywords: Anomaly Detection, Variational Autoencoders, Expected Latent Representation, Reconstruction Error, Deep Learning, Generative Models
TL;DR: We leverage the stochastic nature of latent variables learned by variational autoencoders to define the expected latent representation and the reconstruction error of the expected latent representation, which we adopt to improve anomaly detection.
Abstract: A common approach to anomaly detection is to model normality and adopt the deviation from normality as an anomaly measure. One way to model normality is to introduce latent variables that are inferred from observed variables; variational autoencoders are a state-of-the-art approach to incorporating such latent variables. In this paper, we leverage the stochastic nature of the latent variables learned by variational autoencoders, as each point in the latent space is sampled from a probability distribution parameterized during the learning process. We define the expected latent representation and the reconstruction error of the expected latent representation, which we adopt to improve anomaly detection via variational autoencoders. Evaluations on benchmark datasets show that our approach outperforms single-sample approximations of the expected reconstruction error and is competitive with comparable anomaly detection techniques.
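
To make the idea concrete, here is a minimal PyTorch sketch (not the authors' code) of the scoring scheme the abstract describes, under the assumption of a Gaussian encoder where the expected latent representation is taken as the posterior mean E[z|x] = mu. Decoding that mean, rather than a single sample z ~ q(z|x), yields the reconstruction error of the expected latent representation; all module names, layer sizes, and the squared-error choice below are illustrative.

import torch
import torch.nn as nn

class VAE(nn.Module):
    """Toy Gaussian VAE; architecture is illustrative, not from the paper."""
    def __init__(self, x_dim=784, h_dim=256, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)   # log-variance of q(z|x)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def decode(self, z):
        return self.dec(z)

@torch.no_grad()
def score_expected_latent(model, x):
    """Reconstruction error of the expected latent representation:
    decode the posterior mean mu instead of a sampled z."""
    mu, _ = model.encode(x)
    x_hat = model.decode(mu)
    return ((x - x_hat) ** 2).sum(dim=1)  # per-example anomaly score

@torch.no_grad()
def score_single_sample(model, x):
    """Baseline: single-sample approximation of the expected
    reconstruction error, using one draw z ~ q(z|x)."""
    mu, logvar = model.encode(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    x_hat = model.decode(z)
    return ((x - x_hat) ** 2).sum(dim=1)

# Usage: score inputs with a trained model and flag those whose score
# exceeds a threshold calibrated on normal data.
model = VAE()
x = torch.rand(8, 784)
scores = score_expected_latent(model, x)

One plausible motivation for scoring with the mean is that it removes the sampling variance a single draw from q(z|x) injects into the anomaly score, so repeated scoring of the same input is deterministic.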