Vacant Holes for Unsupervised Detection of the Outliers in Compact Latent Representation

Published: 08 May 2023, Last Modified: 26 Jun 2023, UAI 2023
Keywords: unsupervised outlier detection, deep generative models, VAEs, latent representation, compactness, Lipschitz continuity
TL;DR: A method for unsupervised outlier detection that exploits vacant holes in a compact latent space with constrained factors of variation
Abstract: Detecting outliers is pivotal for any machine learning model deployed and operated in the real world. It is especially important for deep neural networks, which have been shown to be overconfident on such inputs. Moreover, even deep generative models, which allow estimating the probability density of the input, fail at this task. In this work, we concentrate on a specific type of such models: Variational Autoencoders (VAEs). First, we unveil a significant theoretical flaw in an assumption of the classical VAE model. Second, we enforce an accommodating topological property on the image of the deep neural mapping into the latent space: compactness. This alleviates the flaw and provides the means to provably bound the latent representation within determined limits by squeezing both inliers and outliers together. We enforce compactness using two approaches: the Alexandroff extension and a fixed Lipschitz continuity constant on the encoder mapping of the VAE. Finally, and most importantly, we discover that anomalous inputs predominantly tend to land in the vacant latent holes within the compact space, enabling their successful identification. To this end, we introduce a specifically devised score for hole detection and evaluate the solution against several baseline benchmarks, achieving promising results.
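The abstract names two ingredients without detailing them: a Lipschitz-constrained encoder and a hole-detection score. The sketch below is illustrative only, not the paper's implementation: it assumes spectral normalization as one common way to fix the encoder's Lipschitz constant, and uses a hypothetical k-nearest-neighbor distance in latent space as a stand-in for the paper's (unspecified here) hole score. The names `LipschitzEncoder` and `hole_score` are our own.

```python
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm


class LipschitzEncoder(nn.Module):
    """VAE encoder with spectrally normalized layers.

    Each weight matrix is constrained to spectral norm <= 1, so the
    composition with 1-Lipschitz ReLU activations yields a 1-Lipschitz
    mapping into the latent space (one way to fix the Lipschitz constant).
    """

    def __init__(self, in_dim: int, hidden_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            spectral_norm(nn.Linear(in_dim, hidden_dim)),
            nn.ReLU(),
            spectral_norm(nn.Linear(hidden_dim, hidden_dim)),
            nn.ReLU(),
        )
        # Heads for the variational posterior parameters.
        self.mu = spectral_norm(nn.Linear(hidden_dim, latent_dim))
        self.log_var = spectral_norm(nn.Linear(hidden_dim, latent_dim))

    def forward(self, x: torch.Tensor):
        h = self.net(x)
        return self.mu(h), self.log_var(h)


def hole_score(z: torch.Tensor, train_z: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Hypothetical proxy for a hole score (not the paper's definition):
    mean distance from a latent code to its k nearest training codes.
    Inputs landing in vacant regions (holes) of the latent space sit far
    from all training codes and thus receive large scores."""
    d = torch.cdist(z, train_z)                # (batch, n_train) pairwise distances
    knn, _ = d.topk(k, dim=1, largest=False)   # k smallest distances per point
    return knn.mean(dim=1)                     # higher = more outlier-like


# Usage: score incoming inputs against the latent codes of training data.
encoder = LipschitzEncoder(in_dim=784, hidden_dim=256, latent_dim=16)
with torch.no_grad():
    train_mu, _ = encoder(torch.randn(512, 784))  # stand-in training batch
    test_mu, _ = encoder(torch.randn(8, 784))     # stand-in incoming inputs
    print(hole_score(test_mu, train_mu))          # shape: (8,)
```

Spectral normalization is only one of several ways to bound an encoder's Lipschitz constant; the paper additionally uses the Alexandroff extension to obtain compactness, which this sketch does not attempt to reproduce.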