Further Analysis of Outlier Detection with Deep Generative Models

Published: 09 Dec 2020, Last Modified: 22 Oct 2023 · ICBINB 2020 Spotlight
Keywords: generative model, outlier detection
TL;DR: We argue that current evaluation practices for generative outlier detection need modification, and that results should be interpreted carefully.
Abstract: The recent, counter-intuitive discovery that deep generative models (DGMs) can frequently assign a higher likelihood to outliers has implications both for outlier detection applications and for our overall understanding of generative modeling. In this work, we present a possible explanation for this phenomenon, starting from the observation that a model's typical set and high-density region may not coincide. From this vantage point we propose a novel outlier test, the empirical success of which suggests that the failure of existing likelihood-based outlier tests does not necessarily imply that the corresponding generative model is uncalibrated. We also conduct additional experiments to help disentangle the impact of low-level texture versus high-level semantics in differentiating outliers. In aggregate, these results suggest that modifications to the standard evaluation practices and benchmarks commonly applied in the literature are needed.
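To make the typical-set versus high-density distinction in the abstract concrete, below is a minimal sketch of a typicality-style outlier check, not the paper's exact test: an input is flagged when its negative log-likelihood (NLL) under a trained DGM lies far from the model's estimated entropy rate, so a very high likelihood can itself be anomalous. The function name, its inputs, and the percentile-based tolerance are illustrative assumptions.

```python
import numpy as np

def typicality_outlier_test(test_nlls, heldout_nlls, eps=None):
    """Flag inputs whose NLL under a trained DGM falls outside
    the model's (empirical) typical set.

    test_nlls    : per-example NLLs (nats) of the inputs to test
    heldout_nlls : NLLs of held-out in-distribution data, used to
                   estimate the entropy H and the tolerance eps
    Returns a boolean array where True means "flagged as outlier".
    """
    # Approximate the entropy rate H by the mean held-out NLL;
    # the typical set is roughly {x : |NLL(x) - H| <= eps}.
    H = heldout_nlls.mean()
    if eps is None:
        # Illustrative data-driven tolerance: the 99th percentile
        # of |NLL - H| on held-out in-distribution data, so ~1% of
        # in-distribution inputs would be flagged by construction.
        eps = np.percentile(np.abs(heldout_nlls - H), 99)
    # Note the two-sided test: unusually *low* NLL (high likelihood)
    # is also flagged, unlike a simple likelihood threshold.
    return np.abs(test_nlls - H) > eps
```

Because the test is two-sided, it can reject outliers that a plain likelihood threshold would accept, which is consistent with the observation that DGMs may assign outliers a higher likelihood than inliers.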
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:2010.13064/code) (via CatalyzeX)