TL;DR: We propose an autoencoder based on rate-distortion optimization, which enables log-likelihood maximization without the ELBO.
Abstract: In the generative-model approach to machine learning, it is essential both to acquire an accurate probabilistic model and to reduce the dimensionality of the data so that they are easy to handle. However, in conventional deep-autoencoder-based generative models such as the VAE, the probability in the real space cannot be obtained correctly from that in the latent space, because the scaling between the two spaces is not controlled. This has also been an obstacle to quantifying how variations of the latent variables affect the data. In this paper, we propose a method that learns a parametric probability distribution and an autoencoder simultaneously, based on rate-distortion optimization, so as to control this scaling. We show theoretically and experimentally that (i) the probability distribution of the latent space obtained by this model is proportional to the probability distribution of the real space, because the Jacobian between the two spaces is constant; and (ii) our model behaves as a non-linear PCA, which makes it possible to evaluate the influence of each latent variable on the data. Furthermore, to verify its usefulness in a practical application, we evaluate its performance in unsupervised anomaly detection, where it outperforms current state-of-the-art methods.
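To make the training objective concrete, below is a minimal, hypothetical PyTorch sketch of a rate-distortion autoencoder loss as the abstract describes it: a distortion term D (here a squared reconstruction error) plus a rate term R, the negative log-likelihood of the latent code under a parametric prior learned jointly with the autoencoder (here a factorized Gaussian for simplicity). The class name `RDAutoencoder`, the network shapes, and the trade-off weight `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import math
import torch
import torch.nn as nn

class RDAutoencoder(nn.Module):
    """Deterministic autoencoder trained with a rate-distortion cost R + lambda * D."""
    def __init__(self, x_dim=784, z_dim=16, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, z_dim))
        self.decoder = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.ReLU(), nn.Linear(hidden, x_dim))
        # Trainable parameters of a factorized Gaussian prior p(z);
        # the parametric latent density is learned jointly with the autoencoder.
        self.prior_mu = nn.Parameter(torch.zeros(z_dim))
        self.prior_log_sigma = nn.Parameter(torch.zeros(z_dim))

    def rate(self, z):
        # Rate term R = -log p(z) (in nats) under the factorized Gaussian prior.
        nll = (0.5 * ((z - self.prior_mu) / self.prior_log_sigma.exp()) ** 2
               + self.prior_log_sigma + 0.5 * math.log(2 * math.pi))
        return nll.sum(dim=1)

    def forward(self, x):
        z = self.encoder(x)
        x_hat = self.decoder(z)
        distortion = ((x - x_hat) ** 2).sum(dim=1)  # squared-error distortion D
        return distortion, self.rate(z)

# One illustrative training step.
model = RDAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 100.0  # rate-distortion trade-off weight (illustrative value)
x = torch.rand(32, 784)  # stand-in for a batch of data
distortion, rate = model(x)
loss = (rate + lam * distortion).mean()  # minimize R + lambda * D
opt.zero_grad()
loss.backward()
opt.step()
```

Under an objective of this form, the learned latent density gives each test sample a score such as -log p(z) (possibly combined with its distortion), which could serve as the kind of unsupervised anomaly-detection criterion the abstract evaluates.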
Keywords: Autoencoder, Rate-distortion optimization, Generative model, Unsupervised learning, Jacobian