Abstract: Variational Auto-Encoders optimise the parameters of a distribution that approximates the posterior distribution over latent variables given the data. We focus on the case where this approximating distribution comprises a Gaussian distribution associated with each datum. When these Gaussians are defined on a high-dimensional space, it is often assumed that full-rank covariance matrices would be prohibitively expensive to compute and prone to overfitting; in such settings, a parameterisation that constrains each covariance matrix to be diagonal is often adopted. We propose approximations that offer alternative compromises between the computational expense, susceptibility to overfitting and accuracy of full-rank and diagonal covariances. More specifically, we propose covariance matrices obtained by randomly projecting a full-rank covariance defined on a low-dimensional space. In this ablation study, we isolate the covariance parameterisation from other techniques and assess the impact of the dimensionality of the low-dimensional space on both computational cost and accuracy on MNIST, CIFAR-10 and Flowers-102. We observe that, for a finite number of training iterations, accuracy is maximised by a compromise that is equivalent to neither a full-rank nor a diagonal covariance. We also find that the computational cost varies less with this dimensionality than one might anticipate, and that performance improves when the parameterisation projects from a lower-dimensional full-rank covariance.
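The abstract does not spell out the parameterisation, so the following is a minimal illustrative sketch, assuming the covariance takes the form Σ = P L Lᵀ Pᵀ + D², where P is a fixed random d×k projection, L is a learned k×k lower-triangular factor on the low-dimensional space, and D is a learned diagonal term that keeps Σ positive definite. All names, shapes and the exact functional form here are assumptions for illustration, not taken from the paper:

```python
import torch

d, k = 784, 16                    # latent dimension and projection rank (illustrative)
P = torch.randn(d, k) / k ** 0.5  # fixed random projection, not trained (assumed)

# Learned parameters; in a VAE these would typically be produced by the encoder.
mu = torch.zeros(d, requires_grad=True)        # mean of the Gaussian
L_raw = torch.eye(k, requires_grad=True)       # unconstrained low-dim factor
log_diag = torch.zeros(d, requires_grad=True)  # log of the diagonal term D

def sample(mu, L_raw, log_diag, P):
    """Reparameterised draw z ~ N(mu, P L L^T P^T + D^2) (assumed form)."""
    L = torch.tril(L_raw)         # lower-triangular Cholesky-style factor
    eps_low = torch.randn(k)      # noise in the low-dimensional space
    eps_diag = torch.randn(d)     # per-dimension noise for the diagonal term
    return mu + P @ (L @ eps_low) + log_diag.exp() * eps_diag

z = sample(mu, L_raw, log_diag, P)
```

With k = d and P invertible this recovers a full-rank covariance, while k = 0 (dropping the projected term) leaves only the diagonal part, which is one way the dimensionality k could interpolate between the two extremes the abstract compares.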
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=dgKGLH7xV0
Changes Since Last Submission: Fixed citations according to the comments
Assigned Action Editor: ~Gabriel_Loaiza-Ganem1
Submission Number: 2513