Abstract: Deep generative networks provide a way to generalize over complex multi-dimensional data such as 3D point clouds. In this work, we present a novel method that operates on depth images and, with the use of geometry images, learns a representation of discrete 3D points based on variational autoencoders (VAEs). Traditional VAE solutions failed to capture compressed 3D data sharply; however, with a constrained variational framework and additional hyperparameters, we managed to learn the representation of 3D data successfully. To do this, we applied Bayesian optimization over the hyperparameter space of the VAE. The results were validated on large-scale public data, while the code and demos are available on the authors' website: https://github.com/molnarszilard/GIPC_rele.
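The "constrained variational framework with additional hyperparameters" suggests a beta-VAE style objective, in which an extra weight on the KL term is tuned (here, presumably via the Bayesian optimization mentioned above). The following is a minimal sketch under that assumption, not the authors' implementation; the names (beta_vae_loss, beta, recon_x, mu, logvar) are illustrative.

```python
# Sketch of a beta-VAE style loss: standard VAE objective with an extra
# hyperparameter "beta" weighting the KL term (assumed, not from the paper's code).
import torch
import torch.nn.functional as F

def beta_vae_loss(recon_x, x, mu, logvar, beta=4.0):
    """Reconstruction error plus beta-weighted KL divergence."""
    # Reconstruction error between the decoded and the input depth/geometry image.
    recon = F.mse_loss(recon_x, x, reduction="sum")
    # Closed-form KL divergence between q(z|x) = N(mu, sigma^2) and the unit Gaussian prior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```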