Abstract: Generative methods have recently seen significant improvements from generating in a lower-dimensional latent representation of the data. However, many generative methods applied in the latent space remain complex and difficult to train. In this work, we introduce a new and simple generative method grounded in topology, Generative Topological Networks (GTNs), designed to operate on lower-dimensional latent representations of the data. GTNs are simple to train: they use a standard supervised learning approach and do not suffer from common generative pitfalls such as mode collapse, posterior collapse, or the need to impose constraints on the neural network architecture. We demonstrate the use of GTNs on several datasets, including MNIST, CelebA, CIFAR-10, and the Hands and Palm Images dataset, by training GTNs on a lower-dimensional latent representation of the data. We show that GTNs can improve upon VAEs and that they converge quickly, generating realistic samples in early epochs. Further, we outline several avenues for future research. For example, we show how GTNs can be adapted to accommodate more sophisticated data distributions, including those with disconnected components. We also discuss potential algorithmic and architectural improvements that are worth investigating. Finally, we note that the topological observations provided herein may also be of value to other methods.
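The abstract states that GTNs are trained on a lower-dimensional latent representation using a standard supervised objective, but does not spell out the training procedure. The following is a minimal, hypothetical sketch of that general pipeline, not the paper's actual algorithm: it pairs samples from a simple source distribution with latent codes (the pairing rule here is a placeholder assumption, since the paper's topology-based construction is not described in the abstract) and fits a plain MLP with an MSE loss.

```python
# Illustrative sketch only -- NOT the GTN algorithm from the paper.
# Assumptions: latent codes come from a pre-trained encoder (not shown),
# and the source-to-latent pairing below (rank-by-norm) is a placeholder.
import torch
import torch.nn as nn

latent_dim = 64                      # assumed latent dimensionality
n = 10_000                           # number of training latent codes

# Stand-in for latent codes produced by a pre-trained autoencoder.
z_data = torch.randn(n, latent_dim) * 2.0 + 1.0

# Source samples from a simple distribution (standard Gaussian).
z_src = torch.randn(n, latent_dim)

# Placeholder pairing: match source samples to latent codes by norm rank.
src_order = torch.argsort(z_src.norm(dim=1))
dat_order = torch.argsort(z_data.norm(dim=1))
inputs, targets = z_src[src_order], z_data[dat_order]

# Plain MLP trained with ordinary supervised regression (MSE).
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, latent_dim),
)
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(generator(inputs), targets)
    loss.backward()
    opt.step()

# Sampling: push fresh Gaussian noise through the trained map; a decoder
# (not shown) would then map the resulting latent codes back to data space.
with torch.no_grad():
    new_latents = generator(torch.randn(16, latent_dim))
```

This sketch only illustrates why a supervised latent-space generator avoids adversarial or variational training pitfalls; the correct construction of input-target pairs is the core contribution of the paper and is not reproduced here.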
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Gabriel_Loaiza-Ganem1
Submission Number: 4039