Abstract: Generative Adversarial Networks (GANs) aim to produce realistic samples by mapping a low-dimensional latent space with a known distribution to a high-dimensional data space, exploiting an adversarial training mechanism. However, without an effective characterisation of the generative process, such models face significant challenges in training and architecture selection. In this work, we propose a Topological Data Analysis approach based on persistent homology that provides such a characterisation: the topological information of a data manifold is summarised by its persistence diagram, and the evolution of its topological features is tracked throughout training. Our approach is applied across multiple GAN architectures using two benchmark datasets, where we demonstrate that conventional metrics such as the Fréchet Inception Distance and intrinsic dimension estimates cannot adequately capture the quality of generated samples. Instead, our results confirm that a topological description of the generative process within GANs successfully captures training convergence and mode collapse. Finally, the layer-wise topological analysis determines the role each layer plays in the generative process, and may provide future guidance for the refinement of architectures. Code available at https://github.com/bcorrad/genfold25.git.
DOI: 10.1016/j.neucom.2025.131976
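The abstract describes summarising a data manifold's topology with persistence diagrams and tracking those features over training. The snippet below is a minimal sketch of that general idea, not the authors' implementation (see the linked repository for that): it uses the `ripser` library to compute Vietoris–Rips persistence diagrams of point clouds sampled from a generator's latent and output spaces; the array shapes and the random stand-in for generator output are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): compute persistence diagrams of
# point clouds sampled from a GAN's latent space and generator output,
# using Vietoris-Rips persistent homology via the ripser library.
import numpy as np
from ripser import ripser


def persistence_diagrams(points: np.ndarray, maxdim: int = 1):
    """Return persistence diagrams (one per homology dimension 0..maxdim)
    for a point cloud of shape (n_samples, n_features)."""
    return ripser(points, maxdim=maxdim)["dgms"]


# Illustrative usage: compare topological summaries of the latent space and
# of (stand-in) generated samples; in practice the second cloud would come
# from the generator at a given training epoch or layer.
rng = np.random.default_rng(0)
latent = rng.normal(size=(256, 100))           # z ~ N(0, I), latent samples
generated = rng.normal(size=(256, 28 * 28))    # placeholder generator output

for name, cloud in [("latent", latent), ("generated", generated)]:
    dgms = persistence_diagrams(cloud)
    print(f"{name}: {len(dgms[0])} H0 features, {len(dgms[1])} H1 features")
```

Tracking how the number and persistence of H0/H1 features in such diagrams evolve across epochs and layers is one way to monitor convergence or detect mode collapse, in the spirit of the analysis the abstract outlines.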