Evaluating the Disentanglement of Deep Generative Models through Manifold Topology

Published: 12 Jan 2021, Last Modified: 05 May 2023. ICLR 2021 Poster.
Keywords: generative models, evaluation, disentanglement
Abstract: Learning disentangled representations is regarded as a fundamental task for improving the generalization, robustness, and interpretability of generative models. However, measuring disentanglement has been challenging and inconsistent, often depending on an ad-hoc external model or being specific to a certain dataset. To address this, we present a method for quantifying disentanglement that uses only the generative model itself, by measuring the topological similarity of conditional submanifolds in the learned representation. The method has both unsupervised and supervised variants. To illustrate its effectiveness and applicability, we empirically evaluate several state-of-the-art models across multiple datasets. We find that our method ranks models similarly to existing methods. We make our code publicly available at https://github.com/stanfordmlgroup/disentanglement.
One-sentence Summary: Evaluate disentanglement of generative models by measuring manifold topology using persistent homology
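
The core idea stated in the abstract and summary, comparing the topology of conditional submanifolds via persistent homology, can be illustrated with a minimal sketch. This is not the authors' implementation (that lives in the linked repository): `generator`, `z_a`, `z_b`, and the latent-sweep sampling scheme are hypothetical stand-ins, while `ripser` and `persim` are real packages for computing and comparing persistence diagrams.

```python
# Minimal sketch (assumptions noted below), not the paper's released code:
# estimate the topology of conditional submanifolds with persistent homology
# and compare the resulting diagrams.
import numpy as np
from ripser import ripser          # Vietoris-Rips persistent homology
from persim import wasserstein     # distance between persistence diagrams

def conditional_point_cloud(generator, z_base, dim, n=500, scale=2.0):
    """Sample the submanifold traced out by sweeping one latent dimension
    while holding the other coordinates fixed at z_base.
    `generator` is a hypothetical callable mapping (n, k) latents to
    (n, d) generated samples or features."""
    Z = np.tile(np.asarray(z_base, dtype=float), (n, 1))
    Z[:, dim] = np.random.uniform(-scale, scale, size=n)
    return generator(Z)

def _finite(dgm):
    """Drop the infinite-death point that ripser reports for H0."""
    return dgm[np.isfinite(dgm[:, 1])]

def topological_similarity(generator, z_a, z_b, dim, maxdim=1):
    """Compare the topology of two conditional submanifolds for the same
    swept dimension but different fixed codes; a small distance suggests
    the dimension acts consistently across the latent space."""
    dgms_a = ripser(conditional_point_cloud(generator, z_a, dim), maxdim=maxdim)['dgms']
    dgms_b = ripser(conditional_point_cloud(generator, z_b, dim), maxdim=maxdim)['dgms']
    return sum(wasserstein(_finite(da), _finite(db))
               for da, db in zip(dgms_a, dgms_b))
```

The Wasserstein distance between persistence diagrams is one common choice for comparing barcodes; the released code at the GitHub link below is the authoritative implementation of the paper's actual metric.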
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Supplementary Material: zip
Code: [stanfordmlgroup/disentanglement](https://github.com/stanfordmlgroup/disentanglement)
Data: [CelebA](https://paperswithcode.com/dataset/celeba), [CelebA-HQ](https://paperswithcode.com/dataset/celeba-hq), [dSprites](https://paperswithcode.com/dataset/dsprites)