Evaluating the Disentanglement of Deep Generative Models through Manifold Topology

Sep 28, 2020 (edited Jan 25, 2021) · ICLR 2021 Poster
  • Keywords: generative models, evaluation, disentanglement
  • Abstract: Learning disentangled representations is regarded as a fundamental task for improving the generalization, robustness, and interpretability of generative models. However, measuring disentanglement has been challenging and inconsistent, often dependent on an ad-hoc external model or specific to a certain dataset. To address this, we present a method for quantifying disentanglement that uses only the generative model itself, by measuring the topological similarity of conditional submanifolds in the learned representation. The method comes in both unsupervised and supervised variants (see the sketch after this list). To illustrate the effectiveness and applicability of our method, we empirically evaluate several state-of-the-art models across multiple datasets. We find that our method ranks models similarly to existing methods. We make our code publicly available at https://github.com/stanfordmlgroup/disentanglement.
  • One-sentence Summary: Evaluate disentanglement of generative models by measuring manifold topology using persistent homology
  • Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
  • Supplementary Material: zip
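
The persistent-homology machinery named in the summary can be illustrated with a small sketch. The code below is not the authors' released implementation (for that, see the repository linked in the abstract); it only shows the generic pipeline under stated assumptions: sample a conditional submanifold by fixing one latent coordinate of a generator, summarize its topology with a persistence diagram via `ripser`, and compare diagrams across conditions with `persim`'s Wasserstein distance. The `decoder` here is a hypothetical toy generator standing in for a trained model, and the uniform latent sampling is likewise an assumption for illustration.

```python
# Minimal sketch (not the authors' method): compare the topology of
# conditional submanifolds of a generator via persistent homology.
import numpy as np
from ripser import ripser          # pip install ripser
from persim import wasserstein     # pip install persim

def decoder(z):
    # Hypothetical toy generator: maps 2-D latents onto a circle whose
    # radius is controlled by z[:, 0] and whose angle is z[:, 1].
    r = 1.0 + 0.5 * np.tanh(z[:, 0])
    theta = z[:, 1]
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

def conditional_diagram(fixed_dim, fixed_val, n=256, latent_dim=2, hom_dim=1):
    """Persistence diagram (dimension hom_dim) of the submanifold obtained
    by fixing one latent coordinate and decoding the rest."""
    z = np.random.uniform(-np.pi, np.pi, size=(n, latent_dim))
    z[:, fixed_dim] = fixed_val                # condition on one coordinate
    x = decoder(z)                             # points on the conditional submanifold
    dgm = ripser(x, maxdim=hom_dim)['dgms'][hom_dim]
    return dgm[np.isfinite(dgm).all(axis=1)]   # drop infinite bars for Wasserstein

# If a latent coordinate is disentangled, varying its fixed value should
# leave the topology of the conditional submanifold (roughly) unchanged,
# so the distance between the two diagrams should be small.
dgm_a = conditional_diagram(fixed_dim=0, fixed_val=-1.0)
dgm_b = conditional_diagram(fixed_dim=0, fixed_val=+1.0)
print("Wasserstein distance between conditional diagrams:", wasserstein(dgm_a, dgm_b))
```

In this toy example the fixed coordinate only rescales the circle, so both conditional submanifolds keep one dominant 1-dimensional hole and the reported distance stays small; an entangled coordinate that changed the submanifold's topology would instead produce dissimilar diagrams.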