Properties from mechanisms: an equivariance perspective on identifiable representation learning

29 Sept 2021 (edited 15 Mar 2022) · ICLR 2022 Spotlight
  • Keywords: representation learning, equivariance, independent component analysis, ICA, autoencoders
  • Abstract: A key goal of unsupervised representation learning is "inverting" a data generating process to recover its latent properties. Existing work that provably achieves this goal relies on strong assumptions on relationships between the latent variables (e.g., independence conditional on auxiliary information). In this paper, we take a very different perspective on the problem and ask, "Can we instead identify latent properties by leveraging knowledge of the mechanisms that govern their evolution?" We provide a complete characterization of the sources of non-identifiability as we vary knowledge about a set of possible mechanisms. In particular, we prove that if we know the exact mechanisms under which the latent properties evolve, then identification can be achieved up to any equivariances that are shared by the underlying mechanisms. We generalize this characterization to settings where we only know some hypothesis class over possible mechanisms, as well as settings where the mechanisms are stochastic. We demonstrate the power of this mechanism-based perspective by showing that we can leverage our results to generalize existing identifiable representation learning results. These results suggest that by exploiting inductive biases on mechanisms, it is possible to design a range of new identifiable representation learning approaches.
  • One-sentence Summary: Representation learning is identifiable up to any equivariances of the (known) mechanisms that govern an environment's evolution.
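A minimal sketch of the headline claim, in notation we introduce here for illustration (the symbols g, m, \hat f, h are ours, not taken from the page), assuming an invertible learned encoder and exact reconstruction: let observations evolve as x_{t+1} = g(m(g^{-1}(x_t))), where g is the true injective decoder and m is the known mechanism. If a learned encoder \hat f reproduces these transitions under the same mechanism m, then

\[
  % matching the observed transitions forces the two conjugated mechanisms to agree
  \hat f^{-1} \circ m \circ \hat f \;=\; g \circ m \circ g^{-1}
  \quad\Longrightarrow\quad
  m \circ h \;=\; h \circ m,
  \qquad h := \hat f \circ g,
\]

i.e., the residual ambiguity h commutes with m, so the latents are recovered exactly up to an equivariance (symmetry) of the mechanism, which is the sense of identifiability stated in the abstract.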