Interaction Asymmetry: A General Principle for Learning Composable Abstractions

Published: 30 Oct 2024, Last Modified: 07 Nov 2024 · CRL@NeurIPS 2024 Poster · CC BY 4.0
Keywords: disentanglement, compositional generalization, identifiability, unsupervised learning, out-of-domain generalization
TL;DR: We propose interaction asymmetry ("parts of the same concept have more complex interactions than parts of different concepts") as a general principle for provable disentanglement and compositional generalization of concepts without supervision.
Abstract: Learning disentangled representations of concepts and re-composing them in unseen ways is crucial for generalizing to out-of-domain situations. However, the underlying properties of concepts that enable such disentanglement and compositional generalization remain poorly understood. In this work, we propose the principle of interaction asymmetry, which states: "Parts of the same concept have more complex interactions than parts of different concepts". We formalize this via block diagonality conditions on the $(n+1)$-th order derivatives of the generator mapping concepts to observed data, where different orders of "complexity" correspond to different $n$. Using this formalism, we prove that interaction asymmetry enables both disentanglement and compositional generalization. Our framework unifies recent theoretical results for learning concepts of objects, which we show are recovered as special cases with $n=0$ or $n=1$. We provide results for up to $n=2$, thus extending these prior works to more flexible generator functions, and conjecture that the same proof strategies generalize to larger $n$.
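To illustrate the block-diagonality condition, here is a hedged sketch for the case $n=1$; the block notation below is assumed for exposition and need not match the paper's exact formulation. Writing the generator as $f(z)$ with latent concepts partitioned into blocks $z = (z_{B_1}, \dots, z_{B_K})$, block diagonality of the second-order derivatives requires

$$\frac{\partial^2 f}{\partial z_i \, \partial z_j} = 0 \quad \text{for } i \in B_k,\; j \in B_l,\; k \neq l,$$

i.e., second-order interactions vanish across different concepts. On a suitably regular latent domain, this forces the generator to be additive across blocks, $f(z) = \sum_{k} f_k(z_{B_k})$, while interactions within each block remain unconstrained, matching the intuition that parts of the same concept may interact in more complex ways than parts of different concepts.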
Submission Number: 27