Keywords: Generative Model, Generalization, Combinatorial Generalization, Machine Learning, Manifold Learning, Representation Learning
Abstract: Combinatorial generalization (CG)—generalizing to unseen combinations of known semantic factors—remains a grand challenge in machine learning.
While symmetry-based methods are promising, the symmetries they learn are tied to the observed data, and so they fail at what we term $\textbf{symmetry generalization}$: extending learned symmetries to novel data.
We tackle this by proposing a novel framework that endows the latent space with the structure of a $\textbf{symmetric space}$, a class of manifolds whose geometric properties provide a principled way to extend these symmetries.
Our method operates in two steps: first, it imposes this structure by learning the underlying algebraic properties via the $\textbf{Cartan decomposition}$ of a learnable Lie algebra (see the first sketch below).
Second, it uses $\textbf{geodesic symmetry}$ as a self-supervisory signal to ensure this learned structure extrapolates from observed samples to unseen ones (see the second sketch below).
A detailed analysis on a synthetic dataset validates our geometric claims, and experiments on standard CG benchmarks show that our method significantly outperforms existing approaches.
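To make the first step concrete: a Cartan decomposition splits a Lie algebra as $\mathfrak{g} = \mathfrak{k} \oplus \mathfrak{p}$ subject to the bracket relations $[\mathfrak{k},\mathfrak{k}] \subseteq \mathfrak{k}$, $[\mathfrak{k},\mathfrak{p}] \subseteq \mathfrak{p}$, and $[\mathfrak{p},\mathfrak{p}] \subseteq \mathfrak{k}$. The abstract does not spell out a loss, so the following is a minimal PyTorch sketch of how such relations could be enforced on a learnable matrix Lie-algebra basis; the names `basis_k`, `basis_p`, and `cartan_penalty` are hypothetical, not the authors' API.

```python
# Hypothetical sketch (not the authors' code): penalize violations of the
# Cartan bracket relations on a learnable matrix Lie-algebra basis.
import torch

def bracket(A, B):
    """Matrix commutator [A, B] = AB - BA."""
    return A @ B - B @ A

def residual(X, basis):
    """Component of X outside span(basis) under the Frobenius inner product."""
    B = basis.reshape(basis.shape[0], -1)          # (m, n*n) flattened basis
    x = X.reshape(-1)                              # (n*n,) flattened bracket
    coeffs = torch.linalg.pinv(B.T) @ x            # least-squares coefficients
    return x - B.T @ coeffs                        # what the subspace cannot explain

def cartan_penalty(basis_k, basis_p):
    """Sum of squared bracket components that land outside the required subspace:
    [k,k] must stay in k, [k,p] must land in p, [p,p] must fall back into k."""
    loss = torch.tensor(0.0)
    for K1 in basis_k:
        for K2 in basis_k:
            loss = loss + residual(bracket(K1, K2), basis_k).pow(2).sum()
        for P in basis_p:
            loss = loss + residual(bracket(K1, P), basis_p).pow(2).sum()
    for P1 in basis_p:
        for P2 in basis_p:
            loss = loss + residual(bracket(P1, P2), basis_k).pow(2).sum()
    return loss

# Illustrative shapes: a 4x4 matrix algebra with 2 "rotation" and 3 "translation" generators.
basis_k = torch.randn(2, 4, 4, requires_grad=True)
basis_p = torch.randn(3, 4, 4, requires_grad=True)
cartan_penalty(basis_k, basis_p).backward()        # gradients flow to both bases
```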
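For the second step, the geodesic symmetry at a point $x$ reverses geodesics through it, $s_x(\exp_x(v)) = \exp_x(-v)$; on the latent manifold this amounts to flipping the sign of the transvection generator. Below is a hedged sketch, under the assumption that a matrix group acts linearly on latent codes; `decoder`, `critic`, and `symmetry_loss` are placeholder names, not the paper's interface.

```python
# Hypothetical sketch (not the authors' code): geodesic reflection of a latent
# code as a self-supervisory signal, assuming a matrix group acts on codes.
import torch

def transvect(z0, basis_p, a, sign=1.0):
    """Move the base code z0 along the geodesic generated by sum_i a_i * basis_p[i]."""
    P = torch.einsum('i,ijk->jk', a, basis_p)      # combine translation generators
    return torch.matrix_exp(sign * P) @ z0         # exp(+P) moves, exp(-P) reflects

def symmetry_loss(decoder, critic, z0, basis_p, a):
    """Self-supervision: the reflection s_{z0}(exp(P) z0) = exp(-P) z0 may not be
    covered by training data, yet its decoding should still look realistic."""
    z_reflected = transvect(z0, basis_p, a, sign=-1.0)
    return -critic(decoder(z_reflected)).mean()    # e.g. a GAN-style realism score
```

In use, one would presumably pair this term with a reconstruction loss on the observed point $\exp(P)\,z_0$, so the reflection, rather than the data, supplies the supervision for unseen factor combinations.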
Supplementary Material: zip
Primary Area: generative models
Submission Number: 17018