Variational Semantic Decomposition of Compositional Representations

17 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: vector symbolic architecture, semantic decomposition, compositional representation
Abstract: Humans exploit the world’s compositional structure by combining familiar concepts to form new ones. A key goal in artificial intelligence is to model this capability by learning compositional representations that reflect the structure of the world, together with the inverse operation, semantic decomposition: recovering meaningful constituents from composite representations to enable generalization. The binding operation provides a way to compose vector representations, but existing decomposition methods, such as resonator networks, rely on non-differentiable dynamics, lack convergence guarantees, and assume quasi-orthogonality of the constituent factors. We introduce the Variational Resonator Network (VRN), which reframes decomposition as Bayesian inference. The VRN is a fully differentiable model derived from a variational free energy objective, which allows it to be integrated into neural architectures. It generalizes decomposition beyond representations generated from quasi-orthogonal bipolar vectors and is guaranteed to converge. Experiments show that the VRN matches the resonator network in decomposition accuracy when factors are quasi-orthogonal and outperforms it when factors are correlated or real-valued, while also integrating naturally into probabilistic models.
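For context, the sketch below illustrates the classical setup the abstract refers to: composing bipolar vectors by Hadamard binding and decomposing the composite with the standard resonator-network cleanup loop. This is a minimal NumPy illustration of the baseline that the VRN generalizes, not the paper's variational method; the dimension, codebook sizes, and iteration count are arbitrary assumptions for the example.

```python
import numpy as np

# Illustrative sketch only: Hadamard binding of bipolar vectors and a
# classic resonator-network decomposition loop -- NOT the paper's
# Variational Resonator Network. Sizes are arbitrary example choices.

rng = np.random.default_rng(0)
D, M = 2000, 20                       # vector dimension, codewords per factor
codebooks = [rng.choice([-1, 1], size=(M, D)) for _ in range(3)]

# Compose: bind one codeword from each factor via element-wise multiplication.
true_idx = [int(rng.integers(M)) for _ in range(3)]
s = np.prod([cb[i] for cb, i in zip(codebooks, true_idx)], axis=0)

# Decompose: repeatedly unbind the other factor estimates and clean up
# against each codebook, keeping bipolar (sign) estimates.
estimates = [np.sign(cb.sum(axis=0)) for cb in codebooks]   # superposition init
for est in estimates:
    est[est == 0] = 1
for _ in range(50):
    for f, cb in enumerate(codebooks):
        others = np.prod([estimates[g] for g in range(3) if g != f], axis=0)
        unbound = s * others                            # unbinding (x * x = 1)
        cleaned = np.sign(cb.T @ (cb @ unbound))        # cleanup through codebook
        cleaned[cleaned == 0] = 1
        estimates[f] = cleaned

decoded = [int(np.argmax(cb @ est)) for cb, est in zip(codebooks, estimates)]
print("true:", true_idx, "decoded:", decoded)
```

With quasi-orthogonal random codebooks of this size, the loop typically recovers the true factor indices; the abstract's point is that this style of iteration is non-differentiable (due to the sign nonlinearity), has no convergence guarantee, and breaks down when factors are correlated or real-valued, which is the regime the VRN targets.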
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Submission Number: 9878