Keywords: vector-symbolic architectures, in-memory computing, factorizers, disentangling perceptual representations, resonator networks, stochastic computing
TL;DR: This paper describes the role of stochasticity, applied at different levels, in efficiently factorizing (i.e., disentangling) distributed representations.
Abstract: One can exploit the compute-in-superposition capabilities of vector-symbolic architectures (VSA) to efficiently factorize high-dimensional distributed representations into their constituent atomic vectors. Such factorizers, however, suffer from the phenomenon of limit cycles. Applying noise during the iterative decoding is one mechanism to address this issue. In this paper, we explore ways to further relax the noise requirement by applying noise only when initializing the VSA's reconstruction codebooks. While the need for noise during iterations makes analog in-memory computing systems a natural implementation medium, the sufficiency of initialization noise keeps digital hardware equally viable, broadening the implementation possibilities of factorizers. Our study finds that, although the best performance shifts from initialization noise to iterative noise as the number of factors increases from 2 to 4, both extend the operational capacity by at least $50\times$ compared to the baseline resonator-network factorizer. Our code is available at: https://github.com/IBM/in-memory-factorizer
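To make the decoding loop concrete, below is a minimal, hypothetical sketch of a resonator-network factorizer for bipolar VSA vectors with the two noise knobs discussed in the abstract (noise injected once at codebook-estimate initialization versus noise injected at every iteration). The function name `resonator_factorize` and the parameters `init_noise` and `iter_noise` are illustrative assumptions, not the authors' implementation; see the linked repository for the actual code.

```python
import numpy as np

def resonator_factorize(s, codebooks, iters=200, init_noise=0.0, iter_noise=0.0, rng=None):
    """Sketch of a resonator network factorizing a bipolar product vector.

    s          : composite vector (element-wise product of one atom per factor)
    codebooks  : list of (D, M_f) matrices whose columns are candidate atoms
    init_noise : std of Gaussian noise added once to the initial estimates
    iter_noise : std of Gaussian noise added to the similarities every iteration
    """
    rng = np.random.default_rng() if rng is None else rng
    D = s.shape[0]
    # Initialize each factor estimate as the (optionally noisy) superposition of its candidates.
    est = [np.sign(W.sum(axis=1) + init_noise * rng.standard_normal(D)) for W in codebooks]
    for _ in range(iters):
        for f, W in enumerate(codebooks):
            # Unbind the other current estimates from the composite vector.
            others = np.prod([est[g] for g in range(len(codebooks)) if g != f], axis=0)
            target = s * others
            # Similarity to every candidate atom, optionally perturbed to escape limit cycles.
            sims = W.T @ target + iter_noise * rng.standard_normal(W.shape[1])
            # Project back through the codebook and binarize.
            est[f] = np.sign(W @ sims)
    # Decode: index of the most similar atom in each codebook.
    return [int(np.argmax(W.T @ e)) for W, e in zip(codebooks, est)]
```

With `init_noise > 0` and `iter_noise = 0`, randomness enters only at initialization (friendly to digital hardware); with `iter_noise > 0`, fresh noise is drawn every iteration, which analog in-memory computing provides intrinsically.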
Submission Number: 17