Keywords: vector-symbolic architectures, in-memory computing, factorizers, resonator networks, stochastic computing
TL;DR: This paper describes how to efficiently factorize distributed representations with the help of different levels of stochasticity
Abstract: To efficiently factorize high-dimensional distributed representations into their constituent atomic vectors, one can exploit the compute-in-superposition capabilities of vector-symbolic architectures (VSA).
Such factorizers, however, suffer from the phenomenon of limit cycles.
Applying noise during the iterative decoding is one mechanism to address this issue.
In this paper, we explore ways to further relax the noise requirement by applying noise only once, when initializing the VSA's reconstruction codebooks.
While the need for noise during the iterations makes analog in-memory computing systems a natural implementation medium, the sufficiency of initialization noise keeps digital hardware an equally viable option.
This broadens the implementation possibilities of factorizers.
Our study finds that while the best performance shifts from initialization noise to iterative noise as the number of factors increases from 2 to 4, both extend the operational capacity by at least $50\times$ compared to the baseline factorizer resonator networks.
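The iterative decoding loop that these factorizers build on can be sketched as follows. This is a minimal NumPy illustration of a baseline resonator network for two factors bound by the Hadamard product; the dimensions, codebook sizes, iteration count, and the `sigma` noise term are illustrative assumptions, not the paper's exact mechanism or noise schedule.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M = 1024, 10            # vector dimension, codevectors per factor (illustrative)

# Random bipolar codebooks for two factors
X = rng.choice([-1, 1], size=(D, M))
Y = rng.choice([-1, 1], size=(D, M))

# Composite vector to factorize: element-wise (Hadamard) binding of one
# codevector from each codebook
ix, iy = 3, 7
s = X[:, ix] * Y[:, iy]

# Initialize each estimate as the superposition of all codevectors
# (+0.5 breaks sign ties at zero)
x_hat = np.sign(X.sum(axis=1) + 0.5)
y_hat = np.sign(Y.sum(axis=1) + 0.5)

sigma = 0.0                # iterative-noise level; 0 disables noise injection
for _ in range(50):
    # Unbind one factor estimate, project the residual onto the codebook,
    # optionally perturb the similarity scores, and re-threshold
    ax = X.T @ (s * y_hat) + sigma * rng.standard_normal(M)
    x_hat = np.sign(X @ ax + 0.5)
    ay = Y.T @ (s * x_hat) + sigma * rng.standard_normal(M)
    y_hat = np.sign(Y @ ay + 0.5)

# Decode each factor as the nearest codevector; on convergence this
# recovers the bound pair (ix, iy)
print(np.argmax(X.T @ x_hat), np.argmax(Y.T @ y_hat))
```

A limit cycle occurs when the estimate pair `(x_hat, y_hat)` revisits a previous state without matching the true factors; injecting noise (via `sigma` above, or, as the paper proposes, only at codebook initialization) perturbs the trajectory enough to escape such cycles.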
Submission Number: 17