Neuro-vector-symbolic architectures: Exploring computation in superposition for perception, reasoning, and combinatorial search

Published: 01 Jan 2023, Last Modified: 14 May 2025. License: CC BY-SA 4.0
Abstract: Deep learning has succeeded in a wide range of tasks thanks to increasingly large neural network models and curated datasets. However, these models, with hundreds of billions of parameters, require immense computing and memory resources for training and inference, are data-hungry, and lack transparency and interpretability. Vector-symbolic architectures (VSA) offer a viable alternative: an emerging computational framework inspired by attributes of neuronal circuits, including high dimensionality, fully distributed holographic representation, and (pseudo)randomness. This thesis integrates VSA into advanced deep neural network models, yielding a neuro-vector-symbolic architecture (Neuro-VSA). Neuro-VSA enhances several aspects of modern AI models, namely communication, compression, search, perception, and reasoning, by exploiting VSA's concept of computation in superposition, which can also be applied to nonlinear functions:

1) Communication. We introduce a multi-node perceptual system with over-the-air links, where information from multiple sensors is superposed into a single VSA vector, yielding effective source and channel coding. Moreover, the encoded vector can be fed directly into a VSA-based classifier at the receiver without any intermediate decoding, enabling efficient near-channel classification.

2) Model compression. We propose a new methodology that reduces the storage requirements of motor-imagery brain-computer interface (MI-BCI) models by up to 2.95x at iso-accuracy, leveraging the VSA superposition of many subject-specific models into a single model.

3) Perception and reasoning. We present Neuro-VSA's capability in solving abstract visual reasoning tasks. Neuro-VSA exploits the powerful operators on VSA representations, which serve as a common language between neural networks and symbolic AI. Its efficacy is demonstrated on Raven's Progressive Matrices datasets: Neuro-VSA achieves a new accuracy record of 88.1% on I-RAVEN, and its tractable probabilistic abductive reasoning is up to two orders of magnitude faster than other neuro-symbolic approaches.

4) Combinatorial search. Combinations of semantic attributes form a product space, which can be represented with VSA binding operations and dictionaries. We present a compute engine that efficiently factorizes bipolar dense VSA product vectors by exploiting computation in superposition, nonlinear activation functions, and stochasticity; the resulting factorizer enhances the operational capacity by up to five orders of magnitude. We further present a method for factorizing sparse block codes, which improves on the dense bipolar factorizer with faster convergence and more accurate integration with neural networks for combinatorial inference.

5) Computation of nonlinear functions in superposition. Finally, we propose new directions that take advantage of capacity-rich neural network models to lower the cost of inference by exploiting computation in superposition. Our method superposes an arbitrary number of inputs into a fixed-width VSA data structure, which can be processed by nonlinear functions in a single pass, leading to a speedup nearly proportional to the number of superposed items.
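The bind and bundle operations underlying the superposition ideas above (items 1, 2, and 5 in particular) can be illustrated with a minimal sketch. The sketch below uses dense bipolar hypervectors with element-wise multiplication as binding and the sign of the element-wise sum as bundling; the dimension, the key/value names, and the retrieve-by-unbinding query are illustrative assumptions, not the thesis's specific encoders.

```python
import numpy as np

rng = np.random.default_rng(42)
D = 10_000                       # high dimensionality is what makes VSA retrieval reliable

def hv():                        # random bipolar hypervector
    return rng.choice([-1, 1], size=D)

def bundle(*vs):                 # superposition: sign of the element-wise sum
    s = np.sign(np.sum(vs, axis=0))
    return np.where(s == 0, 1, s)    # break possible ties

def bind(a, b):                  # binding: element-wise multiplication (self-inverse)
    return a * b

def sim(a, b):                   # normalized similarity between two hypervectors
    return a @ b / D

# Encode three key-value pairs into a single fixed-width vector
keys = {name: hv() for name in ("temperature", "humidity", "pressure")}
vals = {name: hv() for name in ("high", "low", "rising")}
record = bundle(bind(keys["temperature"], vals["high"]),
                bind(keys["humidity"],    vals["low"]),
                bind(keys["pressure"],    vals["rising"]))

# Query the superposed record: unbind with a key, then clean up against the value codebook
query = bind(record, keys["humidity"])
best = max(vals, key=lambda name: sim(query, vals[name]))
print(best)                      # 'low' with high probability at D = 10,000
```

Because the cross terms of the other bound pairs behave as quasi-orthogonal noise, the unbound query stays closest to the correct value even though the record holds all three pairs in one vector.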
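For the combinatorial search item, the factorization problem can be conveyed with a baseline resonator-style iteration over a three-factor product space; this is a sketch of the general technique only, without the nonlinear activation functions and stochasticity that the thesis's compute engine adds to extend operational capacity. The dimensions, codebook sizes, and number of factors below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M = 1024, 20          # hypervector dimension, codebook size per factor

# Random bipolar codebooks for three factors (product space of M**3 combinations)
codebooks = [rng.choice([-1, 1], size=(M, D)) for _ in range(3)]

# Ground-truth factors and their bound product (element-wise multiplication)
truth = [int(rng.integers(M)) for _ in range(3)]
s = codebooks[0][truth[0]] * codebooks[1][truth[1]] * codebooks[2][truth[2]]

# Initialize each estimate as the superposition of its entire codebook
est = [np.sign(cb.sum(axis=0)) for cb in codebooks]
est = [np.where(e == 0, 1, e) for e in est]       # break ties in the sign

for _ in range(100):
    for f in range(3):
        # Unbind the current estimates of the other two factors from the product
        others = [est[g] for g in range(3) if g != f]
        unbound = s * others[0] * others[1]
        # Project onto the codebook and re-bipolarize (cleanup in superposition)
        a = codebooks[f] @ unbound                # similarity with each code vector
        est[f] = np.sign(codebooks[f].T @ a)
        est[f] = np.where(est[f] == 0, 1, est[f])

decoded = [int(np.argmax(cb @ e)) for cb, e in zip(codebooks, est)]
print(decoded, truth)    # typically identical for product spaces this small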