LUT Based Neural Networks as Neuro-Symbolic Systems

Published: 29 Aug 2025, Last Modified: 29 Aug 2025 · NeSy 2025 - Phase 2 Poster · CC BY 4.0
Keywords: Weightless Neural Networks, Logic Neural Networks, LUT NNs, Neuro-Symbolic
Abstract: Although not a recent topic of research, neuro-symbolic systems (NSAI) may be one of the latest frontiers in artificial intelligence to catch the attention of the broader scientific community. One reason for this interest could be that NSAI is a fundamental step towards artificial general intelligence (AGI). Many would argue that a tightly coupled integration of the neural and symbolic paradigms is not necessary, since the state of the art on both sides can interact through a common interface. Others, however, see the benefits of a tight integration under the same computational substrate. Such a combination of capabilities comes, however, at a computational price, in both memory and time. When these hybrid models are deployed in silicon, these costs can become a serious drawback, especially for online learning. This work proposes the adoption of a family of weightless neural networks (WNNs) to bring neuro-symbolic systems to the level of integrated circuits. WNNs are a distinct class of neural models inspired by the decoding performed by the dendritic trees of biological neurons. Instead of weights and dot products to determine neural activity, they use lookup tables (LUTs). An n-input LUT can realize any one of the $2^{2^n}$ possible logic functions of its inputs, giving it significant learning capacity compared to models based on multiply-add operations. WNNs are inherently low-energy and low-latency, since inference consists primarily of table lookups, and they can easily be prototyped and fabricated in hardware. Our initial FPGA prototypes of LUT-node-based WNNs with counting Bloom filters, arithmetic-free hashing, and bleaching consume 85-99\% fewer cycles and 80-95\% less energy than deep neural networks of the same accuracy. We have further improved WNNs through ensembles and pruning of LUT nodes, and the resulting ULEEN models can outperform binary neural networks (BNNs). Our recent research introduced Differentiable Weightless Neural Networks (DWNs), built on the principle of Extended Finite Differences (EFD). We also employ learnable mapping, learnable reduction, and spectral regularization to improve accuracy while reducing model size and improving efficiency. On several workloads, including keyword spotting and anomaly detection from MLPerf Tiny, DWNs provide roughly 10x the throughput and better accuracy than AMD/Xilinx FINN implementations. On 11 tabular datasets, DWNs achieved higher accuracy and higher throughput and, more notably, yielded very small classifiers, smaller than those produced by DiffLogicNet and Tiny Classifiers. In software implementations, DWNs compare favorably to AutoGluon's XGBoost/CatBoost/LightGBM/TabNN/NNFastAITab and Google's TabNet. The most surprising observation was that on a few datasets the DWN training and input-mapping methodology yielded near-zero-cost hardware implementations, suggesting that DWNs have a unique ability to extract symbols. A DWN can therefore be regarded either as a symbol extractor or as an ultra-fast, ultra-thin neuro-symbolic inference engine: the learnable input mapping can be viewed as analogous to rule-based learning, while the lookup-table contents can be viewed as the neural component. In DWNs, the integration of explicit knowledge with implicitly acquired knowledge, in a similar fashion to other weightless models, is the subject of ongoing research.
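To make the lookup-based inference concrete, the following is a minimal NumPy sketch, not the authors' ULEEN/DWN implementation (see the linked repository for that); the `LUTNode` class and all names are illustrative. Each n-input node addresses a $2^n$-entry table with n selected input bits, so its table can encode any one of the $2^{2^n}$ logic functions, and a class score is simply the popcount of node outputs, with no multiply-add operations.

```python
import numpy as np

# Illustrative sketch only (hypothetical names; not the authors' code).
class LUTNode:
    """An n-input LUT node: n selected input bits form an address into a 2**n-entry table."""
    def __init__(self, n_inputs, input_indices, rng):
        self.input_indices = np.asarray(input_indices)        # which input bits feed this node
        self.table = rng.integers(0, 2, size=2 ** n_inputs)   # one of the 2^(2^n) possible logic functions

    def __call__(self, x_bits):
        # Build the table address from the selected input bits (MSB first).
        addr = 0
        for bit in x_bits[self.input_indices]:
            addr = (addr << 1) | int(bit)
        return self.table[addr]                                # pure table lookup, no multiply-add

# A tiny "discriminator": the popcount of its LUT-node outputs scores one class.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=16)                                # a 16-bit binarized input
nodes = [LUTNode(4, rng.choice(16, size=4, replace=False), rng) for _ in range(8)]
score = sum(node(x) for node in nodes)
print("class score:", score)
```

In the DWN work proper, both the table contents and the input mapping are trained by gradient descent (via Extended Finite Differences and the learnable mapping); in this sketch the tables and the input-to-node mapping are simply random, for illustration only.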
Track: Main Track
Paper Type: Extended Abstract
Resubmission: No
Software: https://github.com/alanbacellar/DWN
Publication Agreement: pdf
Submission Number: 11