Keywords: Differentiable Weightless Neural Networks, Logic Gate Networks, Performance Comparison, Edge Computing, Internet of Things
TL;DR: Differentiable logic-gate and LUT networks can outperform traditional DNNs on edge devices, offering faster inference and lower power, but scaling them beyond small CNNs is hard: large parameter counts and long training times hinder real-world, large-scale models.
Abstract: Miniaturizing Machine Learning (ML) models to operate accurately in resource-constrained environments may improve the intelligence of everyday objects. Applications abound across the Internet of Things (IoT), in personal medical devices, and in consumer electronics such as smartphones and augmented reality glasses. Differentiable Weightless Neural Networks (DWNNs), such as differentiable Logic Gate Networks (LGNs) and Look-Up Table Networks (LTNs), represent a class of models that accelerates ML inference by orders of magnitude while retaining high predictive accuracy. While small-scale LGNs and LTNs already expedite model inference and reduce resource usage, performance and robustness benchmarks are not well reported, which hinders the development of large architectures suitable for real-world applications. This paper fills this gap by comparing LGNs, LTNs, Multi-Layer Perceptrons (MLPs), and their convolutional counterparts on the basis of test accuracy, training time, and robustness to noise across key model and training variations. We introduce the Look-Up Table Convolutional Network (LTCNN), which reduces training time by 2–4X compared to prior logic-based convolutions. By benchmarking over 4,000 models, we identify critical scaling limitations: unlike standard CNNs, increasing DWNN parameter counts tends to exacerbate brittleness to environmental noise rather than improving generalization. Furthermore, while employing learnable interconnects improves accuracy by 6–20%, it incurs a 3X computational penalty. These results quantify the distinct trade-offs of weightless architectures, highlighting the need for training strategies to scale DWNNs for real-world applications.
Primary Area: datasets and benchmarks
Submission Number: 7911