MicroVSA: An Ultra-Lightweight Vector Symbolic Architecture-based Classifier Library for Always-On Inference on Tiny Microcontrollers

Published: 01 Jan 2024 · Last Modified: 14 May 2025 · ASPLOS (2) 2024 · CC BY-SA 4.0
Abstract: Artificial intelligence (AI) on tiny edge devices has become feasible thanks to the emergence of high-performance microcontrollers (MCUs) and lightweight machine learning (ML) models. Nevertheless, the cost and power consumption of these MCUs, together with the computational requirements of these ML algorithms, still present barriers that prevent the widespread inclusion of AI functionality on smaller, cheaper, and lower-power devices. Thus, there is an urgent need for more efficient ML algorithms and implementation strategies suitable for lower-end MCUs.

This paper presents MicroVSA, a library of optimized implementations of a low-dimensional computing (LDC) classifier, a recently proposed variant of vector symbolic architecture (VSA), for 8-, 16-, and 32-bit MCUs. MicroVSA achieves up to a 21.86x speedup and 8x lower flash usage compared to a vanilla LDC implementation. Evaluation results on the three most common always-on inference tasks - myocardial infarction detection, human activity recognition, and hot word detection - demonstrate that MicroVSA outperforms traditional classifiers and achieves accuracy comparable to tiny deep learning models, while requiring only a few tens of bytes of RAM and fitting easily on tiny 8-bit MCUs. For instance, our model for recognizing human activity from inertial sensor data needs only 2.46 KiB of flash and 0.02 KiB of RAM and can complete one inference in 0.85 ms on a 32-bit ARM Cortex-M4 MCU or 11.82 ms on a tiny 8-bit AVR MCU, whereas the RNN model running on a higher-end ARM Cortex-M3 requires 62.0 ms. Our study suggests that ubiquitous ML deployment on low-cost tiny MCUs is possible, and that further research on VSA model training, model compression, and implementation techniques is needed to drive the cost and power of ML on edge devices even lower.
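To illustrate why VSA/LDC-style inference suits such constrained MCUs, below is a minimal C sketch of the classification step common to binary VSA classifiers: a packed binary query hypervector is compared against per-class hypervectors by Hamming distance, computed with only XOR and popcount. This is an assumption-laden illustration, not MicroVSA's actual API; the names and dimensions (DIM_BITS, NUM_CLASSES, class_hvs, ldc_classify) and the placeholder class vectors are hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical dimensions for illustration; real LDC models pick these
   during training. Hypervector bits are packed 32 per word. */
#define DIM_BITS    64
#define DIM_WORDS   (DIM_BITS / 32)
#define NUM_CLASSES 3

/* Placeholder trained binary class hypervectors (values are arbitrary). */
static const uint32_t class_hvs[NUM_CLASSES][DIM_WORDS] = {
    {0xA5A5A5A5u, 0x0F0F0F0Fu},
    {0x5A5A5A5Au, 0xF0F0F0F0u},
    {0xFFFF0000u, 0x00FFFF00u},
};

/* Portable popcount; an optimized MCU port would use a lookup table or a
   hardware instruction where available. */
static int popcount32(uint32_t x) {
    int c = 0;
    while (x) { x &= x - 1u; c++; }
    return c;
}

/* Classify a packed binary query hypervector by minimum Hamming distance
   to the class hypervectors: XOR finds differing bits, popcount counts them. */
static int ldc_classify(const uint32_t query[DIM_WORDS]) {
    int best_class = -1;
    int best_dist = DIM_BITS + 1;
    for (int c = 0; c < NUM_CLASSES; c++) {
        int dist = 0;
        for (int w = 0; w < DIM_WORDS; w++)
            dist += popcount32(query[w] ^ class_hvs[c][w]);
        if (dist < best_dist) { best_dist = dist; best_class = c; }
    }
    return best_class;
}

int main(void) {
    /* Query differing from class 0 by a single bit. */
    const uint32_t query[DIM_WORDS] = {0xA5A5A5A4u, 0x0F0F0F0Fu};
    printf("predicted class: %d\n", ldc_classify(query));
    return 0;
}
```

Note how the inner loop touches only the packed query and one class vector at a time, which is consistent with the paper's claim of a RAM footprint in the tens of bytes: the model weights live in flash and the working state is a handful of integers.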