Memory-Efficient, Limb Position-Aware Hand Gesture Recognition using Hyperdimensional Computing

Published: 07 Feb 2021, Last Modified: 05 May 2023 · tinyML 2021 Regular
Keywords: hyperdimensional computing, gesture recognition, multi-context classification
TL;DR: Sensor fusion enables memory-efficient storage of hyperdimensional computing classification parameters for gesture recognition in multiple limb positions.
Abstract: Electromyogram (EMG) pattern recognition can be used to classify hand gestures and movements for human-machine interface and prosthetics applications, but it often suffers reliability issues caused by changes in limb position. One way to address this is dual-stage classification, in which the limb position is first determined using additional sensors and then used to select among multiple position-specific gesture classifiers. While this improves performance, it also increases model complexity and memory footprint, making a dual-stage classifier difficult to implement on a wearable device with limited resources. In this paper, we present sensor fusion of accelerometer and EMG signals using a hyperdimensional computing model to emulate dual-stage classification in a memory-efficient way. We demonstrate two methods of encoding accelerometer features to act as keys for the retrieval of position-specific parameters from multiple models stored in superposition. Through validation on a dataset of 13 gestures in 8 limb positions, we obtain a classification accuracy of up to $93.34\%$, an improvement of $17.79\%$ over a model trained solely on EMG. We achieve this while only marginally increasing the memory footprint over a single-position model, requiring $8\times$ less memory than a traditional dual-stage classification architecture.
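
The key-value superposition mechanism described in the abstract can be illustrated with a short sketch. The Python snippet below is not the paper's implementation: random bipolar hypervectors stand in for the accelerometer-derived position keys, elementwise multiplication serves as the (self-inverse) binding operator, and the dimensionality, tie-breaking rule, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000       # hypervector dimensionality (illustrative)
N_POSITIONS = 8  # limb positions, matching the paper's dataset
N_GESTURES = 13  # gesture classes, matching the paper's dataset

# Random bipolar keys stand in for the paper's accelerometer-derived
# position encodings.
keys = rng.choice([-1, 1], size=(N_POSITIONS, D))

# One bipolar prototype per (position, gesture) pair: the contents of
# eight position-specific classifiers.
protos = rng.choice([-1, 1], size=(N_POSITIONS, N_GESTURES, D))

# Store all models in superposition: bind each prototype to its position
# key (elementwise multiply is self-inverse for bipolar vectors), sum
# over positions, then binarize; ties are broken toward +1.
summed = (keys[:, None, :] * protos).sum(axis=0)   # shape (N_GESTURES, D)
memory = np.where(summed >= 0, 1, -1)

def classify(query_hv, position_key):
    """Unbind the fused memory with the position key, then pick the
    gesture prototype most similar to the query hypervector."""
    recovered = position_key * memory      # approx. that position's prototypes
    return int(np.argmax(recovered @ query_hv))

# Sanity check: a clean prototype should be retrieved as its own class.
p, g = 3, 7
assert classify(protos[p, g], keys[p]) == g
```

Because binding with a position key is its own inverse here, multiplying the fused memory by a key approximately recovers that position's prototypes, with the other positions' contributions acting as quasi-orthogonal noise; this is the sense in which a single superposed memory can replace several separately stored classifiers.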