Abstract: Fast and accurate online processing is essential for smooth prosthetic hand control with surface electromyography (sEMG) signals. Although transformers are state-of-the-art deep learning models in signal processing, the self-attention mechanism at the core of their operation requires accumulating data over large time windows. They are therefore not suited for online signal processing. In this paper, we use an attention mechanism with sliding windows that allows a transformer to process sequences element by element. Moreover, we increase the sparsity of the network using spiking neurons. We test the model on the NinaproDB8 finger position regression dataset. Our model sets a new state of the art in terms of accuracy on NinaproDB8, while requiring only very short time windows of 3.5 ms at each inference step, and it reduces the number of synaptic operations by a factor of up to 5.3 thanks to the spiking neurons. Our results hold great promise for wearable online sEMG processing systems for prosthetic hand control.
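To make the streaming idea concrete, below is a minimal sketch of attention over a sliding window, processing one input element per inference step rather than accumulating a long sequence. It is an illustrative assumption, not the paper's actual architecture: the names (`SlidingWindowAttention`, `window_size`, `d_model`, `step`) are hypothetical, and the spiking-neuron and multi-head components are omitted.

```python
# Hypothetical sketch of element-by-element attention over a sliding window.
# All names and sizes are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F


class SlidingWindowAttention:
    def __init__(self, d_model: int = 64, window_size: int = 32):
        self.d_model = d_model
        self.window_size = window_size
        # Rolling buffers of past keys and values (the "sliding window").
        self.k_buf = torch.empty(0, d_model)
        self.v_buf = torch.empty(0, d_model)
        # Single-head query/key/value projections (no batching, for clarity).
        self.w_q = torch.randn(d_model, d_model) / d_model**0.5
        self.w_k = torch.randn(d_model, d_model) / d_model**0.5
        self.w_v = torch.randn(d_model, d_model) / d_model**0.5

    def step(self, x: torch.Tensor) -> torch.Tensor:
        """Process one new input element x of shape (d_model,)."""
        q, k, v = x @ self.w_q, x @ self.w_k, x @ self.w_v
        # Append the new key/value and drop anything older than the window.
        self.k_buf = torch.cat([self.k_buf, k[None]])[-self.window_size:]
        self.v_buf = torch.cat([self.v_buf, v[None]])[-self.window_size:]
        # Attend from the current query over the windowed keys only.
        scores = (self.k_buf @ q) / self.d_model**0.5
        return F.softmax(scores, dim=0) @ self.v_buf


# Streaming usage: one short encoded sEMG frame per inference step.
attn = SlidingWindowAttention()
for _ in range(100):
    frame = torch.randn(64)   # stand-in for one encoded sEMG frame
    out = attn.step(frame)    # constant memory; no long window accumulation
```

The key property this illustrates is that per-step cost and memory are bounded by `window_size`, which is what allows inference on very short input windows instead of buffering a long sequence before each prediction.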