A Platform-Agnostic Physiological Signal Compression Approach for Resource-Constrained Computational Headwear
Abstract: Head-based physiological signals such as EEG, EMG, EOG, and ECG collected by wearable systems play a pivotal role in the clinical diagnosis, monitoring, and treatment of major brain disorders. However, these signals are often complex and, particularly with head-worn sensors, weak due to poor electrode contact, making on-device inference for diagnosis impractical. Moreover, the real-time transmission of a large corpus of physiological signals over extended periods consumes significant power and time, limiting the viability of battery-dependent physiological monitoring headwear. To address these issues, this paper presents a deep-learning framework that employs a variational autoencoder (VAE) for physiological signal compression, reducing the computational complexity and energy consumption of wearables. Our approach achieves a compression ratio of 1:585 on spectrogram data, surpassing state-of-the-art compression techniques such as JPEG2000, H.264, the Discrete Cosine Transform (DCT), and Huffman encoding, which handle physiological signals poorly. We validate the efficacy of the compression framework on physiological signals collected from real patients in a clinical setting and deploy the solution on an embedded AI chip commonly used in headwear systems (i.e., ARM Cortex). The proposed framework achieves 91% seizure detection accuracy, confirming its reliability, practicality, and scalability.
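The VAE compression described above can be illustrated with a minimal forward-pass sketch. The dimensions below are hypothetical (not taken from the paper) and are chosen only so that the input-to-latent size ratio matches the 1:585 figure; the weights are randomly initialised stand-ins for a trained model.

```python
import numpy as np

INPUT_DIM = 23_400   # flattened spectrogram size (hypothetical)
LATENT_DIM = 40      # 23400 / 40 = 585, matching the 1:585 ratio

rng = np.random.default_rng(0)
# Untrained placeholder weights; a real deployment would load trained parameters.
W_mu = rng.normal(0, 0.01, (INPUT_DIM, LATENT_DIM))
W_logvar = rng.normal(0, 0.01, (INPUT_DIM, LATENT_DIM))
W_dec = rng.normal(0, 0.01, (LATENT_DIM, INPUT_DIM))

def encode(x):
    """Map a flattened spectrogram to Gaussian latent parameters."""
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps (the VAE reparameterisation trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Reconstruct the spectrogram from the compact latent code."""
    return z @ W_dec

x = rng.standard_normal(INPUT_DIM)
mu, logvar = encode(x)
z = reparameterize(mu, logvar)   # this 40-value code is what gets transmitted
x_hat = decode(z)                # reconstruction on the receiving side
print(INPUT_DIM // LATENT_DIM)   # compression ratio: 585
```

On a battery-constrained headset, only the encoder would run on-device and only `z` would be transmitted, which is the source of the power and bandwidth savings the abstract describes.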