Keywords: EEG, Deep Learning
TL;DR: We explore the transferability of audio models and datasets to the EEG classification domain, significantly exceeding prior state-of-the-art performance in abnormal EEG classification.
Abstract: EEG signal analysis and audio processing, though distinct in application, share inherent structural similarities in their data patterns. Recognizing this parallel, our study pioneers the application of two prominent audio processing models, PaSST and LEAF, to EEG signal classification.
In our experiments, the adapted PaSST and LEAF models delivered exceptional performance on the Temple University Hospital Abnormal EEG Corpus (TUAB). Specifically, PaSST achieved an impressive accuracy of 95.7\%, while LEAF registered 94.0\%, both substantially outstripping previously established benchmarks. Such achievements underscore the potential of tapping into cross-domain models, particularly from the audio sector, for advancing EEG research.
Notably, while these larger audio models delivered unparalleled results, realizing their full potential required addressing the limited volume of available EEG data. We therefore introduced pre-training strategies drawing on diverse datasets, further improving performance. With these refinements, PaSST reached a landmark accuracy of 96.1\% on the TUAB dataset, marking a significant stride forward in EEG signal processing.
By leveraging the intrinsic resemblance between EEG and audio signals, we have successfully repurposed these audio models. We recommend further work exploring the transferability of machine learning audio techniques to healthcare time-series tasks.
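As a hedged illustration of the EEG–audio parallel the abstract describes (not the authors' exact pipeline), an EEG channel can be treated as a 1-D waveform and converted into a time–frequency spectrogram, the same front-end representation that audio models such as PaSST consume. The sampling rate, window sizes, and synthetic signal below are assumptions for demonstration only:

```python
import numpy as np
from scipy import signal

# Hypothetical sketch: one EEG channel sampled at 250 Hz (a common EEG rate),
# here synthesized as a 10 Hz alpha-like rhythm plus noise.
fs = 250                                  # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)              # 10 seconds of signal
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

# Short-time Fourier spectrogram: a 2-D (frequency x time) "image" that an
# audio CNN or transformer can ingest, analogous to an audio mel-spectrogram.
freqs, times, spec = signal.spectrogram(eeg, fs=fs, nperseg=128, noverlap=64)
print(spec.shape)  # (n_freq_bins, n_time_frames)
```

In practice, an audio model's input front end (e.g., its mel filterbank or learned frontend in LEAF's case) would replace this plain spectrogram, but the structural idea — a 1-D biosignal mapped into a time–frequency representation — is the same.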
Track: 4. AI-based clinical decision support systems
Registration Id: XXN948VNG7H
Submission Number: 296