Abstract: Brain-computer interface (BCI) technology has promising applications as an intuitive communication tool and in fields such as language rehabilitation. This study aims to decode human speech intentions by analyzing EEG signals recorded during actual and imagined speech. EEG data were collected using a 64-channel system and preprocessed to remove artifacts. Automatic speech recognition (ASR) was used to extract precise speech onset times, generating time-specific speech annotations aligned with the EEG data. Pretrained Word2Vec embeddings were integrated to provide semantic context, combining neural signals with high-level linguistic features. Support vector machine (SVM) and linear discriminant analysis (LDA) classifiers were employed for decoding. The results demonstrate that integrating speech annotations improves decoding accuracy, even for imagined speech, highlighting the potential of BCI technology for advanced applications in communication and rehabilitation.
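The decoding setup described above — concatenating per-trial EEG features with pretrained word embeddings, then classifying with SVM and LDA — can be illustrated with a minimal sketch. This is not the authors' actual pipeline: the feature dimensions, random stand-in data, and binary labels are assumptions for illustration only; in practice the EEG features would come from preprocessed, annotation-aligned epochs and the embeddings from a pretrained Word2Vec model.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_eeg_feats, embed_dim = 120, 64, 50   # hypothetical sizes

# Stand-ins: per-trial EEG features (e.g., one value per channel) and
# Word2Vec embeddings of the spoken/imagined word; random data here.
eeg = rng.normal(size=(n_trials, n_eeg_feats))
emb = rng.normal(size=(n_trials, embed_dim))
y = rng.integers(0, 2, size=n_trials)            # binary intent labels

# Combine neural signals with high-level linguistic features.
X = np.hstack([eeg, emb])

svm = make_pipeline(StandardScaler(), SVC(kernel="linear"))
lda = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())

svm_acc = cross_val_score(svm, X, y, cv=5).mean()
lda_acc = cross_val_score(lda, X, y, cv=5).mean()
print(f"SVM accuracy: {svm_acc:.2f}, LDA accuracy: {lda_acc:.2f}")
```

With random features the cross-validated accuracies hover near chance; with real annotation-aligned EEG epochs and semantic embeddings, the abstract reports that accuracy improves over EEG features alone.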
External IDs: dblp:conf/bci3/JangPKYLJ25