Keywords: LLM, EEG, BCI, transformer
TL;DR: This paper introduces EEGTrans, a transformer-based generative model for sequentially generating synthetic EEG signals.
Abstract: Recent advances in Large Language Models (LLMs) have been significant, driven largely by improvements in network architecture, particularly the transformer model. With access to large training datasets, LLMs can be trained in an unsupervised manner and still achieve impressive results in generating coherent output. This study introduces EEGTrans, a transformer-based generative model designed to sequentially generate synthetic electroencephalogram (EEG) signals. Given the inherent noise in EEG data, we employ a quantized autoencoder that compresses the signals into discrete codes, effectively capturing their temporal features and enabling generalization across diverse datasets. The encoder of EEGTrans processes EEG signals as input, while its decoder autoregressively generates discrete codes. We evaluate our method in a motor imagery Brain-Computer Interface (BCI) application, where merging data across datasets is particularly challenging due to experimental differences. Our results demonstrate that the synthetic EEG data effectively captures temporal patterns while preserving the complexity and power spectrum of the original signals. Moreover, classification results show that incorporating synthetic data improves performance, even surpassing that of models based on Generative Adversarial Networks (GANs). These findings highlight the potential of transformer-based generative models to generalize across multiple datasets and produce high-quality synthetic EEG signals.
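The abstract outlines a two-stage design: a quantized autoencoder that discretizes EEG into codes, followed by an encoder-decoder transformer that generates those codes autoregressively. The PyTorch sketch below is one plausible reading of that description; all class names (QuantizedAutoencoder, EEGTransSketch), layer counts, kernel sizes, and hyperparameters (latent_dim, codebook_size, d_model) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class QuantizedAutoencoder(nn.Module):
    """Stage 1 (assumed VQ-style bottleneck): compress noisy EEG into discrete codes."""

    def __init__(self, n_channels=22, latent_dim=64, codebook_size=512):
        super().__init__()
        # Strided 1-D convolutions downsample the signal along the time axis.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, latent_dim, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv1d(latent_dim, latent_dim, kernel_size=4, stride=2, padding=1),
        )
        self.codebook = nn.Embedding(codebook_size, latent_dim)
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(latent_dim, latent_dim, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(latent_dim, n_channels, kernel_size=4, stride=2, padding=1),
        )

    def quantize(self, z):
        # z: (batch, latent_dim, time) -> nearest codebook entry per time step.
        z = z.permute(0, 2, 1)                        # (batch, time, latent_dim)
        dists = torch.cdist(z, self.codebook.weight)  # (batch, time, codebook_size)
        codes = dists.argmin(-1)                      # discrete code indices
        z_q = self.codebook(codes)
        z_q = z + (z_q - z).detach()                  # straight-through gradient
        return z_q.permute(0, 2, 1), codes

    def forward(self, x):
        z_q, codes = self.quantize(self.encoder(x))
        return self.decoder(z_q), codes


class EEGTransSketch(nn.Module):
    """Stage 2: encoder reads a source code sequence; decoder predicts codes autoregressively."""

    def __init__(self, codebook_size=512, d_model=128, max_len=1024):
        super().__init__()
        self.embed = nn.Embedding(codebook_size + 1, d_model)  # +1 for a BOS token
        self.pos = nn.Embedding(max_len, d_model)              # learned positional embedding
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=3, num_decoder_layers=3,
            batch_first=True,
        )
        self.head = nn.Linear(d_model, codebook_size)

    def embed_with_pos(self, codes):
        t = torch.arange(codes.size(1), device=codes.device)
        return self.embed(codes) + self.pos(t)

    def forward(self, src_codes, tgt_codes):
        # Teacher forcing: tgt_codes is the target code sequence with BOS prepended.
        mask = nn.Transformer.generate_square_subsequent_mask(
            tgt_codes.size(1)).to(tgt_codes.device)
        h = self.transformer(self.embed_with_pos(src_codes),
                             self.embed_with_pos(tgt_codes),
                             tgt_mask=mask)
        return self.head(h)  # next-code logits, trained with cross-entropy
```

In this reading, the autoencoder would be trained first (reconstruction loss plus the usual VQ commitment terms, not shown), after which the transformer is trained on the resulting code sequences; at sampling time, the decoder generates codes one step at a time and the frozen autoencoder decoder maps them back to EEG waveforms.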
Supplementary Material: pdf
Primary Area: applications to neuroscience & cognitive science
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 7280