Abstract: Decoding the human brain has long been a goal of neuroscientists and Artificial Intelligence researchers alike. Reconstructing visual images from brain Electroencephalography (EEG) signals has garnered considerable interest due to its applications in brain-computer interfacing. This study proposes a two-stage method: the first stage obtains EEG-derived features for robust learning of deep representations, and the second utilizes the learned representations for image generation and classification. We demonstrate
the generalizability of our feature extraction pipeline across
three different datasets using deep-learning architectures
with supervised and contrastive learning methods. We further performed a zero-shot EEG classification task to support the generalizability claim. We observed that a subject-invariant, linearly separable visual representation learned from EEG data alone in a unimodal setting yields better k-means accuracy than a joint representation learned between EEG and images. Finally, we
propose a novel framework to transform unseen images into the EEG space and reconstruct approximations of them, showcasing the potential for image reconstruction from EEG signals. Our proposed image synthesis method from EEG achieves inception score improvements of 62.9% and 36.13% on the EEGCVPR40 and Thoughtviz datasets, respectively, surpassing state-of-the-art GAN-based performance.