Recognizing Natural Images From EEG With Language-Guided Contrastive Learning

Published: 2025 · Last Modified: 04 Nov 2025 · IEEE Trans. Neural Networks Learn. Syst. 2025 · CC BY-SA 4.0
Abstract: Electroencephalography (EEG), known for its convenient noninvasive acquisition but moderate signal-to-noise ratio, has recently gained much attention for its potential to decode image information. However, previous works have not provided sufficient evidence for this task, being limited primarily in performance and biological plausibility. In this work, we first introduce a self-supervised framework to demonstrate the feasibility of recognizing images from EEG signals. Contrastive learning is leveraged to align the representations of EEG responses with those of image stimuli. Language descriptions of the stimuli, generated by large language models (LLMs), then help guide the learning of core semantic information. With this framework, we attain significantly above-chance results on the THINGS-EEG2 dataset, achieving a top-1 accuracy of 19.7% and a top-5 accuracy of 51.5% on challenging 200-way zero-shot tasks. Furthermore, we conduct thorough experiments to characterize human visual responses in EEG from temporal, spatial, spectral, and semantic perspectives. These results provide evidence of the feasibility and plausibility of EEG-based image recognition, substantiated by comparative studies with the THINGS-Magnetoencephalography (MEG) dataset. The findings offer valuable insights for neural decoding and for real-world applications of brain-computer interfaces (BCIs), such as health care and robot control. The code is available at https://github.com/eeyhsong/NICE-LLM.
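The contrastive alignment the abstract describes can be sketched as a CLIP-style symmetric InfoNCE objective over paired EEG and image embeddings. The sketch below is illustrative, not the paper's actual implementation: the function name, temperature value, and embedding shapes are assumptions, and the real framework additionally incorporates LLM-generated text embeddings.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(eeg_emb: torch.Tensor,
                               img_emb: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss aligning EEG and image embeddings.

    eeg_emb, img_emb: (batch, dim) tensors where row i of each is a
    matched EEG/image pair. Temperature 0.07 is an illustrative default.
    """
    # L2-normalize so the dot product is cosine similarity
    eeg = F.normalize(eeg_emb, dim=-1)
    img = F.normalize(img_emb, dim=-1)

    # (batch, batch) similarity matrix; matched pairs lie on the diagonal
    logits = eeg @ img.t() / temperature
    targets = torch.arange(eeg.size(0), device=logits.device)

    # Symmetric loss: EEG-to-image and image-to-EEG retrieval directions
    loss_e2i = F.cross_entropy(logits, targets)
    loss_i2e = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_e2i + loss_i2e)
```

At test time, the same cosine-similarity scores support the 200-way zero-shot evaluation: an EEG embedding is compared against the embeddings of all 200 candidate images, and the top-1/top-5 ranked candidates determine accuracy.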