EEG-ImageNet: An Electroencephalogram Dataset and Benchmarks with Image Visual Stimuli of Multi-Granularity Labels

27 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: EEG, visual stimuli, computer vision, multi-modality, object classification, image generation
TL;DR: a new, large EEG dataset with image visual stimuli, plus benchmarks
Abstract: Exploring how brain activity translates into visual perception offers valuable insights into how the biological visual system represents the world. Recent advances have enabled effective image classification and high-quality reconstruction from brain signals obtained through functional magnetic resonance imaging (fMRI) or magnetoencephalography (MEG). However, the cost and bulkiness of these technologies hinder their practical application. In contrast, electroencephalography (EEG) offers ease of use, affordability, high temporal resolution, and non-invasive operation, yet it remains underutilized in related research due to a shortage of comprehensive datasets. To fill this gap, we introduce EEG-ImageNet, a novel EEG dataset featuring recordings from 16 participants exposed to 4000 images sourced from the ImageNet dataset, offering five times as many EEG-image pairs as existing benchmarks. EEG-ImageNet includes image stimuli labeled at multiple levels of granularity, comprising 40 images with coarse-grained labels and 40 with fine-grained labels. We establish benchmarks for both object classification and image reconstruction on this dataset. Experiments with several commonly used models show that the best-performing models achieve object classification accuracy of around 60% and image reconstruction with two-way identification of around 64%. These findings highlight the dataset's potential to advance EEG-based visual brain-computer interfaces, deepen our understanding of visual perception in biological systems, and suggest promising applications for improving machine vision models.
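For readers unfamiliar with the reconstruction metric reported in the abstract, below is a minimal sketch of two-way identification as it is commonly computed in brain-decoding work: a reconstruction counts as correct when it is more similar to its own ground-truth image than to a random distractor, with similarity measured in a shared feature space (e.g., CLIP embeddings). The function name, the choice of feature space, and cosine similarity are illustrative assumptions, not the authors' evaluation code.

```python
import numpy as np

def two_way_identification(recon_feats, true_feats, n_trials=1000, seed=0):
    """Fraction of trials where a reconstruction is closer (by cosine
    similarity) to its own ground-truth image than to a random distractor.

    recon_feats, true_feats: (N, D) arrays of image features, row i of
    recon_feats being the reconstruction of the image in row i of true_feats.
    """
    rng = np.random.default_rng(seed)
    # L2-normalize rows so plain dot products are cosine similarities.
    r = recon_feats / np.linalg.norm(recon_feats, axis=1, keepdims=True)
    t = true_feats / np.linalg.norm(true_feats, axis=1, keepdims=True)
    n, correct = len(r), 0
    for _ in range(n_trials):
        i = rng.integers(n)                    # reconstruction under test
        j = (i + 1 + rng.integers(n - 1)) % n  # random distractor, j != i
        if r[i] @ t[i] > r[i] @ t[j]:          # true pairing more similar?
            correct += 1
    return correct / n_trials
```

Under this reading, the reported ~64% means that in roughly two of three trials the true reconstruction-image pairing is more similar than a chance pairing (chance level is 50%).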
Supplementary Material: zip
Primary Area: datasets and benchmarks
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9655