Perceptogram: Visual Reconstruction from EEG Using Image Generative Models

27 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: visual-evoked potentials, electroencephalography, EEG, visual reconstruction, visual representations, spatiotemporal semantic map
TL;DR: We achieve SOTA reconstruction performance with a simple linear model, and we identify the spatiotemporal EEG features relevant to visual semantics.
Abstract: In this work, we reconstruct viewed images from EEG recordings with state-of-the-art quantitative reconstruction performance, using a linear decoder that maps the EEG to image latents. We choose latent diffusion guided by CLIP embeddings as the primary method of image reconstruction, as it is currently the most effective at capturing visual semantics. We also explore reconstruction from a latent space of PCA and ICA components, which capture luminance- and hue-related information in the EEG. The linear model provides interpretable EEG features relevant for differentiating general semantic categories of the images. We create spatiotemporal semantic maps that reflect the temporal evolution of class-relevant semantic information.
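The core decoding step described in the abstract, a linear map from EEG features to image latents, can be illustrated with a minimal sketch. This is not the authors' code: the data here is synthetic, and the trial counts, channel counts, and the choice of closed-form ridge regression are all assumptions for illustration; only the latent dimensionality (768, as in CLIP ViT-L/14) is a plausible concrete target.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 500, 32, 50   # hypothetical EEG dimensions
clip_dim = 768                                # e.g. CLIP ViT-L/14 embedding size
n_feat = n_channels * n_times

# Synthetic stand-ins: flattened EEG trials and their "CLIP" latent targets
X = rng.standard_normal((n_trials, n_feat))
W_true = 0.05 * rng.standard_normal((n_feat, clip_dim))
Y = X @ W_true + 0.1 * rng.standard_normal((n_trials, clip_dim))

# Closed-form ridge regression: W = (X^T X + alpha * I)^{-1} X^T Y
alpha = 10.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)
Y_hat = X @ W

# Cosine similarity between predicted and target latents, a common
# retrieval-style metric in reconstruction work
cos = np.sum(Y_hat * Y, axis=1) / (
    np.linalg.norm(Y_hat, axis=1) * np.linalg.norm(Y, axis=1))
print(f"mean cosine similarity (training data): {cos.mean():.3f}")
```

In a real pipeline the predicted latents would then condition a pretrained latent diffusion model to generate the reconstructed image; the linear weights `W` are also what makes the EEG features interpretable, since each latent dimension is an explicit weighted sum over channels and time points.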
Supplementary Material: pdf
Primary Area: applications to neuroscience & cognitive science
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 9562