Keywords: Brain encoding and decoding, Representation alignment, Autoencoder, Electrophysiology, Vision
Abstract: Modeling the bidirectional mapping between visual stimuli and neural activity is critical for both neuroscience and brain–computer interfaces (BCIs). Although significant progress has been made in addressing visual encoding and decoding independently, **unified latent representations for the bidirectional mapping remain lacking**. Here, we propose **BrainAE**, an autoencoder-based framework designed for both visual encoding and decoding. Contrastive alignment with image models drives the latent features **toward a shared representation space of visual stimuli and neural responses**. Once trained, the model supports **stimulus-to-brain encoding**, **brain-to-stimulus decoding**, and **whole-brain signal reconstruction**. We extensively evaluate the model on electrophysiological data, including human electroencephalography (EEG) and magnetoencephalography (MEG) as well as macaque multi-unit activity (MUA), spanning non-invasive and invasive recordings, macro- and micro-scales, and multiple species. Results demonstrate competitive encoding and decoding performance and reveal spatial, temporal, and semantic patterns consistent with established neuroscience findings. BrainAE provides a methodological foundation for probing brain function and developing BCIs.
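The abstract does not specify the form of the contrastive alignment objective; a minimal sketch of one common choice for aligning two representation spaces (a symmetric CLIP-style InfoNCE loss between neural latents and image-model embeddings; the function and variable names here are hypothetical, and the paper's actual objective may differ) might look like:

```python
import torch
import torch.nn.functional as F


def contrastive_alignment_loss(neural_latents, image_embeddings, temperature=0.07):
    """Symmetric InfoNCE loss pulling paired neural latents and image
    embeddings together in a shared space (a common CLIP-style choice,
    assumed here for illustration).

    neural_latents:   (batch, dim) latents from the autoencoder's encoder
    image_embeddings: (batch, dim) embeddings from a pretrained image model
    """
    z = F.normalize(neural_latents, dim=-1)
    v = F.normalize(image_embeddings, dim=-1)
    logits = z @ v.T / temperature                       # (batch, batch) similarities
    targets = torch.arange(z.size(0), device=z.device)   # matched pairs lie on the diagonal
    # Average the neural-to-image and image-to-neural cross-entropy terms.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.T, targets))


# Example: align a batch of 32 latent/embedding pairs of dimension 512.
if __name__ == "__main__":
    loss = contrastive_alignment_loss(torch.randn(32, 512), torch.randn(32, 512))
    print(loss.item())
```

Minimizing such a loss encourages each neural latent to be most similar to the embedding of its own stimulus, which is one way the latent features could be driven toward a shared stimulus–response space as described above.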
Primary Area: applications to neuroscience & cognitive science
Submission Number: 4240