Towards Interpretable Visual Decoding with Attention to Brain Representations

Published: 23 Sept 2025, Last Modified: 09 Oct 2025 · NeurIPS 2025 Workshop BrainBodyFM · CC BY 4.0
Keywords: Brain Decoding, fMRI, Stable Diffusion, Interpretability Analysis
TL;DR: The paper presents NeuroAdapter, which reconstructs images directly from brain activity and uses IBBI interpretability analysis to show how brain regions influence generation.
Abstract: Recent work has demonstrated that complex visual stimuli can be decoded from human brain activity using deep generative models. However, most current approaches rely on mapping brain data into intermediate image or text feature spaces before guiding the generative process, which obscures how responses from different brain areas shape the final reconstruction. In this work, we propose \textit{NeuroAdapter}, a framework that directly conditions a latent diffusion model on brain representations, bypassing the need for intermediate feature spaces. Our method achieves competitive visual reconstruction quality on the Natural Scenes Dataset (NSD). To reveal how different cortical areas influence the unfolding generative trajectory, we contribute an Image–Brain Bi-directional Interpretability framework (\textit{IBBI}), which investigates cross-attention mechanisms across diffusion steps. Our work enables new approaches for interpreting latent diffusion models through the lens of visual processing in the brain.
Submission Number: 51