MindAttention: Foveated Visual Encoding for Neural Response Synthesis and Concept-selective Region Localization

ICLR 2026 Conference Submission 13295 Authors

18 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: Brain Encoding
Abstract: Synthesizing brain activity via generative models to localize concept-selective cortical regions is a promising advance beyond traditional experimental paradigms. However, existing methods largely overlook the spatial selectivity of visual attention: when visual stimuli contain multiple central targets, the spatial selectivity of human attention significantly reduces the signal intensity of unattended targets during neural encoding, suppressing their neural representations and consequently biasing, or breaking, data-driven neural concept localization. To address this *synthesis-attention misalignment* problem, we propose *MindAttention*, a generative brain visual encoding framework that anchors concept representations to foveal gaze position. Grounded in the neuroscientific principle that only high-acuity foveal input reliably drives semantic-level cortical responses, we construct a gaze-conditioned generator: simulated activation of a target concept is triggered only when the corresponding object falls within the foveal field. Experiments show that *MindAttention* significantly outperforms existing generative methods in localization accuracy. Incorporating spatial attention constraints endows the framework with neuro-mechanistic interpretability and cognitive plausibility, establishing a more reliable and biologically grounded paradigm for data-driven exploration of brain concept maps.
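The gaze-conditioned gating described in the abstract can be illustrated with a minimal sketch. This is a hypothetical rendering, not the paper's released code: the function names, the ~2-degree foveal radius, and the pixels-per-degree conversion are all illustrative assumptions about how such a gate could be wired in front of a generator.

```python
import numpy as np

def foveal_gate(gaze_xy, object_centers, foveal_radius_deg=2.0, px_per_deg=30.0):
    """Return a binary gate per object: 1.0 if the object's center lies within
    the foveal field around the current gaze position, else 0.0.

    gaze_xy:          (2,) gaze position in pixels.
    object_centers:   (N, 2) object centers in pixels.
    foveal_radius_deg: assumed foveal extent (~2 degrees of visual angle).
    px_per_deg:        assumed display resolution in pixels per degree.
    """
    radius_px = foveal_radius_deg * px_per_deg
    dists = np.linalg.norm(object_centers - gaze_xy, axis=1)
    return (dists <= radius_px).astype(np.float32)

def gaze_conditioned_activation(concept_features, gate):
    """Suppress concept features of objects outside the fovea before they
    drive the synthesized neural response (elementwise gating)."""
    return concept_features * gate[:, None]

# Usage: two objects in the scene, gaze resting on the first one.
gaze = np.array([320.0, 240.0])
centers = np.array([[325.0, 245.0], [100.0, 400.0]])
feats = np.random.randn(2, 512)  # hypothetical per-object concept embeddings
gated = gaze_conditioned_activation(feats, foveal_gate(gaze, centers))
# Only the fixated object's features survive to trigger simulated activation.
```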
Supplementary Material: zip
Primary Area: applications to neuroscience & cognitive science
Submission Number: 13295