Keywords: brain encoding, fMRI, multi-modal models, multi-modal stimuli, Transformers, videos, speech, language
Abstract: Despite participants engaging with single-modality stimuli, such as viewing images or silent videos, recent work has demonstrated that multi-modal Transformer models can predict visual brain activity impressively well, even with incongruent modality representations. This raises the question of how accurately these multi-modal models can predict brain activity when participants are engaged with multi-modal stimuli. As these models grow increasingly popular, using them to study neural activity offers insight into how our brains respond to multi-modal naturalistic stimuli, i.e., where the brain separates and integrates information from different sensory modalities. We investigate this question by using multiple unimodal models and two types of multi-modal models (cross-modal and jointly pretrained) to determine which type of model aligns better with fMRI brain activity recorded while participants watched movies (videos with audio). We observe that both types of multi-modal models show improved alignment in several language and visual regions. This study also helps identify which brain regions process unimodal versus multi-modal information. We further investigate the impact of removing unimodal features from multi-modal representations and find that information beyond the unimodal embeddings is processed in the visual and language regions. Based on this investigation, we find that the brain alignment of cross-modal models is partially attributed to the video modality, whereas for jointly pretrained models it is partially attributed to both the video and audio modalities. The inability of individual modalities to explain the brain-alignment effectiveness of multi-modal models suggests that multi-modal models capture additional information processed by all brain regions. This serves as strong motivation for the neuroscience community to investigate the interpretability of these models, deepening our understanding of multi-modal information processing in the brain.
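The abstract describes two analyses: voxelwise brain encoding (predicting fMRI responses from model embeddings) and removal of unimodal features from multi-modal representations. Below is a minimal sketch of how such analyses are commonly set up, not the authors' released code; the use of ridge regression, scikit-learn, a random train/test split over time points, and all variable names and shapes are illustrative assumptions.

```python
# Sketch of (1) a voxelwise encoding model and (2) "removing" unimodal
# information from multi-modal embeddings by regressing it out.
# Synthetic data stands in for real movie-fMRI recordings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical shapes: T stimulus time points (movie TRs), embedding dims, V voxels.
T, D_mm, D_uni, V = 1000, 768, 512, 2000
X_multimodal = rng.standard_normal((T, D_mm))  # multi-modal model features
X_unimodal = rng.standard_normal((T, D_uni))   # e.g., video-only model features
Y = rng.standard_normal((T, V))                # fMRI responses

def encoding_score(X, Y, alpha=1.0):
    """Fit a ridge encoding model; return per-voxel Pearson r on held-out data."""
    # shuffle=False keeps train/test temporally contiguous, as is typical for fMRI.
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, shuffle=False)
    Y_hat = Ridge(alpha=alpha).fit(X_tr, Y_tr).predict(X_te)
    Yc = Y_te - Y_te.mean(0)
    Pc = Y_hat - Y_hat.mean(0)
    return (Yc * Pc).sum(0) / (
        np.linalg.norm(Yc, axis=0) * np.linalg.norm(Pc, axis=0) + 1e-8
    )

def remove_unimodal(X_mm, X_uni):
    """Project out the part of X_mm linearly predictable from X_uni; keep the residual."""
    W, *_ = np.linalg.lstsq(X_uni, X_mm, rcond=None)
    return X_mm - X_uni @ W

r_full = encoding_score(X_multimodal, Y)
r_residual = encoding_score(remove_unimodal(X_multimodal, X_unimodal), Y)
# If r_residual stays well above chance, the multi-modal embedding carries
# brain-relevant information beyond the unimodal features, per the abstract's argument.
print(f"mean r (full): {r_full.mean():.3f}, mean r (residual): {r_residual.mean():.3f}")
```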
Primary Area: Neuroscience and cognitive science (neural coding, brain-computer interfaces)
Submission Number: 5298