Advanced MEG Analysis of Auditory and Linguistic Encoding in Spoken Language Processing

ICLR 2024 Workshop TS4H, Submission 6

Published: 08 Mar 2024, Last Modified: 31 Mar 2024. TS4H Poster. License: CC BY 4.0
Keywords: Auditory encoding, Linguistic processing, Time-frequency decomposition, Computational neuroscience
TL;DR: Exploring brain processing of language using advanced encoding MEG techniques to compare auditory and linguistic neural strategies.
Abstract: In this work, we explore brain responses related to language processing using neural activity elicited by auditory stimuli and measured with magnetoencephalography (MEG). We develop audio (i.e., stimulus)-to-MEG encoders using both time-frequency decompositions and latent representations based on wav2vec2 embeddings, as well as text-to-MEG encoders based on CLIP and GPT-2 embeddings, to predict brain responses from the audio stimuli alone. The analysis of MEG signals reveals a clear encoding of the audio stimulus within the MEG data, evidenced by a strong correspondence between real and predicted brain activity. This correspondence was highest in lateral regions for vocal features and in frontal regions for textual features derived from CLIP and GPT-2 embeddings.
Submission Number: 6
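The encoding approach described in the abstract — fitting a linear model from stimulus embeddings to MEG channels and scoring it by the correlation between real and predicted activity — can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the dimensions, the ridge penalty, and the per-channel Pearson score are assumptions standing in for wav2vec2/CLIP/GPT-2 features aligned to real MEG recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: T time samples, D embedding dims, C MEG channels.
T, D, C = 500, 32, 10

# Synthetic stand-ins for stimulus embeddings (e.g. wav2vec2 features
# aligned to the MEG sampling rate) and the measured MEG signal.
X = rng.standard_normal((T, D))
W_true = rng.standard_normal((D, C))
Y = X @ W_true + 0.5 * rng.standard_normal((T, C))

# Train/test split in time.
split = int(0.8 * T)
X_tr, X_te = X[:split], X[split:]
Y_tr, Y_te = Y[:split], Y[split:]

# Closed-form ridge regression: W = (X'X + alpha*I)^-1 X'Y.
alpha = 1.0
W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(D), X_tr.T @ Y_tr)
Y_pred = X_te @ W

def pearson_per_channel(a, b):
    # Encoding score: Pearson correlation between real and predicted
    # activity, computed independently for each channel (sensor).
    a = a - a.mean(axis=0)
    b = b - b.mean(axis=0)
    return (a * b).sum(axis=0) / (
        np.sqrt((a**2).sum(axis=0)) * np.sqrt((b**2).sum(axis=0))
    )

scores = pearson_per_channel(Y_te, Y_pred)
print(scores.shape, scores.mean())
```

Mapping such per-channel scores back onto sensor or source locations is what yields the spatial pattern reported in the abstract (lateral vs. frontal regions).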