Abstract: Aligning brain activity with machine-learning models offers a way to understand how the brain represents the world and how such models relate to the brain. In this paper, we explore the functional correspondence between music-specific information and brain activity. Using fMRI, we reconstruct music either via retrieval or by conditioning the MusicLM generator on brain data. The generated music resembles the musical stimuli that human subjects experienced with respect to semantic properties such as genre, instrumentation, and mood. Using an encoding-model analysis, we also demonstrate that semantic information derived from music, as well as information derived from purely textual descriptions of the music stimuli, is represented in largely overlapping regions around the auditory cortex. Furthermore, we show that features from MusicLM predict brain activity within the primary auditory cortex more accurately than features from models that do not specifically focus on music.

How the brain represents high-level musical meaning is unclear. Here, the authors reconstruct heard music from fMRI data and show that both music- and text-derived semantic features are predictive of activity in the auditory cortex.
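The encoding-model analysis mentioned in the abstract is, in its general form, a regularized linear regression from stimulus features (e.g., music-derived embeddings) to voxelwise fMRI responses, evaluated by prediction accuracy on held-out data. The sketch below illustrates that general recipe only; the array shapes, the choice of ridge regression, and the correlation-based scoring are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch of a voxelwise encoding analysis (assumed setup, not the
# authors' exact implementation): predict fMRI responses from stimulus
# features with ridge regression, then score each voxel on held-out trials.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_features, n_voxels = 480, 128, 1000   # hypothetical sizes
X = rng.standard_normal((n_trials, n_features))   # stimulus feature matrix
Y = rng.standard_normal((n_trials, n_voxels))     # measured voxel responses

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

model = Ridge(alpha=100.0)   # regularization strength would be tuned in practice
model.fit(X_tr, Y_tr)
Y_pred = model.predict(X_te)

# Encoding accuracy per voxel: Pearson correlation between predicted and
# measured responses on the held-out trials.
r = np.array([
    np.corrcoef(Y_pred[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)
])
print("median voxelwise correlation:", np.median(r))
```

Comparing such voxelwise accuracies across feature sets (e.g., MusicLM-derived versus non-music features) is the standard way to ask which representation better accounts for activity in a given region.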
External IDs: doi:10.1038/s41467-025-66731-7