Quantifying the redundancy between prosody and text

Published: 07 Oct 2023, Last Modified: 01 Dec 2023
Venue: EMNLP 2023 Main
Submission Type: Regular Long Paper
Submission Track: Linguistic Theories, Cognitive Modeling, and Psycholinguistics
Submission Track 2: Speech and Multimodality
Keywords: Prosody, Psycholinguistics, Language Models, Information Theory
TL;DR: Using advanced language models, we estimate how much prosodic information is implicitly encoded in text and how much is redundant with the linguistic content
Abstract: Prosody---the suprasegmental component of speech, including pitch, loudness, and tempo---carries critical aspects of meaning. However, the relationship between the information conveyed by prosody and that conveyed by the words themselves remains poorly understood. We use large language models (LLMs) to estimate how much information is redundant between prosody and the words. Using a large spoken corpus of English audiobooks, we extract prosodic features aligned to individual words and test how well they can be predicted from LLM embeddings, compared to non-contextual word embeddings. We find a high degree of redundancy between the information carried by the words and prosodic information across several prosodic features, including intensity, duration, pauses, and pitch contours. Furthermore, a word's prosodic information is redundant with the word itself as well as with both the preceding and following context. Still, we observe that prosodic features cannot be fully predicted from text, suggesting that prosody carries information above and beyond the words. Along with this paper, we release a general-purpose data processing pipeline for quantifying the relationship between linguistic information and extra-linguistic features.
Submission Number: 4831
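
The abstract outlines the core analysis (predicting word-aligned prosodic features from contextual vs. non-contextual embeddings) but this page gives no implementation details. Below is a minimal sketch of that comparison, assuming ridge regression with cross-validated R^2 as the predictability measure; the random placeholder arrays, dimensions, and metric are assumptions for illustration, and the paper's actual features, models, and estimators may differ.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Placeholder data: in the setup described by the abstract, these would be
# word-aligned prosodic features (e.g., per-word duration) and embeddings
# from an LLM (contextual) vs. a static lookup table (non-contextual).
rng = np.random.default_rng(0)
n_words, d_ctx, d_static = 5000, 768, 300
contextual_emb = rng.normal(size=(n_words, d_ctx))   # e.g., LLM hidden states
static_emb = rng.normal(size=(n_words, d_static))    # e.g., GloVe-style vectors
duration = rng.normal(size=n_words)                  # per-word prosodic target

def predictability(X: np.ndarray, y: np.ndarray) -> float:
    """Cross-validated R^2 of a ridge regressor from embeddings to a
    prosodic feature; higher R^2 = more redundancy with the text."""
    return cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2").mean()

r2_ctx = predictability(contextual_emb, duration)
r2_static = predictability(static_emb, duration)

# The gap between the two scores reflects how much prosodic information is
# carried by context beyond word identity alone; an R^2 well below 1
# suggests prosody carries information not recoverable from the text.
print(f"contextual R^2: {r2_ctx:.3f}, static R^2: {r2_static:.3f}")
```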