Multimodal Embeddings From Language Models for Emotion Recognition in the Wild

IEEE Signal Process. Lett., 2021
Abstract: Word embeddings such as ELMo and BERT model word usage with greater efficacy by learning contextualized representations on large-scale language corpora, yielding significant performance improvements across many natural language processing tasks. In this work, we integrate acoustic information into contextualized lexical embeddings by adding a parallel acoustic stream to the bidirectional language model. This multimodal language model is trained on spoken language data that includes both text and audio modalities. By applying the resulting multimodal embeddings to the task of speaker emotion recognition, we show that they integrate paralinguistic cues into word meanings and provide vital affective information.
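The abstract describes a bidirectional language model augmented with a parallel acoustic stream, but this listing gives no implementation details. The sketch below is only illustrative of that idea: it assumes a PyTorch biLSTM language model (ELMo-style) whose word embeddings are fused with a projection of word-aligned acoustic features. The class name MultimodalBiLM, the additive fusion, and all dimensions are hypothetical and are not taken from the paper.

import torch
import torch.nn as nn

class MultimodalBiLM(nn.Module):
    """Illustrative biLSTM language model with a parallel acoustic stream."""
    def __init__(self, vocab_size, word_dim=300, acoustic_dim=40, hidden_dim=512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        # Hypothetical parallel stream: project word-aligned acoustic features
        # (e.g., frame-averaged spectral features per word) into the lexical space.
        self.acoustic_proj = nn.Linear(acoustic_dim, word_dim)
        # Bidirectional LSTM over the fused (lexical + acoustic) inputs.
        self.bilstm = nn.LSTM(word_dim, hidden_dim, num_layers=2,
                              bidirectional=True, batch_first=True)
        # Forward/backward next-word prediction heads for language-model training.
        self.fwd_head = nn.Linear(hidden_dim, vocab_size)
        self.bwd_head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids, acoustic_feats):
        # token_ids: (batch, seq); acoustic_feats: (batch, seq, acoustic_dim)
        fused = self.word_emb(token_ids) + self.acoustic_proj(acoustic_feats)
        states, _ = self.bilstm(fused)        # (batch, seq, 2 * hidden_dim)
        fwd, bwd = states.chunk(2, dim=-1)    # split forward/backward directions
        return states, self.fwd_head(fwd), self.bwd_head(bwd)

# Usage: the contextual states could serve as multimodal embeddings for a
# downstream emotion classifier (shapes here are illustrative).
model = MultimodalBiLM(vocab_size=10000)
tokens = torch.randint(0, 10000, (2, 12))
audio = torch.randn(2, 12, 40)
embeddings, fwd_logits, bwd_logits = model(tokens, audio)
print(embeddings.shape)  # torch.Size([2, 12, 1024])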