Do Slides Help? Multi-modal Context for Automatic Transcription of Conference Talks

ACL ARR 2025 February Submission2309 Authors

14 Feb 2025 (modified: 09 May 2025) · License: CC BY 4.0
Abstract: State-of-the-art (SOTA) Automatic Speech Recognition (ASR) systems rely mainly on acoustic information while disregarding additional multimodal context. However, visual information is essential for disambiguation and adaptation. While most prior work focuses on speaker images to handle noisy conditions, this work integrates presentation slides for the use case of scientific presentations. First, we create a benchmark for multi-modal presentations, including an automatic analysis of how domain-specific terminology is transcribed. Next, we explore methods for augmenting speech models with multi-modal information. We mitigate the lack of datasets with accompanying slides through a suitable data augmentation approach. Finally, we train a model on the augmented dataset, achieving a relative word error rate reduction of approximately 49% across all words and 15% on domain-specific terms compared to the baseline model.
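The abstract reports relative reductions in word error rate (WER). The sketch below is illustrative only (not the authors' evaluation code) and shows how word-level WER and a relative reduction can be computed; the baseline and improved WER values in the example are hypothetical numbers chosen so that the arithmetic yields roughly the 49% figure quoted above.

```python
def edit_distance(ref, hyp):
    """Word-level Levenshtein distance between two token lists."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)]


def wer(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / max(len(ref), 1)


def relative_reduction(baseline_wer: float, new_wer: float) -> float:
    """Relative WER reduction, e.g. 0.20 -> 0.10 is a 50% relative reduction."""
    return (baseline_wer - new_wer) / baseline_wer


# Hypothetical example values: a baseline WER of 0.206 improving to 0.105
# corresponds to roughly a 49% relative reduction.
print(f"{relative_reduction(0.206, 0.105):.0%}")
```

The same computation restricted to a list of domain-specific terms would give the term-level figure (the reported 15%); how those terms are selected is defined by the paper's benchmark, not by this sketch.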
Paper Type: Long
Research Area: Speech Recognition, Text-to-Speech and Spoken Language Understanding
Research Area Keywords: Multimodal context, Speech Recognition, Data Augmentation
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 2309