Multi-modal Speech Transformer Decoders: When Do Multiple Modalities Improve Accuracy?

Published: 2025 · Last Modified: 06 Jan 2026 · ICME 2025 · License: CC BY-SA 4.0
Abstract: Decoder-only language models over discrete tokens have recently achieved significant success in automatic speech recognition. However, systematic analyses of how individual modalities affect performance in specific scenarios remain limited. In this paper, we investigate the effect of multiple modalities on recognition accuracy using both synthetic and real-world datasets. Our experiments suggest that: (1) Integrating more modalities can increase accuracy, but the benefit depends on the level of auditory noise. We also show, for the first time, the benefit of combining audio, image context, and lip information in a single speech recognition model. (2) Images as a supplementary modality provide their greatest benefit at moderate audio noise levels, and they follow a different trend than inherently synchronized modalities such as lip movements. (3) Performance improves on both synthetic and real-world datasets when a preprocessing step filters the visual input down to its most relevant parts.
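The abstract does not specify the model interface, so the following is only a minimal Python sketch of the general pattern it describes: discrete tokens from audio, image, and lip streams concatenated into one sequence for a decoder-only transformer, with a relevance-based filter standing in for the "most relevant visual information" preprocessing step. All special tokens, token ids, and the relevance scorer are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch only; token-id ranges, special tokens, and the
# relevance score below are assumptions, not the paper's actual design.
from typing import List


def select_relevant_image_tokens(image_tokens: List[List[int]],
                                 relevance: List[float],
                                 top_k: int = 1) -> List[int]:
    """Keep tokens only from the top_k images ranked by a relevance score
    (e.g., an audio-image similarity); a stand-in for the paper's
    visual-filtering preprocessing step."""
    ranked = sorted(range(len(image_tokens)), key=lambda i: -relevance[i])
    kept: List[int] = []
    for i in ranked[:top_k]:
        kept.extend(image_tokens[i])
    return kept


def build_decoder_input(audio_tokens: List[int],
                        image_tokens: List[List[int]],
                        image_relevance: List[float],
                        lip_tokens: List[int]) -> list:
    """Concatenate discrete tokens from all modalities into the single
    sequence a decoder-only LM consumes; the transcript would then be
    generated autoregressively after the <text> marker."""
    visual = select_relevant_image_tokens(image_tokens, image_relevance)
    return (["<s>", "<audio>"] + audio_tokens +
            ["<image>"] + visual +
            ["<lip>"] + lip_tokens +
            ["<text>"])  # transcript decoding starts here


# Toy usage with made-up token ids.
seq = build_decoder_input(
    audio_tokens=[101, 102, 103],
    image_tokens=[[201, 202], [211, 212]],
    image_relevance=[0.9, 0.2],  # first image judged most relevant
    lip_tokens=[301, 302],
)
print(seq)
```

Under this reading, findings (1) and (2) amount to ablating the `<image>` and `<lip>` spans at different noise levels, and finding (3) to varying the filter that produces `visual`.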