Bridging Semantics Across Modalities: Decoupled Representation Learning for Audio-Visual Speech Recognition

Linzhi Wu, Xingyu Zhang, Yakun Zhang, Changyan Zheng, Tiejun Liu, Liang Xie, Chengshi Zheng, Erwei Yin

Published: 01 Oct 2025, Last Modified: 10 Nov 2025. Knowledge-Based Systems. License: CC BY-SA 4.0
Highlights:
- A unified speech recognition framework for noise robustness and unseen speakers.
- Offers insight into cross-modal alignment and fusion of linguistic semantics.
- Tailored constraints facilitate modality- and speaker-invariant representations.
- Promising audio-visual speech recognition results across datasets.