MS2SL: Multimodal Spoken Data-Driven Continuous Sign Language Production

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: Sign language translation has made significant strides; however, there is still no viable solution for directly generating sign sequences from spoken content, e.g., text or speech. This paper proposes a unified framework for continuous sign language production, easing communication between sign and non-sign language users. In particular, a sequence diffusion model, conditioned on embeddings extracted from text or speech, generates sign predictions step by step. Moreover, by formulating a joint embedding space for text, audio, and sign, we bind data from the three modalities and leverage their semantic consistency to provide informative feedback signals during model training. This embedding-consistency learning strategy minimizes the reliance on triplet sign language data and allows continuous model refinement even when the audio modality is missing. Experiments on the How2Sign and PHOENIX14T datasets demonstrate that our model achieves competitive performance in producing signs from both speech and text. We will release our implementation code and demos.
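The embedding-consistency idea described in the abstract — aligning text, audio, and sign embeddings in a joint space, and dropping the audio terms when that modality is absent — can be sketched with a symmetric contrastive (InfoNCE-style) loss. This is an illustrative sketch, not the paper's actual implementation; the function names, the use of NumPy, and the specific pairwise loss combination are all assumptions.

```python
import numpy as np

def info_nce(a, b, temperature=0.1):
    """Symmetric InfoNCE between two batches of embeddings.
    Matching rows (same index) are treated as positive pairs."""
    # L2-normalize so the dot product is cosine similarity
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = a @ b.T / temperature
    labels = np.arange(len(a))

    def xent(l):
        # numerically stable cross-entropy with the diagonal as targets
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average the a->b and b->a directions
    return 0.5 * (xent(logits) + xent(logits.T))

def embedding_consistency_loss(text, sign, audio=None):
    """Hypothetical tri-modal consistency loss: always align sign with
    text; add the audio pair terms only when audio is available."""
    loss = info_nce(text, sign)
    if audio is not None:
        loss += info_nce(audio, sign) + info_nce(text, audio)
    return loss
```

Because each InfoNCE term is non-negative, the loss with audio present is at least as large as the text-sign term alone, so training can proceed seamlessly on text-sign pairs when audio is missing.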
Paper Type: long
Research Area: Multimodality and Language Grounding to Vision, Robotics and Beyond
Languages Studied: English, Sign Language