Abstract: Sign Languages (SL) serve as the primary mode of communication for the Deaf and Hard of Hearing communities. Deep learning methods for SL recognition and translation have achieved promising results. However, Sign Language Production (SLP) remains challenging, as the generated motions must be both realistic and semantically precise. Most SLP methods rely on 2D data, which limits their realism. In this work, a diffusion-based SLP model is trained on a curated large-scale dataset of 4D signing avatars and their corresponding text transcripts. The proposed method can generate dynamic sequences of 3D avatars from an unconstrained domain of discourse using a diffusion process built on a novel, anatomically informed graph neural network defined on the SMPL-X body skeleton. Quantitative and qualitative experiments show that the proposed method considerably outperforms previous SLP methods. This work takes an important step towards realistic neural sign avatars, bridging the communication gap between Deaf and hearing communities.
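Although the abstract only names the approach, a minimal sketch may help make the core idea concrete: a denoiser realized as a graph neural network over the SMPL-X kinematic tree, trained with a diffusion-style noise-prediction objective. Everything below is an illustrative assumption, not the authors' implementation: the joint count, the feature size, the chain-shaped parent array standing in for the real SMPL-X skeleton, and the simplified linear noising in place of a full diffusion schedule.

```python
# Minimal sketch (assumptions, not the paper's code): a graph convolution
# over the SMPL-X skeleton used as the denoiser in a diffusion loop.
import torch
import torch.nn as nn

NUM_JOINTS = 55   # SMPL-X body + jaw + eyes + hand joints
FEAT = 6          # e.g., a 6D rotation representation per joint

# Stand-in parent array (a simple chain); a real model would use the
# actual SMPL-X kinematic-tree parents.
PARENTS = [-1] + list(range(NUM_JOINTS - 1))

def build_adjacency(parents):
    """Symmetric, degree-normalized adjacency with self-loops."""
    n = len(parents)
    A = torch.eye(n)
    for joint, parent in enumerate(parents):
        if parent >= 0:
            A[joint, parent] = A[parent, joint] = 1.0
    d = A.sum(-1).rsqrt()
    return d[:, None] * A * d[None, :]

class SkeletonGCN(nn.Module):
    """One graph-convolutional denoising block over the joint graph."""
    def __init__(self, feat, hidden):
        super().__init__()
        self.register_buffer("A", build_adjacency(PARENTS))
        self.lin_in = nn.Linear(feat, hidden)
        self.lin_out = nn.Linear(hidden, feat)
        self.t_embed = nn.Linear(1, hidden)  # timestep conditioning

    def forward(self, x, t):
        # x: (batch, joints, feat); t: (batch,) diffusion time in [0, 1]
        h = self.lin_in(x) + self.t_embed(t[:, None, None].float())
        h = torch.relu(self.A @ h)           # message passing along bones
        return self.lin_out(h)               # predict the added noise

# Toy training step: linear noising stands in for a full DDPM schedule.
model = SkeletonGCN(FEAT, hidden=128)
x0 = torch.randn(2, NUM_JOINTS, FEAT)        # clean pose features
t = torch.rand(2)
noise = torch.randn_like(x0)
xt = (1 - t)[:, None, None] * x0 + t[:, None, None] * noise
loss = ((model(xt, t) - noise) ** 2).mean()  # noise-prediction objective
```

In this reading, the skeleton adjacency restricts message passing to anatomically connected joints, which is one plausible interpretation of the "anatomically informed" graph structure the abstract refers to.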