Empowering sign language communication: Integrating sentiment and semantics for facial expression synthesis
Highlights
• A sentiment-aware method that generates facial expressions for Sign Language using only textual data.
• A graph convolutional decoder architecture that can also be applied to other face-related tasks, such as Talking Head Animation.
• An autoencoder model trained to apply the concept of FID to measuring the quality of generated facial expressions in Sign Language Production.
• Extensive experiments and an ablation study demonstrating the effectiveness of our method in generating facial expressions for sign language.