Empowering sign language communication: Integrating sentiment and semantics for facial expression synthesis

Published: 01 Jan 2024, Last Modified: 12 Nov 2024 · Comput. Graph. 2024 · CC BY-SA 4.0
Abstract: Highlights
• A sentiment-aware method that generates facial expressions for sign language using only textual data.
• A graph convolutional decoder architecture designed to be reusable in other face-related tasks such as Talking Head Animation.
• An autoencoder model trained so that the concept of FID can be applied to measure the quality of generated facial expressions for Sign Language Production (see the sketch after this list).
• Extensive experiments and an ablation study showing the effectiveness of our method in generating facial expressions for sign language.
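The third highlight adapts FID by computing the Fréchet distance between real and generated facial expressions in the latent space of a trained autoencoder. Below is a minimal sketch of such an FID-style metric, not the authors' implementation; the function name and the `encoder` used in the usage comment are illustrative assumptions.

```python
# Sketch: FID-style distance between real and generated facial-expression
# features, assuming the features are latent codes from a pretrained
# autoencoder encoder (assumption, not the paper's code).
import numpy as np
from scipy.linalg import sqrtm

def fid_from_features(real_feats: np.ndarray, gen_feats: np.ndarray) -> float:
    """Fréchet distance between Gaussians fit to two feature sets of shape (N, D)."""
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)
    # Matrix square root of the covariance product; discard tiny imaginary
    # parts introduced by numerical error.
    covmean = sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(np.sum((mu_r - mu_g) ** 2)
                 + np.trace(cov_r + cov_g - 2.0 * covmean))

# Hypothetical usage: `encoder` maps facial-expression frames (e.g. facial
# keypoints) to latent vectors of the trained autoencoder.
# fid = fid_from_features(encoder(real_frames), encoder(generated_frames))
```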