Automatic facial expressions, gaze direction and head movements generation of a virtual agent

Published: 25 Oct 2022, Last Modified: 05 May 2023. GENEA Challenge & Workshop 2022, workshop proceeding.
Keywords: Non-verbal behaviour, behaviour generation, embodied conversational agent, neural networks, adversarial learning, encoder-decoder
Abstract: In this article, we present two models to jointly and automatically generate the head, facial and gaze movements of a virtual agent from acoustic speech features. Two architectures are explored: a Generative Adversarial Network and an Adversarial Encoder-Decoder. Head movements and gaze orientation are generated as 3D coordinates, while facial expressions are generated as action units based on the Facial Action Coding System. A large corpus of almost 4 hours of video, involving 89 different speakers, is used to train our models. We extract the speech and visual features from these videos automatically using existing tools. The models are evaluated objectively, with measures such as density evaluation and a visualisation from PCA reduction, and subjectively through a user perception study. Our proposed methodology shows that, on 15-second sequences, the encoder-decoder architecture drastically improves the perception of the generated behaviours on two criteria: coordination with speech and naturalness. Our code can be found at: https://github.com/aldelb/non-verbal-behaviours-generation.
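The abstract describes an encoder-decoder that maps per-frame acoustic features to a joint behaviour vector (action units plus 3D head and gaze coordinates). The following is a minimal NumPy sketch of that input/output contract only; all dimensions, layer sizes, and the single-layer encoder/decoder are illustrative assumptions, not the trained architecture from the paper (see the linked repository for the actual models).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper):
N_FRAMES = 375               # e.g. a 15-second sequence at 25 fps
N_ACOUSTIC = 26              # acoustic features per frame (e.g. MFCCs + prosody)
N_AU = 17                    # facial action units (FACS-based)
N_BEHAVIOUR = N_AU + 3 + 3   # AUs + head movement (3D) + gaze orientation (3D)
LATENT = 64                  # hypothetical latent size

# Randomly initialised weights standing in for trained parameters.
enc_w = rng.normal(0, 0.1, (N_ACOUSTIC, LATENT))
enc_b = np.zeros(LATENT)
dec_w = rng.normal(0, 0.1, (LATENT, N_BEHAVIOUR))
dec_b = np.zeros(N_BEHAVIOUR)

def generate_behaviour(speech: np.ndarray) -> np.ndarray:
    """Encode per-frame acoustic features, then decode a behaviour vector per frame."""
    z = np.tanh(speech @ enc_w + enc_b)   # encoder: acoustic -> latent
    return z @ dec_w + dec_b              # decoder: latent -> AUs + head + gaze

speech = rng.normal(size=(N_FRAMES, N_ACOUSTIC))
behaviour = generate_behaviour(speech)
aus = behaviour[:, :N_AU]
head = behaviour[:, N_AU:N_AU + 3]
gaze = behaviour[:, N_AU + 3:]
print(behaviour.shape)  # (375, 23)
```

In the adversarial setting the paper describes, this generator would be trained jointly with a discriminator that scores (speech, behaviour) pairs as real or generated; that training loop is omitted here.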