A simple baseline for evaluating Expression Transfer and Anonymisation in Video Transfer

ACII (Workshops and Demos) 2021 · published 2021 (modified: 20 Nov 2022)
Abstract: Video-to-video synthesis methods provide increasingly accessible solutions for training models on privacy-sensitive and limited-size datasets frequently encountered in domains such as affect analysis. However, there are no existing baselines that explicitly measure the extent of reliable expression transfer or privacy preservation in the generated data. In this paper, we evaluate a general-purpose video transfer method, vid2vid, on these two key tasks: expression transfer and anonymisation of identities, as well as its suitability for training affect prediction models. We provide results that form a strong baseline for future comparisons, and further motivate the need for purpose-built methods for conducting expression-preserving video transfer. Our results indicate that a significant limitation of vid2vid's expression transfer arises from conditioning on facial landmarks and optical flow, which do not carry sufficient information to preserve facial expressions. Finally, we demonstrate that vid2vid can adequately anonymise videos in some cases, though not consistently, and that the anonymisation can be improved by applying random perturbations to input landmarks, at the cost of reduced expression transfer.
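The abstract reports that anonymisation improves when random perturbations are applied to the input landmarks, but does not specify the form of the perturbation here. The sketch below assumes i.i.d. Gaussian pixel jitter; the function name `perturb_landmarks`, the `(T, K, 2)` landmark layout, and the `sigma`/`per_frame` parameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def perturb_landmarks(landmarks, sigma=2.0, per_frame=False, rng=None):
    """Jitter 2-D facial landmarks with Gaussian noise (illustrative sketch).

    landmarks: float array of shape (T, K, 2) -- T frames, K landmark
        points, (x, y) pixel coordinates.
    sigma: noise scale in pixels; larger values anonymise more strongly
        but degrade expression transfer (the trade-off reported above).
    per_frame: if False, one offset per landmark is shared across all
        frames, altering apparent face shape while keeping motion smooth;
        if True, each frame is jittered independently.
    """
    rng = rng or np.random.default_rng()
    # Broadcast a single (1, K, 2) offset over all frames unless
    # independent per-frame noise is requested.
    shape = landmarks.shape if per_frame else (1,) + landmarks.shape[1:]
    noise = rng.normal(0.0, sigma, size=shape)
    return landmarks + noise
```

Sharing one offset across frames (the `per_frame=False` default) changes the apparent face geometry while keeping the motion temporally coherent; increasing `sigma` strengthens anonymisation at the cost of expression fidelity, consistent with the trade-off the abstract describes.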