Multi-view Gait Video Synthesis

ACM Multimedia 2022 (modified: 16 Nov 2022)
Abstract: This paper investigates a new fine-grained video generation task, namely Multi-view Gait Video Synthesis, in which a generation model takes a video of a walking human captured from an arbitrary viewpoint and creates multi-view renderings of the subject. The task is particularly challenging because it requires synthesizing visually plausible results while simultaneously preserving the discriminative gait cues needed for identification. To tackle the entanglement of viewpoint, texture, and body structure, we present a network with two collaborative branches that decouples the novel-view rendering process into two streams, one for human appearance (texture) and one for silhouettes (structure). Additionally, prior knowledge from person re-identification and gait recognition is incorporated into the training loss to yield more adequate and accurate dynamic details. Experimental results show that the presented method achieves promising success rates when attacking state-of-the-art gait recognition models, and that it can improve gait recognition systems through effective data augmentation. To the best of our knowledge, this is the first work to manipulate viewpoints in human videos under person-specific behavioral constraints.
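The two-branch decoupling described in the abstract, where appearance (texture) and silhouette (structure) are predicted in separate streams and then fused into a novel-view rendering, can be caricatured in a minimal sketch. All names (`texture_branch`, `structure_branch`, `render_novel_view`) and the toy pixel-level logic are illustrative assumptions, not the authors' actual architecture:

```python
# Illustrative sketch only: in the paper these branches would be learned
# networks; here they are stand-in functions over a 1-D "frame" of pixels.

def texture_branch(frame, target_view):
    """Stand-in for the appearance (texture) stream at the target viewpoint."""
    return [("tex", p, target_view) for p in frame]

def structure_branch(frame, target_view):
    """Stand-in for the silhouette (structure) stream at the target viewpoint."""
    return [("sil", p > 0, target_view) for p in frame]

def render_novel_view(frame, target_view):
    """Fuse the two streams: keep texture only where the silhouette is on."""
    tex = texture_branch(frame, target_view)
    sil = structure_branch(frame, target_view)
    return [t if s[1] else ("bg", 0, target_view) for t, s in zip(tex, sil)]
```

The point of the split is that viewpoint-dependent body structure is handled separately from appearance, so the fused output can change view without corrupting identity-bearing gait shape.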