Qualitative Assessment of Diffusion-Oriented Human Motion Synthesis and Control Methods

CVPR 2024 Workshop HuMoGen Submission 15

23 Mar 2024 (modified: 31 Mar 2024) · CVPR 2024 Workshop HuMoGen Desk Rejected Submission · CC BY 4.0
Keywords: Human Motion Generation, Human Trajectory Control, Motion Diffusion Models
TL;DR: Recent advancements, challenges, and promising diffusion-based models in the field of human motion generation and trajectory control.
Abstract: Human motion generation aims to produce natural human pose sequences and shows immense potential for computer animation. Substantial progress has recently been made in motion data collection technologies and motion generation methods, laying the foundation for growing interest in human motion generation and control, especially for data-hungry deep architectures that were previously impractical in this scope, such as diffusion-based models. Despite this progress, challenges remain due to the wide range of possible movements, the human eye's sensitivity to motion quality, and limited data availability, which lead to solutions that are either low in quality or limited in expressiveness. In this survey, we present a comparative assessment of newly emerged diffusion-based architectures, which appear particularly promising for human motion synthesis among generative approaches. We first provide an overview of how diffusion models operate and discuss techniques for representing human motion, along with commonly used motion capture datasets. We then describe the specifics of the architectures under review, followed by a qualitative comparison of the methods on two mainstream sub-tasks: text-conditioned human motion generation and human motion trajectory control. Finally, we offer insights and highlight unresolved issues, aiming to provide a comprehensive understanding of this evolving field and to spark innovative solutions to its challenges.
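For reference, the diffusion-model operation the survey overviews follows the standard denoising diffusion (DDPM) formulation; the equations below are a generic, illustrative sketch in that standard notation (x_0 denotes a clean motion sequence, c an optional condition such as a text prompt or a target trajectory), not the paper's own derivation.

\[
q(x_t \mid x_{t-1}) = \mathcal{N}\!\big(x_t;\ \sqrt{1-\beta_t}\,x_{t-1},\ \beta_t I\big),
\qquad
p_\theta(x_{t-1} \mid x_t, c) = \mathcal{N}\!\big(x_{t-1};\ \mu_\theta(x_t, t, c),\ \Sigma_\theta(x_t, t, c)\big)
\]

The forward process q gradually corrupts x_0 with Gaussian noise over steps t = 1, ..., T, while the learned reverse process p_\theta denoises step by step under the condition c; text-conditioned generation and trajectory control are typically framed as choices of c in the diffusion-based methods the survey compares.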
Submission Number: 15