2D Motion Generation Using Joint Spatial Information with 2CM-GPT

Published: 01 Jan 2025 · Last Modified: 14 May 2025 · VISIGRAPP (2): VISAPP 2025 · License: CC BY-SA 4.0
Abstract: Various methods for generating human motion from text have been proposed, driven by advances in large language models and diffusion models. However, most research has focused on 3D motion generation. While 3D motion enables realistic representations, creating and collecting datasets with motion-capture technology is costly, and its applicability to downstream tasks, such as pose-guided human video generation, is limited. We therefore propose 2D Convolutional Motion Generative Pre-trained Transformer (2CM-GPT), a method for generating two-dimensional (2D) motion from text. 2CM-GPT builds on the framework of MotionGPT, a 3D motion generation method: it uses a motion tokenizer to convert 2D motion into motion tokens and a language model to learn the relationship between text and motion. Unlike MotionGPT, which applies 1D convolution to process 3D motion, 2CM-GPT applies 2D convolution to process 2D motion. This enables more effective capture of spatial information between joints.
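To make the architectural distinction concrete, here is a minimal PyTorch sketch contrasting the two convolution schemes. All shapes, channel counts, and kernel sizes are illustrative assumptions, not the paper's actual configuration: the 1D variant flattens joints into channels and convolves only over time, while the 2D variant keeps the joint axis spatial so the kernel also mixes neighboring joints.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: B = batch, T = frames, J = joints, C = per-joint coords (x, y)
B, T, J, C = 2, 64, 17, 2
motion = torch.randn(B, T, J, C)

# MotionGPT-style 1D convolution: joints and coordinates are flattened into
# the channel axis, so the kernel slides along time only.
conv1d = nn.Conv1d(in_channels=J * C, out_channels=128, kernel_size=3, padding=1)
x1 = motion.reshape(B, T, J * C).transpose(1, 2)  # (B, J*C, T)
feat1 = conv1d(x1)                                # (B, 128, T)

# 2CM-GPT-style 2D convolution: the kernel slides over both time and the
# joint axis, capturing local spatial structure between adjacent joints.
conv2d = nn.Conv2d(in_channels=C, out_channels=128, kernel_size=3, padding=1)
x2 = motion.permute(0, 3, 1, 2)                   # (B, C, T, J)
feat2 = conv2d(x2)                                # (B, 128, T, J)
```

In the 1D case every joint contributes to every output channel at each timestep, so inter-joint structure must be learned globally; in the 2D case the kernel's joint dimension gives the encoder an explicit local notion of joint adjacency before tokenization.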