Co-speech gestures are essential to non-verbal communication, enhancing both the naturalness and effectiveness of human interaction. Although recent methods have made progress in generating co-speech gesture videos, many rely on explicit visual controls such as pose images or TPS keypoint movements, which often lead to artifacts like inconsistent backgrounds, blurry hands, and distorted fingers. To address these challenges, we present the Implicit Motion-Audio Coupling (IMAC) method for co-speech gesture video generation. IMAC strengthens audio control by coupling implicit motion parameters, including pose and expression, with audio inputs. Our method uses a two-branch framework that combines an audio-to-motion generation branch with a video diffusion branch, enabling realistic gesture generation without additional inputs at inference time. To improve training efficiency, we propose a two-stage slow-fast training strategy that stays within memory constraints while allowing the model to learn meaningful gestures from long frame sequences. Furthermore, we introduce a large-scale dataset for co-speech gesture video generation and demonstrate that our method achieves state-of-the-art performance on this benchmark.
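To make the two-branch coupling concrete, the following is a minimal, hypothetical PyTorch sketch of how an audio-to-motion branch might feed implicit pose/expression parameters into a video diffusion branch. All module names, dimensions, and layer choices below are illustrative assumptions, not the authors' implementation; the actual IMAC architecture may differ substantially.

# Minimal sketch (assumed, not the authors' code) of an IMAC-style two-branch pipeline.
import torch
import torch.nn as nn

class AudioToMotionBranch(nn.Module):
    """Maps an audio feature sequence to implicit motion parameters
    (e.g., pose and expression codes), so no explicit visual control
    such as pose images or TPS keypoints is required at inference."""
    def __init__(self, audio_dim=128, motion_dim=64, hidden_dim=256):
        super().__init__()
        self.encoder = nn.GRU(audio_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, motion_dim)

    def forward(self, audio_feats):           # (B, T, audio_dim)
        h, _ = self.encoder(audio_feats)
        return self.head(h)                   # (B, T, motion_dim)

class VideoDiffusionBranch(nn.Module):
    """Stand-in for a video diffusion model conditioned on the coupled
    audio and implicit motion features (the real branch would be a
    latent video diffusion U-Net or transformer)."""
    def __init__(self, motion_dim=64, audio_dim=128, cond_dim=256):
        super().__init__()
        self.cond_proj = nn.Linear(motion_dim + audio_dim, cond_dim)

    def forward(self, noisy_latents, audio_feats, motion_params, t):
        cond = self.cond_proj(torch.cat([motion_params, audio_feats], dim=-1))
        # ... denoise noisy_latents conditioned on `cond` at timestep t ...
        return noisy_latents  # stub output for illustration

# Inference: audio alone drives both motion prediction and video generation.
audio = torch.randn(1, 50, 128)               # 50 frames of audio features
motion = AudioToMotionBranch()(audio)          # implicit pose/expression codes
latents = torch.randn(1, 50, 4, 32, 32)        # noisy video latents
video = VideoDiffusionBranch()(latents, audio, motion, t=torch.tensor([999]))

Under this reading, the coupling happens in the conditioning signal: the diffusion branch never sees pose images or keypoints, only audio features and the motion codes predicted from that same audio, which is what allows inference to proceed from speech alone.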