Abstract: We propose an action-conditional human motion generation method using variational implicit neural representations (INRs). The variational formalism enables action-conditional distributions over INRs, from which one can easily sample representations to generate novel human motion sequences. Our method offers variable-length sequence generation by construction, because a part of the INR is optimized for a whole sequence of arbitrary length together with temporal embeddings. In contrast, previous works
reported difficulties with modeling variable-length sequences. We confirm that our method with a Transformer decoder outperforms all relevant methods on HumanAct12, NTU-RGBD, and UESTC datasets in
terms of realism and diversity of generated motions. Surprisingly, even
our method with an MLP decoder consistently outperforms the state-of-the-art Transformer-based auto-encoder. In particular, we show that variable-length motions generated by our method are better than fixed-length motions generated by the state-of-the-art method in terms of realism and diversity. Code at https://github.com/PACerv/ImplicitMotion.
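To make the generation mechanism concrete, the following is a minimal PyTorch sketch of the core idea, not the authors' released code (see the repository above for that): sample a sequence-level latent from an action-conditional Gaussian prior, then decode each frame from that latent plus a temporal embedding, so sequences of any length follow by construction. All names and sizes here (ActionConditionalINR, latent_dim, pose_dim, and the use of normalized timestamps as a stand-in for the paper's temporal embeddings) are illustrative assumptions.

import torch
import torch.nn as nn

class ActionConditionalINR(nn.Module):
    # Hypothetical sketch: per-action Gaussian prior over sequence-level
    # latents, plus a decoder mapping (latent, time) -> pose at that frame.
    def __init__(self, num_actions=12, latent_dim=64, pose_dim=72, hidden=256):
        super().__init__()
        self.prior_mu = nn.Embedding(num_actions, latent_dim)
        self.prior_logvar = nn.Embedding(num_actions, latent_dim)
        self.decoder = nn.Sequential(            # MLP decoder variant
            nn.Linear(latent_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, pose_dim),
        )

    @torch.no_grad()
    def generate(self, action, length):
        # One latent code is sampled for the whole sequence (reparameterization).
        mu = self.prior_mu(action)
        std = (0.5 * self.prior_logvar(action)).exp()
        z = mu + std * torch.randn_like(std)
        # Normalized timestamps serve as a trivial temporal embedding here;
        # the decoder is queried once per frame, so any length works.
        t = torch.linspace(0.0, 1.0, length).unsqueeze(1)   # (length, 1)
        z_rep = z.expand(length, -1)                        # (length, latent_dim)
        return self.decoder(torch.cat([z_rep, t], dim=1))   # (length, pose_dim)

model = ActionConditionalINR()
action = torch.tensor([3])                        # one action label
motion_60 = model.generate(action, length=60)     # 60-frame sequence
motion_200 = model.generate(action, length=200)   # longer sequence, same model

Because the decoder is conditioned on a per-frame temporal input rather than a fixed-size sequence tensor, changing the requested length requires no retraining or padding, which is the sense in which variable-length generation holds by construction.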