GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians
Abstract: We present GaussianAvatar, an efficient approach to creating realistic human avatars with dynamic 3D appearances from a single video. We start by introducing animatable 3D Gaussians to explicitly represent humans in various poses and clothing styles. Such an explicit and animatable representation can fuse 3D appearances more efficiently and consistently from 2D observations. Our representation is further augmented with dynamic properties to support pose-dependent appearance modeling, where a dynamic appearance network along with an optimizable feature tensor is designed to learn the motion-to-appearance mapping. Moreover, by leveraging the differentiable motion condition, our method enables joint optimization of motions and appearances during avatar modeling, which helps to tackle the long-standing issue of inaccurate motion estimation in monocular settings. The efficacy of GaussianAvatar is validated on both a public dataset and our collected dataset, demonstrating its superior performance in terms of appearance quality and rendering efficiency. The code and dataset are available at https://github.com/aipixel/GaussianAvatar.
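To illustrate the motion-to-appearance mapping described in the abstract, the following is a minimal, hypothetical PyTorch sketch, not the authors' implementation: it pairs an optimizable per-Gaussian feature tensor with a small network that, conditioned on a pose code (here assumed to be SMPL pose parameters), predicts pose-dependent offsets for the 3D Gaussians. All class and parameter names, dimensions, and the choice of predicted quantities are illustrative assumptions.

```python
# Hypothetical sketch of a dynamic appearance network over animatable 3D Gaussians.
import torch
import torch.nn as nn

class DynamicAppearanceNet(nn.Module):
    """Maps (pose code, per-Gaussian feature) to pose-dependent offsets (illustrative)."""
    def __init__(self, num_gaussians: int, feat_dim: int = 32, pose_dim: int = 72):
        super().__init__()
        # Optimizable feature tensor: one latent vector per Gaussian.
        self.point_features = nn.Parameter(torch.zeros(num_gaussians, feat_dim))
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + pose_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 3 + 3),  # per-Gaussian position offset + color offset
        )

    def forward(self, pose: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # Broadcast the pose code to every Gaussian and decode offsets jointly
        # from the pose and the learned per-Gaussian features.
        n = self.point_features.shape[0]
        cond = torch.cat([self.point_features, pose.expand(n, -1)], dim=-1)
        out = self.mlp(cond)
        return out[:, :3], out[:, 3:]  # (xyz offsets, rgb offsets)

# Usage sketch: the predicted offsets would modulate canonical 3D Gaussians
# before animating them to the target pose and splatting to the image.
net = DynamicAppearanceNet(num_gaussians=10000)
pose = torch.zeros(72)  # assumed SMPL-style pose conditioning signal
xyz_offset, rgb_offset = net(pose)
```

Because both the feature tensor and the pose conditioning are differentiable, gradients from the rendering loss can flow back into the motion parameters as well, which is the property the abstract refers to when describing joint optimization of motions and appearances.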