Abstract: Reconstructing posed 3D human models from monocular images has important applications in the sports industry, including performance tracking, injury prevention, and virtual training. In this work, we combine 3D human pose and shape estimation with 3D Gaussian Splatting (3DGS), a representation of the scene composed of a mixture of Gaussians. This allows training or fine-tuning a human model predictor on multi-view images alone, without 3D ground truth. Predicting such mixtures for a human from a single input image is challenging due to self-occlusions and dependence on articulations, while also needing to retain enough flexibility to accommodate a variety of clothes and poses. Our key observation is that the vertices of standardized human meshes (such as SMPL) can provide an adequate spatial density and approximate initial positions for the Gaussians. We can then train a transformer model to jointly predict comparatively small adjustments to these positions, as well as the other 3DGS attributes and the SMPL parameters. We show empirically that this combination (using only multi-view supervision) can achieve near real-time inference of 3D human models from a single image without expensive diffusion models or 3D point supervision, making it ideal for the sports industry at any level. More importantly, rendering is an effective auxiliary objective for refining 3D pose estimation by accounting for clothes and other geometric variations. The code is available at https://github.com/prosperolo/GST.
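The key idea above can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' implementation): the SMPL mesh has a fixed topology of 6890 vertices, and each Gaussian's mean is taken as a mesh vertex plus a small predicted offset, with the remaining 3DGS attributes (scales, rotations, opacities, colors) predicted per vertex. Random arrays stand in for the posed SMPL vertices and the transformer's outputs.

```python
import numpy as np

# Hypothetical stand-ins: SMPL defines a fixed-topology mesh with 6890
# vertices; here random points replace the posed mesh and the network outputs.
rng = np.random.default_rng(0)
num_vertices = 6890
smpl_vertices = rng.normal(size=(num_vertices, 3))  # posed mesh vertices

# Per-vertex predictions (a transformer in the paper; random stand-ins here).
position_offsets = 0.01 * rng.normal(size=(num_vertices, 3))  # small adjustments
log_scales = rng.normal(size=(num_vertices, 3))               # anisotropic scales
rotations = rng.normal(size=(num_vertices, 4))                # quaternions
rotations /= np.linalg.norm(rotations, axis=1, keepdims=True) # normalize
opacities = 1.0 / (1.0 + np.exp(-rng.normal(size=(num_vertices, 1))))
colors = rng.uniform(size=(num_vertices, 3))                  # e.g. SH degree 0

# Gaussian means = SMPL vertex positions + comparatively small adjustments.
gaussian_means = smpl_vertices + position_offsets
```

In training, these attributes would be rendered with a differentiable 3DGS rasterizer and supervised with a photometric loss against the multi-view images, so no 3D ground truth is required.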