Motion-Aware Surface Smoothing for Monocular Avatar Representations

05 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: 3D Gaussian Splatting, Avatar Modeling, Rendering, Geometry
Abstract: 3D Gaussian Splatting (3DGS) has become a popular representation for 3D avatar modeling due to its fast training and real-time rendering. However, state-of-the-art methods struggle to generalize from sparse inputs and often fail to recover realistic geometry. We introduce a motion-aware surface smoothing framework that improves 3DGS for learning from monocular human videos. Our method regularizes the training of Gaussian parameters, modulates Adaptive Density Control (ADC) to improve surface quality, and supervises Gaussian motions under unseen camera viewpoints. Enforcing surface smoothness yields superior geometry contours and higher-fidelity rendering. Across five public datasets, including MVHumanNet, DNA-Rendering, ActorsHQ, and outdoor videos, our approach consistently outperforms prior methods in novel view synthesis, novel pose animation, and 3D shape reconstruction. Code will be published upon acceptance.
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 2464