Keywords: Gaussian Splatting, face avatar, editable avatars
TL;DR: We introduce a method for unsupervised 3D facial part segmentation and editing in Gaussian Splatting avatars, enabling fine-grained control like beard or mustache editing while preserving identity and quality.
Abstract: Facial editing is an important task with applications in entertainment, virtual reality, and digital avatars.
Most existing approaches rely on generative models in the 2D image domain, while in 3D the task is typically performed through labor-intensive manual editing.
We propose FaceParts, a framework for unsupervised segmentation and editing of Gaussian Splatting avatars. Unlike existing 2D or mesh-assisted methods, our approach operates directly in the Gaussian domain, decomposing avatars into semantically coherent facial parts without supervision. The method integrates feature disentanglement, density-based clustering, and FLAME-anchored part transfer, enabling precise editing and cross-avatar part swapping. Experiments on the NeRSemble dataset with 11 subjects demonstrate robust isolation of features such as beards, eyebrows, eyes, and mustaches. Quantitative evaluation confirms that transferred segments adapt dynamically to pose and expression, while maintaining identity consistency (ID = 0.943), low Average Expression Distance (AED = 0.021), and low Average Pose Distance (APD = 0.004).
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 5886