Implicit Shape Avatar Generalization across Pose and Identity

Guillaume Loranchet, Pierre Hellier, François Schnitzler, Adnane Boukhayma, João Regateiro, Franck Multon

Published: 2025 · Last Modified: 05 Mar 2026 · Eurographics (Short Papers) 2025 · CC BY-SA 4.0
Abstract: The creation of realistic animated avatars has become a hot topic in both academia and the creative industry. Recent advances in deep learning and implicit representations have opened new research avenues, particularly in enhancing avatar detail with lightweight models. This paper introduces an improvement over the state-of-the-art implicit Fast-SNARF method that permits generalization to novel motions and shape identities. Fast-SNARF trains two networks: an occupancy network that predicts the shape of a character in canonical space, and a Linear Blend Skinning network that deforms it into arbitrary poses. However, it requires a separate model for each subject. We extend this work by conditioning both networks on an identity parameter, enabling a single model to generalize across multiple identities without increasing the model's size compared to Fast-SNARF.
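The conditioning idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: both small MLPs take a query point in canonical space concatenated with a per-subject identity code, mirroring how the paper conditions the occupancy and Linear Blend Skinning networks on an identity parameter. The network sizes, the identity-code dimension `D_ID`, and the bone count `N_BONES` are illustrative assumptions.

```python
import numpy as np

def mlp_init(sizes, rng):
    """Random weights for a small fully connected network (illustrative)."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers
    return x

rng = np.random.default_rng(0)
D_ID, N_BONES = 8, 24  # assumed identity-code size and SMPL-like bone count

# Occupancy network: (canonical point, identity code) -> occupancy probability
occ_net = mlp_init([3 + D_ID, 64, 64, 1], rng)
# LBS network: (canonical point, identity code) -> skinning weights per bone
lbs_net = mlp_init([3 + D_ID, 64, 64, N_BONES], rng)

def occupancy(x_canonical, z_id):
    h = np.concatenate([x_canonical, z_id], axis=-1)
    return 1.0 / (1.0 + np.exp(-mlp_forward(occ_net, h)))  # sigmoid

def skinning_weights(x_canonical, z_id):
    h = np.concatenate([x_canonical, z_id], axis=-1)
    logits = mlp_forward(lbs_net, h)
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)  # softmax over bones

pts = rng.standard_normal((5, 3))               # query points in canonical space
z = np.tile(rng.standard_normal(D_ID), (5, 1))  # one identity code, shared by all points

occ = occupancy(pts, z)          # shape (5, 1), values in (0, 1)
w = skinning_weights(pts, z)     # shape (5, 24), rows sum to 1
```

Because the identity code is just an extra input, swapping `z` for another subject's code queries a different avatar from the same shared weights, which is what keeps the model size constant across identities.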