Unsupervised Domain Adaptive Hand Mesh Reconstruction of 2D Images in the Wild

Published: 2025 · Last Modified: 25 Jan 2026 · ICANN (2) 2025 · License: CC BY-SA 4.0
Abstract: In recent years, single-view hand mesh reconstruction has seen significant advancements. While existing methods perform well on motion-capture datasets, the lack of accurately annotated in-the-wild datasets remains a critical challenge for supervised 3D hand mesh reconstruction. Moreover, due to domain gaps, directly applying existing models to in-the-wild data yields unsatisfactory results. To address these issues, we propose UnDAHand, which transfers knowledge learned from motion-capture source domains to in-the-wild target domains in an unsupervised manner. Specifically, to bridge the domain gap between datasets, we introduce a fine-to-coarse pseudo-label update strategy that uses a "teacher-for-teacher" correction process to generate pseudo labels. Furthermore, recognizing the importance of consistency learning and data diversity for improving model transferability and generalization, we develop an augmentation consistency learning module that captures more robust data representations. Experimental results demonstrate the effectiveness of our method for domain adaptation across various datasets and its robustness on unlabeled in-the-wild data.
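The abstract does not give implementation details, but teacher-student pseudo-labeling with augmentation consistency is commonly built from two pieces: an exponential-moving-average (EMA) teacher that produces pseudo labels on the target domain, and a consistency loss that pushes the student's prediction on an augmented view toward that pseudo label. The sketch below illustrates those two pieces in isolation; the function names, the MSE choice of loss, and the momentum value are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def ema_update(teacher_w, student_w, momentum=0.999):
    # Illustrative EMA teacher update (a common choice in
    # unsupervised domain adaptation, assumed here, not taken
    # from the paper): the teacher's weights slowly track the
    # student's, which stabilizes the pseudo labels it emits.
    return momentum * teacher_w + (1.0 - momentum) * student_w

def consistency_loss(student_pred, teacher_pseudo_label):
    # Illustrative augmentation-consistency term: MSE between the
    # student's mesh prediction on an augmented target image and the
    # teacher's pseudo label computed on the unaugmented view.
    return float(np.mean((student_pred - teacher_pseudo_label) ** 2))

# Toy usage with flat weight vectors standing in for model parameters.
teacher = np.zeros(4)
student = np.ones(4)
teacher = ema_update(teacher, student, momentum=0.9)   # -> all 0.1
loss = consistency_loss(np.zeros(2), np.ones(2))       # -> 1.0
```

In such schemes the consistency loss is typically weighted against any supervised source-domain loss and ramped up over training, so that early, noisy pseudo labels do not dominate.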