Abstract: 2D image-based 3D model retrieval (IBMR) usually relies on abundant explicit supervision on 2D images, together with unlabeled 3D models, to learn domain-aligned yet class-discriminative features for the retrieval task. However, collecting large-scale 2D labels is costly and time-consuming. We therefore explore a challenging IBMR task in which only few-shot labeled 2D images are available while the remaining 2D and 3D samples are unlabeled. The limited annotation of 2D images further increases the difficulty of learning domain-aligned yet discriminative features. To address this, we propose a cross-domain prototype contrastive loss (CPCL) for the few-shot IBMR task. Specifically, we capture semantic information to learn class-discriminative features within each domain by minimizing an intra-domain prototype contrastive loss. In addition, we perform inter-domain transferable contrastive learning to align the features of instances and prototypes of the same class across domains. Comprehensive experiments on the popular MI3DOR and MI3DOR-2 benchmarks validate the superiority of CPCL.
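To make the idea of a prototype contrastive loss concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it assumes L2-normalized features, per-class mean prototypes, and a temperature hyperparameter `tau`; all names and the toy data are illustrative only.

```python
import torch
import torch.nn.functional as F


def class_prototypes(features: torch.Tensor, labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Mean feature per class, L2-normalized (features: [N, D], labels: [N])."""
    protos = torch.zeros(num_classes, features.size(1), device=features.device)
    protos.index_add_(0, labels, features)
    counts = torch.bincount(labels, minlength=num_classes).clamp(min=1).unsqueeze(1)
    return F.normalize(protos / counts, dim=1)


def prototype_contrastive_loss(features: torch.Tensor, labels: torch.Tensor,
                               prototypes: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Pull each instance toward its own class prototype and push it away from the others."""
    logits = F.normalize(features, dim=1) @ prototypes.t() / tau  # [N, C] scaled cosine similarities
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    # Toy usage: few-shot labeled 2D-image features (hypothetical data).
    feats_2d = F.normalize(torch.randn(8, 128), dim=1)
    labels_2d = torch.randint(0, 5, (8,))
    protos_2d = class_prototypes(feats_2d, labels_2d, num_classes=5)
    # Intra-domain term for the 2D domain; an analogous term could use 3D features
    # with pseudo-labels, and a cross-domain term could compare instances of one
    # domain against the other domain's prototypes of the same class.
    loss = prototype_contrastive_loss(feats_2d, labels_2d, protos_2d)
    print(loss.item())
```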