CSS-Net: Domain Generalization in Category-level Pose Estimation via Corresponding Structural Superpoints

Published: 01 Jan 2024, Last Modified: 15 Sept 2025 · ICME 2024 · CC BY-SA 4.0
Abstract: Category-level pose estimation is crucial for estimating the pose and size of unseen objects. Previous methods, which are mainly trained and tested on data drawn from the same distribution, generalize poorly to unseen domains; applying them to new scenes or categories requires repeated data collection and retraining, which is cumbersome. To address this issue, we propose a domain generalization method for category-level pose estimation based on structural superpoints, which is trained solely on simulated data yet generalizes to unseen domain distributions in real datasets. Specifically, by extracting superpoints for structural correspondence in a self-supervised manner, our method achieves both cross-domain and cross-instance shape generalization. Accordingly, we design a network and a loss function, CoupleLoss, for regressing pose and size. We validate the effectiveness of our method on the Wild6D and REAL275 datasets, achieving state-of-the-art results.
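The abstract does not specify the form of CoupleLoss or the network outputs. As a rough illustration only, the sketch below shows a generic pose-and-size regression objective in PyTorch, combining rotation, translation, and size terms with hypothetical weights (w_rot, w_trans, w_size); it is an assumed stand-in for what "regressing pose and size" typically involves, not the paper's actual loss.

```python
import torch
import torch.nn.functional as F

def pose_size_loss(pred_rot, gt_rot, pred_trans, gt_trans, pred_size, gt_size,
                   w_rot=1.0, w_trans=1.0, w_size=1.0):
    """Illustrative pose-and-size regression objective (not the paper's CoupleLoss).

    pred_rot / gt_rot:     (B, 3, 3) rotation matrices
    pred_trans / gt_trans: (B, 3)    translation vectors
    pred_size / gt_size:   (B, 3)    object bounding-box extents
    """
    # Rotation term: Frobenius distance between predicted and ground-truth rotations.
    rot_loss = torch.mean(torch.norm(pred_rot - gt_rot, dim=(1, 2)))
    # Translation and size terms: smooth-L1 (Huber) regression.
    trans_loss = F.smooth_l1_loss(pred_trans, gt_trans)
    size_loss = F.smooth_l1_loss(pred_size, gt_size)
    return w_rot * rot_loss + w_trans * trans_loss + w_size * size_loss

# Toy usage with random tensors (batch of 4).
B = 4
R = torch.linalg.qr(torch.randn(B, 3, 3)).Q  # roughly orthonormal rotation-like matrices
loss = pose_size_loss(R, R.clone(),
                      torch.randn(B, 3), torch.randn(B, 3),
                      torch.rand(B, 3), torch.rand(B, 3))
print(loss.item())
```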