Dynamic Support Information Mining for Category-Agnostic Pose Estimation
Abstract: Category-agnostic pose estimation (CAPE) aims to predict poses in query images based on a few support images with pose annotations. Existing methods localize arbitrary keypoints through similarity matching between support keypoint features and query image features. However, these methods focus primarily on mining information from the query images, neglecting the fact that support samples with keypoint annotations contain category-specific fine-grained semantic information and structural priors. In this paper, we propose a Support-based Dynamic Perception Network (SDPNet) for robust and accurate CAPE. On the one hand, SDPNet models complex dependencies between support keypoints, constructing category-specific dynamic skeletons to guide interaction among query keypoints. On the other hand, SDPNet extracts fine-grained semantic information from support samples, dynamically modulating the refinement of query features. Our method outperforms previous state-of-the-art (SOTA) methods on public datasets by a large margin.
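To make the two ideas in the abstract concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: it assumes simple tensor shapes, uses cosine-similarity matching between support keypoint features and query features, builds a soft "dynamic skeleton" from support keypoint affinities to let query keypoints interact, and applies a support-conditioned (FiLM-style) modulation. All module and variable names here are hypothetical illustrations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DynamicSkeletonRefiner(nn.Module):
    """Hypothetical module sketching the abstract's two ideas:
    (1) similarity matching of support keypoint features against the query
        feature map, and
    (2) support-driven refinement: a soft adjacency ("dynamic skeleton")
        built from support keypoint affinities guides interaction among
        query keypoints, and pooled support context modulates the result.
    Dimensions and layer choices are assumptions, not the paper's design."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.msg = nn.Linear(dim, dim)    # message transform for skeleton-guided propagation
        self.gamma = nn.Linear(dim, dim)  # FiLM-style scale from support context
        self.beta = nn.Linear(dim, dim)   # FiLM-style shift from support context

    def forward(self, support_kpt_feats: torch.Tensor, query_feat_map: torch.Tensor):
        # support_kpt_feats: (K, C), one feature per annotated support keypoint
        # query_feat_map:    (C, H, W), backbone features of the query image
        K, C = support_kpt_feats.shape
        _, H, W = query_feat_map.shape

        # 1) Similarity matching: correlate each support keypoint with the query map.
        q = F.normalize(query_feat_map.flatten(1), dim=0)       # (C, H*W)
        s = F.normalize(support_kpt_feats, dim=1)                # (K, C)
        sim = s @ q                                              # (K, H*W) similarity maps
        attn = sim.softmax(dim=-1)
        query_kpt_feats = attn @ query_feat_map.flatten(1).t()   # (K, C) pooled query keypoint features

        # 2) Dynamic skeleton: support keypoint affinities act as a soft adjacency
        #    that guides message passing among query keypoint features.
        adj = (s @ s.t()).softmax(dim=-1)                        # (K, K)
        query_kpt_feats = query_kpt_feats + adj @ self.msg(query_kpt_feats)

        # 3) Support-conditioned modulation of the refined query features.
        ctx = support_kpt_feats.mean(dim=0)                      # (C,) pooled support context
        query_kpt_feats = self.gamma(ctx) * query_kpt_feats + self.beta(ctx)

        # Keypoint heatmaps recovered from the similarity maps.
        heatmaps = sim.view(K, H, W)
        return query_kpt_feats, heatmaps
```

A quick usage check under the assumed shapes: `DynamicSkeletonRefiner(dim=256)(torch.randn(17, 256), torch.randn(256, 64, 64))` returns refined keypoint features of shape `(17, 256)` and heatmaps of shape `(17, 64, 64)`.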