Abstract: Local feature detection and description are essential preliminary tasks in a multitude of computer vision applications. Despite the prowess of deep neural networks in feature extraction, they still struggle to capture globally invariant and robust features, especially in dynamic scenes and in regions with simple, repetitive geometric structures. This paper introduces a multi-scale feature fusion framework, SPADesc, which addresses these challenges by leveraging dynamic weighted fusion (DWF) and semantic priors. We integrate convolutional and self-attention mechanisms to strengthen local feature detection and description in complex environments. Our approach employs a Parallel Convolution and Attention (PCA) module to generate descriptors that capture both local and global scales. Additionally, a Semantic-Guided (SG) module produces class-aware global mask information, which implicitly guides the selection of keypoints and descriptors. By incorporating a Semantically Weighted (SW) loss function, we further enhance the robustness and discriminative power of the descriptors. Extensive experimental results across various visual tasks demonstrate significant performance improvements, highlighting the superior adaptability and precision of our proposed model. The code is available at https://github.com/Diffcc/SPADesc.
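To make the PCA and DWF ideas concrete, the sketch below shows one plausible way to combine a convolutional branch (local structure) with a self-attention branch (global context) using a learned per-channel fusion gate. This is a minimal, hypothetical PyTorch illustration: the class name, gating scheme, and tensor shapes are assumptions for exposition, not the authors' implementation (see the linked repository for the actual code).

```python
import torch
import torch.nn as nn

class ParallelConvAttention(nn.Module):
    """Hypothetical PCA-style block: a convolutional branch captures local
    structure, a self-attention branch captures global context, and a
    dynamic-weighted-fusion (DWF) gate mixes the two per channel."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Local branch: standard 3x3 convolution.
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Global branch: multi-head self-attention over spatial positions.
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)
        # DWF gate: predicts per-channel mixing weights from pooled features.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.conv(x)
        # Flatten to (B, H*W, C) for self-attention, then restore the map.
        tokens = self.norm(x.flatten(2).transpose(1, 2))
        global_ctx, _ = self.attn(tokens, tokens, tokens)
        global_ctx = global_ctx.transpose(1, 2).reshape(b, c, h, w)
        # Dynamic weighted fusion: gate in [0, 1] trades off the two branches.
        g = self.gate(x)
        return g * local + (1.0 - g) * global_ctx


if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)   # toy feature map
    block = ParallelConvAttention(64)
    print(block(feat).shape)            # torch.Size([2, 64, 32, 32])
```

A learned gate of this kind lets the network lean on the attention branch in texture-poor, repetitive regions while keeping convolutional detail where local structure is distinctive, which is the intuition behind combining the two branches.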