A Stricter Constraint Produces Outstanding Matching: Learning Reliable Image Matching with a Quadratic Hinge Triplet Loss Network

Apr 10, 2021 (edited May 15, 2021) · GI 2021
  • Keywords: Image matching, HardNet, OANet, SIFT, Large scale, Challenging environments, Pose accuracy
  • Abstract: Image matching is widely used in many applications, such as visual-based localization and 3D reconstruction. Compared with traditional local features (e.g., SIFT) and outlier-elimination methods (e.g., RANSAC), learning-based image matching methods (e.g., HardNet and OANet) show promising performance in challenging environments and on large-scale benchmarks. However, existing learning-based methods suffer from noise in the training data, and existing loss functions, e.g., the hinge loss, do not work well in image matching networks. In this paper, we propose an end-to-end image matching method that achieves more accurate and robust performance with less training data. First, a novel data cleaning strategy is proposed to remove noise from the training dataset. Second, we strengthen the matching constraints by proposing a novel quadratic hinge triplet (QHT) loss function to improve the network. Finally, we apply a stricter OANet for sample judgement to produce more outstanding matching. The proposed method achieves state-of-the-art performance on the large-scale and challenging Phototourism dataset and took 1st place in the CVPR 2020 Image Matching Challenge Workshop Track 1 (unlimited keypoints and standard descriptors) under the reconstructed pose-accuracy metric.
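The "stricter constraint" in the QHT loss can be read as squaring the hinge term of a standard triplet margin loss, so that larger margin violations are penalized quadratically rather than linearly. The following NumPy sketch illustrates that idea; the function name, margin default, and exact formulation are assumptions for illustration, not the paper's verbatim definition.

```python
import numpy as np

def quadratic_hinge_triplet_loss(anchor, positive, negative, margin=1.0):
    """Squared (quadratic) hinge variant of the triplet margin loss.

    Hypothetical sketch of a QHT-style loss: the hinge term
    max(0, margin + d(a, p) - d(a, n)) is squared, so descriptors
    that badly violate the margin incur a sharply larger penalty
    than under the ordinary (linear) hinge.
    Inputs are (batch, dim) arrays of anchor/positive/negative descriptors.
    """
    d_pos = np.linalg.norm(anchor - positive, axis=1)  # anchor-positive distances
    d_neg = np.linalg.norm(anchor - negative, axis=1)  # anchor-negative distances
    hinge = np.maximum(margin + d_pos - d_neg, 0.0)    # per-triplet margin violation
    return float((hinge ** 2).mean())                  # quadratic penalty, averaged

# Example: a triplet that satisfies the margin contributes zero loss,
# while a violating triplet is penalized by the squared violation.
a = np.array([[0.0, 0.0]])
p = np.array([[0.0, 0.0]])
easy_n = np.array([[2.0, 0.0]])   # far negative: margin satisfied
hard_n = np.array([[0.5, 0.0]])   # close negative: margin violated by 0.5
print(quadratic_hinge_triplet_loss(a, p, easy_n))  # → 0.0
print(quadratic_hinge_triplet_loss(a, p, hard_n))  # → 0.25
```

Compared with a linear hinge, the quadratic form yields gradients that grow with the size of the violation, which is one plausible reading of why a "stricter" constraint helps hard triplets dominate training.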