Abstract: Triplet loss is commonly used in descriptor learning, where performance relies heavily on how triplets are mined. The typical solution is to first pick pairs of intra-class patches (positives) from the dataset to form batches, and then to select in-batch negatives to construct triplets. To collect highly informative triplets, researchers mainly focus on mining hard negatives in the second stage, while paying relatively little attention to constructing informative batches; i.e., matching pairs are often sampled from the dataset uniformly at random. To address this issue, we propose AdaSample, an adaptive online batch sampler. Specifically, we sample positives based on their informativeness, and formulate our hardness-aware positive mining pipeline within a novel maximum loss minimization training protocol. The efficacy of the proposed method is demonstrated on several standard benchmarks, where it yields a significant and consistent performance gain on top of strong existing baselines. The source code and pretrained model will be released upon acceptance.
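To make the two-stage pipeline the abstract describes concrete, the following is a minimal sketch of the second stage only: in-batch hardest-negative mining for a triplet-style loss. It is a generic baseline illustration, not the AdaSample sampler itself; the function name, the Euclidean metric, and the margin value are illustrative assumptions.

```python
import numpy as np

def hardest_negative_triplet_loss(anchors, positives, margin=1.0):
    """Generic in-batch hardest-negative triplet loss (illustrative, not AdaSample).

    anchors, positives: (B, D) arrays of descriptors; row i of each forms a
    matching (intra-class) pair, as in the batch-construction stage above.
    """
    # Pairwise Euclidean distances between every anchor and every positive: (B, B).
    d = np.linalg.norm(anchors[:, None, :] - positives[None, :, :], axis=-1)
    pos_d = np.diag(d)                        # distance to the true match (row i, col i)
    neg_d = d + np.eye(len(d)) * 1e6          # mask the diagonal so the match is never picked
    hardest_neg = neg_d.min(axis=1)           # closest non-matching descriptor per anchor
    # Standard margin-based triplet hinge, averaged over the batch.
    return np.maximum(0.0, margin + pos_d - hardest_neg).mean()
```

With well-separated descriptors the hinge is inactive and the loss is zero; when a non-matching descriptor falls inside the margin of its anchor, the loss becomes positive, which is precisely what makes such triplets "hard" and informative.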
Keywords: Descriptor, Correspondence