Keywords: Metric Learning, Deep Learning
Abstract: Computer vision problems often use deep learning models to extract image features, also known as embeddings. The loss function used during training strongly influences the quality of the generated embeddings.
In this work, a loss function based on the decidability index is proposed to improve the quality of embeddings for the verification routine.
Our proposal, the D-loss, avoids disadvantages of Triplet-based losses, such as hard-sample mining and tricky parameter tuning, which can lead to slow convergence.
The proposed approach is compared against the Softmax (cross-entropy), Triplets Soft-Hard, and the Multi Similarity losses in four different benchmarks: MNIST, Fashion-MNIST, CIFAR10 and CASIA-IrisV4.
The achieved results show the efficacy of the proposal when compared to other popular losses in the literature. The D-loss computation, besides being simple, non-parametric, and easy to implement, benefits both inter-class separability and intra-class compactness. Our code will be available on GitHub.
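To make the idea concrete, the decidability index (Daugman's d') measures how well the intra-class and inter-class distance distributions are separated. Below is a minimal NumPy sketch of this index and of a loss that maximizes it by minimizing its negative; the function names and the sign convention are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def decidability(intra, inter):
    """Decidability index d' between intra-class and inter-class
    distance distributions: |mu_inter - mu_intra| / sqrt((var_intra + var_inter) / 2)."""
    mu_i, mu_e = np.mean(intra), np.mean(inter)
    var_i, var_e = np.var(intra), np.var(inter)
    return abs(mu_e - mu_i) / np.sqrt((var_i + var_e) / 2.0)

def d_loss(intra, inter):
    """Hypothetical loss form: maximizing decidability by minimizing
    its negative (sign convention assumed for illustration)."""
    return -decidability(intra, inter)

# Usage: well-separated distance distributions yield a high d'
# (and thus a low loss), overlapping ones a low d'.
well_separated = decidability([0.1, 0.2, 0.15, 0.12], [0.9, 0.8, 0.85, 0.95])
overlapping = decidability([0.4, 0.5, 0.45], [0.5, 0.55, 0.6])
```

Because d' depends only on the means and variances of the two distance distributions, it needs no margin hyperparameter and no hard-sample mining, which is the property the abstract highlights.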
Supplementary Material: zip