Abstract: The goal of designing more powerful feature representations has motivated the development of deep metric learning algorithms over the last few years. The idea is to transform data into a representation space in which prior similarity relationships between examples are preserved, e.g., distances between similar examples are smaller than those between dissimilar examples. While such approaches have produced impressive results, they often suffer from difficulties in training. In this paper, we introduce an improved triplet-based loss for deep metric learning. Our method minimizes distances between similar examples while maximizing distances to dissimilar examples chosen under a stochastic selection rule. Additionally, we propose a simple sampling strategy that focuses on locally maintaining the similarity relationships of examples within their neighborhoods. This technique aims to reduce the local overlap between different classes in different parts of the embedding space. Experimental results on three standard benchmark data sets confirm that our method provides more accurate and faster training than other state-of-the-art methods.
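The abstract describes a triplet-based loss combined with a stochastic rule for selecting dissimilar (negative) examples. The paper's exact loss and selection rule are not given here, so the sketch below shows only the standard ingredients: a triplet margin loss on squared Euclidean distances, plus an illustrative stochastic negative-sampling function (the exponential weighting toward nearby negatives is an assumption for illustration, not the authors' rule).

```python
import numpy as np


def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss: the positive should be closer to the
    anchor than the negative by at least `margin` (hinge on the difference
    of squared Euclidean distances)."""
    d_ap = np.sum((anchor - positive) ** 2)
    d_an = np.sum((anchor - negative) ** 2)
    return max(0.0, d_ap - d_an + margin)


def sample_negative(anchor, candidates, rng):
    """Illustrative stochastic selection rule (hypothetical): draw one
    negative at random, with sampling probability biased toward candidates
    that currently lie close to the anchor and are therefore more likely
    to violate the margin."""
    d = np.sum((candidates - anchor) ** 2, axis=1)
    weights = np.exp(-d)          # closer negatives are sampled more often
    weights /= weights.sum()
    idx = rng.choice(len(candidates), p=weights)
    return candidates[idx]


# Usage: one training triplet with a stochastically chosen negative.
rng = np.random.default_rng(0)
anchor = np.zeros(2)
positive = np.array([1.0, 0.0])
candidates = rng.normal(size=(5, 2))  # pool of dissimilar examples
negative = sample_negative(anchor, candidates, rng)
loss = triplet_loss(anchor, positive, negative)
```

In a full training pipeline, `loss` would be averaged over a minibatch of triplets and minimized by gradient descent on the embedding network's parameters; this sketch isolates only the per-triplet computation.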