GradML: A Gradient-based Loss for Deep Metric Learning

Published: 18 Oct 2021, Last Modified: 05 May 2023
Venue: ICBINB@NeurIPS2021 Poster
Keywords: deep metric learning, gradient, deep neural networks
TL;DR: We propose a gradient-based loss for deep metric learning.
Abstract: Deep metric learning (ML) uses a carefully designed loss function to learn distance metrics that improve discriminative ability in tasks such as clustering and retrieval. Most loss functions are designed around the distances between embeddings so as to induce certain properties, without examining how such losses move those embeddings through their gradients during optimization. In this work, we analyze the gradients of various ML loss functions and propose a gradient-based loss for ML (GradML). Instead of formulating the loss directly, we first formulate the gradients of the loss and then derive from them the loss to be optimized. The result has a simple formulation and a lower computational cost than other methods. We evaluate our approach on three datasets and find that performance depends on dataset properties such as inter-class variance.
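The abstract does not spell out the GradML formulation, so the following is only a minimal sketch of the gradient-first idea it describes: specify the gradient the loss should apply to each pairwise distance, then take the forward loss value to be the corresponding antiderivative. Everything concrete here is an assumption for illustration, not the paper's actual loss: the class name GradientFirstPairLoss, the constant-magnitude gradient choice (g(d) = +1 for positive pairs, -1 for negative pairs, whose antiderivative is linear in the distance), and the toy pair setup.

```python
import torch

class GradientFirstPairLoss(torch.autograd.Function):
    """Sketch of a gradient-first loss (hypothetical, not the paper's GradML):
    the backward pass applies a hand-specified gradient g(d) to each pairwise
    distance d; the forward value is the corresponding antiderivative."""

    @staticmethod
    def forward(ctx, dist, is_pos):
        ctx.save_for_backward(dist, is_pos)
        # Assumed gradient choice: g(d) = +1 for positive pairs (pull together)
        # and -1 for negative pairs (push apart). Its antiderivative is a loss
        # that is linear in the distance.
        loss = torch.where(is_pos, dist, -dist)
        return loss.mean()

    @staticmethod
    def backward(ctx, grad_output):
        dist, is_pos = ctx.saved_tensors
        # Apply the specified gradient g(d) directly, scaled by the incoming
        # gradient and the 1/N factor introduced by mean() in forward.
        g = torch.where(is_pos, torch.ones_like(dist), -torch.ones_like(dist))
        return grad_output * g / dist.numel(), None

# Toy usage: 32 embedding pairs with random positive/negative labels.
a = torch.randn(32, 64, requires_grad=True)
b = torch.randn(32, 64, requires_grad=True)
dist = (a - b).norm(dim=1)          # Euclidean distance per pair
is_pos = torch.rand(32) < 0.5       # True = same-class (positive) pair
loss = GradientFirstPairLoss.apply(dist, is_pos)
loss.backward()                     # embeddings receive the specified gradients
```

In this sketch the custom backward happens to equal what autograd would compute from the forward alone, precisely because the loss is the exact antiderivative of the specified gradient; the custom Function only makes the gradient-first design explicit. A practical variant would presumably cap the negative-pair gradient at a margin, but the point of the split is that any desired gradient behavior can be written down first and its antiderivative used as the training loss.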
Category: Negative result: I would like to share my insights and negative results on this topic with the community; Stuck paper: I hope to get ideas in this workshop that help me get unstuck and improve this paper