Differentiable Top-$k$ with Optimal Transport

Published: 12 Dec 2020, Last Modified: 05 May 2023 · LMCA 2020 Poster
Keywords: Top-k, optimal transport, differentiable programming, end-to-end learning
TL;DR: We propose a smooth surrogate of the top-$k$ operation that can be used as a model component and learned end-to-end. We apply it to deep $k$NN and beam search.
Abstract: The top-$k$ operation, i.e., finding the $k$ largest or smallest elements from a collection of scores, is an important model component widely used in information retrieval, machine learning, and data mining. However, if the top-$k$ operation is implemented in an algorithmic way, e.g., using the bubble sort algorithm, the resulting model cannot be trained in an end-to-end way using prevalent gradient descent algorithms. This is because these implementations typically involve swapping indices, whose gradient cannot be computed. Moreover, the corresponding mapping from the input scores to the indicator vector of whether each element belongs to the top-$k$ set is inherently discontinuous. To address this issue, we propose a smoothed approximation, namely the SOFT (Scalable Optimal transport-based diFferenTiable) top-$k$ operator. Specifically, our SOFT top-$k$ operator approximates the output of the top-$k$ operation as the solution of an Entropic Optimal Transport (EOT) problem. The gradient of the SOFT operator can then be efficiently approximated based on the optimality conditions of the EOT problem. We apply the proposed operator to the $k$-nearest neighbors and beam search algorithms, and demonstrate improved performance.
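To make the EOT idea concrete, the sketch below is a minimal, illustrative PyTorch implementation (not the authors' released code). It assumes the two-point-target formulation suggested by the abstract: the $n$ uniformly weighted scores are transported onto two targets $\{0, 1\}$ with marginal masses $(n-k)/n$ and $k/n$, the entropic problem is solved with Sinkhorn iterations, and the mass each score sends to target $1$ serves as a relaxed top-$k$ indicator. Gradients here come from simply unrolling the Sinkhorn loop, a simpler stand-in for the implicit gradient via the EOT optimality conditions described in the abstract; the function name `soft_top_k`, the cost, and hyperparameters are illustrative choices.

```python
import torch

def soft_top_k(scores, k, eps=0.1, n_iters=200):
    """Relaxed top-k indicators via entropic optimal transport (illustrative sketch).

    Transports n uniformly weighted scores onto two targets {0, 1}; the mass
    each score sends to target 1 is a smoothed indicator of top-k membership.
    Gradients are obtained by unrolling the Sinkhorn iterations.
    """
    n = scores.shape[0]
    # Rescale scores to [0, 1] so the squared-distance cost is well conditioned.
    s = (scores - scores.min()) / (scores.max() - scores.min() + 1e-8)
    targets = torch.tensor([0.0, 1.0], dtype=s.dtype)
    C = (s.unsqueeze(1) - targets.unsqueeze(0)) ** 2          # (n, 2) cost matrix
    mu = torch.full((n,), 1.0 / n, dtype=s.dtype)             # source marginal
    nu = torch.tensor([(n - k) / n, k / n], dtype=s.dtype)    # target marginal

    K = torch.exp(-C / eps)                                   # Gibbs kernel
    u = torch.ones(n, dtype=s.dtype)
    v = torch.ones(2, dtype=s.dtype)
    for _ in range(n_iters):                                  # Sinkhorn updates
        u = mu / (K @ v)
        v = nu / (K.t() @ u)
    plan = u.unsqueeze(1) * K * v.unsqueeze(0)                # transport plan (n, 2)
    return n * plan[:, 1]                                     # relaxed top-k indicator

# Usage: gradients flow from the relaxed indicators back to the input scores.
scores = torch.tensor([0.3, 2.0, -1.0, 0.7, 1.5], requires_grad=True)
indicators = soft_top_k(scores, k=2, eps=0.05)
indicators.sum().backward()
```

As the regularization $\varepsilon$ shrinks, the relaxed indicators approach the discrete top-$k$ indicator vector, at the cost of sharper (less informative) gradients; this trade-off is the usual one for entropic-regularization smoothing.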