Tighter bounds lead to improved classifiers

Published: 21 Jul 2022, Last Modified: 05 May 2023 · ICLR 2017 Poster · Readers: Everyone
Abstract: The standard approach to supervised classification involves the minimization of a log-loss as an upper bound to the classification error. While this is a tight bound early on in the optimization, it overemphasizes the influence of incorrectly classified examples far from the decision boundary. Updating the upper bound during the optimization leads to improved classification rates while transforming the learning into a sequence of minimization problems. In addition, in the context where the classifier is part of a larger system, this modification makes it possible to link the performance of the classifier to that of the whole system, allowing the seamless introduction of external constraints.
Conflicts: criteo.com
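
Note (not drawn from the paper itself): the standard upper bound the abstract refers to can be written, for a probabilistic classifier $p_\theta(y \mid x)$ with prediction $\hat y(x) = \arg\max_k p_\theta(k \mid x)$, as

\[ \mathbf{1}[\hat y(x) \neq y] \;\le\; -\log_2 p_\theta(y \mid x), \]

which holds because a misclassified example must have $p_\theta(y \mid x) \le 1/2$. The bound is tight when $p_\theta(y \mid x)$ is near $1/2$, as is typical early in training, but the right-hand side grows without limit for confidently misclassified examples, which is the overemphasis the abstract describes. The paper's specific scheme for re-tightening the bound during optimization is not reproduced here.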