Optimizing Black-box Metrics with Iterative Example Weighting
Abstract: We consider learning to optimize a classification metric defined by a black-box function of
the confusion matrix. Such black-box learning settings are ubiquitous, for example, when the
learner only has query access to the metric of interest, or in noisy-label and domain adaptation
applications where the learner must evaluate the metric via performance evaluation using a
small validation sample. Our approach is to adaptively learn example weights on the training
dataset such that the resulting weighted objective best approximates the metric on the validation
sample. We show how to model and estimate the example weights and use them to iteratively
post-shift a pre-trained class probability estimator to construct a classifier. We also analyze the
resulting procedure’s statistical properties. Experiments on various label noise, domain shift, and
fair classification setups confirm that our proposal is better than the individual state-of-the-art
baselines for each application.
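The following is a minimal, illustrative sketch of the iterative example-weighting idea under query access, not the paper's actual estimation procedure. It assumes a pre-trained class probability estimator `cpe` with a scikit-learn-style `predict_proba`, a validation set `(X_val, y_val)`, and a black-box `metric(y_true, y_pred)` returning a score to maximize; the per-class weights and the random coordinate-ascent update are hypothetical stand-ins for the paper's modeled weight estimates.

```python
# Illustrative sketch only: the weight-update rule below is a hypothetical
# hill-climbing stand-in, not the paper's weight-estimation algorithm.
import numpy as np

def post_shift(cpe, X, w):
    """Post-shift step: reweight class probabilities, then take the argmax."""
    proba = cpe.predict_proba(X)          # shape: (n_samples, n_classes)
    return np.argmax(proba * w, axis=1)   # w: per-class weights, shape (n_classes,)

def iterative_example_weighting(cpe, X_val, y_val, metric,
                                n_iters=50, step=0.1, seed=None):
    rng = np.random.default_rng(seed)
    n_classes = cpe.predict_proba(X_val[:1]).shape[1]
    best_w = np.ones(n_classes)           # start from uniform weights
    best_score = metric(y_val, post_shift(cpe, X_val, best_w))
    for _ in range(n_iters):
        # Perturb one weight coordinate; keep the change only if the
        # black-box metric improves on the validation sample.
        k = rng.integers(n_classes)
        w_try = best_w.copy()
        w_try[k] = max(w_try[k] + rng.choice([-step, step]), 1e-6)
        score = metric(y_val, post_shift(cpe, X_val, w_try))
        if score > best_score:
            best_w, best_score = w_try, score
    return best_w, best_score
```

The sketch conveys only the overall loop: query the metric on the validation sample, adjust weights, and post-shift the fixed probability estimator rather than retraining it.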