How not to Lie with a Benchmark: Rearranging NLP Leaderboards

Published: 18 Oct 2021, Last Modified: 05 May 2023 · ICBINB@NeurIPS2021 Poster
Keywords: benchmarking, NLP benchmarks, metrics, model evaluation
TL;DR: Averaging ML-benchmark results with the arithmetic mean is misleading. Let's use the geometric and harmonic means instead.
Abstract: Comparison with human performance is an essential requirement for a benchmark to be a reliable measure of model capabilities. Nevertheless, the standard method of model comparison has a fundamental flaw: the arithmetic mean of separate metrics is taken across tasks that differ in complexity and in the size of their test and training sets. In this paper, we examine the overall scoring methods of popular NLP benchmarks and rearrange the models by the geometric and harmonic means (which are appropriate for averaging rates), based on their reported results. We analyze several popular benchmarks, including GLUE, SuperGLUE, XGLUE, and XTREME. The analysis shows that, for example, the human level on SuperGLUE has still not been reached, and current models still have room for improvement.
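
To illustrate the re-scoring idea, here is a minimal sketch (not the paper's code; the per-task scores are invented for demonstration) that computes a benchmark's overall score with the arithmetic, geometric, and harmonic means and shows how the ranking of two hypothetical models can flip depending on the averaging method.

```python
# Minimal sketch: re-ranking models by different means over per-task scores.
# The scores below are hypothetical and only illustrate the effect; they are
# not taken from any leaderboard.
from statistics import mean, geometric_mean, harmonic_mean

# Per-task scores (e.g. accuracy in [0, 1]) for two hypothetical models.
models = {
    "model_A": [0.98, 0.96, 0.95, 0.50],  # strong overall, fails one task
    "model_B": [0.85, 0.84, 0.84, 0.83],  # uniformly decent on all tasks
}

for name, scores in models.items():
    print(
        f"{name}: arithmetic={mean(scores):.3f} "
        f"geometric={geometric_mean(scores):.3f} "
        f"harmonic={harmonic_mean(scores):.3f}"
    )

# The arithmetic mean ranks model_A first despite its failure on one task,
# while the geometric and harmonic means penalize the weak score more
# strongly and rank model_B higher.
```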
Category: Criticism of default practices: I would like to question a widespread practice in the community