Keywords: confidence interval, performance estimation, bootstrapping
TL;DR: A comparative evaluation and a new method for constructing confidence intervals of model predictive performance in the context of AutoML
Abstract: Any supervised machine learning analysis is required to provide an estimate of the out-of-sample predictive performance. However, it is imperative to also quantify the uncertainty of this estimate in the form of a confidence or credible interval (CI), not just a point estimate. In an AutoML setting, estimating the CI is challenging due to the ``winner's curse'', i.e., the optimistic bias induced by cross-validating several machine learning pipelines and selecting the winning one. In this work, we perform a comparative evaluation of nine state-of-the-art methods and variants for CI estimation in an AutoML setting on a corpus of real and simulated datasets. The methods are compared in terms of inclusion percentage (does a 95\% CI include the true performance at least 95\% of the time?), CI tightness (tighter CIs are preferable, as they are more informative), and execution time. This evaluation is the first to cover most, if not all, such methods, and it extends previous work to multi-class, imbalanced, and small-sample tasks. In addition, we present a variant, called BBC-F, of an existing method (the Bootstrap Bias Correction, or BBC) that maintains the statistical properties of BBC but is more computationally efficient. The results show that BBC-F and BBC dominate the other methods on all metrics measured. However, the results also point to open problems and challenges in producing accurate CIs of performance, particularly in the case of multi-class tasks.
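To make the winner's-curse correction described above concrete, here is a minimal sketch of the bootstrap-bias-correction idea in Python. It is not the authors' implementation; it assumes the pooled out-of-sample predictions of all candidate pipelines are available as a matrix, and the names (`bbc_confidence_interval`, `oos_preds`, `metric`) are illustrative only. The fast BBC-F variant proposed in the paper is not shown.

```python
import numpy as np

def bbc_confidence_interval(oos_preds, y, metric, B=1000, alpha=0.05, seed=None):
    """Sketch of Bootstrap Bias Correction (BBC) for a performance CI.

    oos_preds : (N, C) array of pooled out-of-sample predictions,
                one column per candidate pipeline/configuration.
    y         : (N,) array of true labels.
    metric    : callable(y_true, y_pred) -> float, higher is better.
    Returns a bias-corrected point estimate and a (1 - alpha) percentile CI.
    """
    rng = np.random.default_rng(seed)
    n, n_configs = oos_preds.shape
    scores = []
    for _ in range(B):
        boot = rng.integers(0, n, size=n)           # in-bootstrap indices
        out = np.setdiff1d(np.arange(n), boot)      # out-of-bootstrap indices
        if out.size == 0:
            continue
        # Mimic model selection: pick the winning configuration on the
        # bootstrap sample ...
        per_config = [metric(y[boot], oos_preds[boot, c]) for c in range(n_configs)]
        winner = int(np.argmax(per_config))
        # ... then score the winner on the held-out part, which corrects
        # the optimistic winner's-curse bias of the selection step.
        scores.append(metric(y[out], oos_preds[out, winner]))
    scores = np.asarray(scores)
    lo, hi = np.quantile(scores, [alpha / 2, 1 - alpha / 2])
    return scores.mean(), (lo, hi)
```

Because selection is re-run inside every bootstrap iteration, the resulting percentile interval accounts for the variability of choosing the winner, not just the variability of scoring it.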
Submission Checklist: Yes
Broader Impact Statement: Yes
Paper Availability And License: Yes
Code Of Conduct: Yes
Code And Dataset Supplement: zip
Optional Meta-Data For Green-AutoML:
CPU Hours: 600
GPU Hours: 0
TPU Hours: 0
Evaluation Metrics: No
Submission Number: 17