Leveraging Theoretical Tradeoffs in Hyperparameter Selection for Improved Empirical Performance

Published: 14 Jul 2021, Last Modified: 05 May 2023 (AutoML@ICML2021 Poster)
Keywords: HPO, approximate ERM
TL;DR: We leverage theoretical tradeoffs in HPO to create practical heuristics
Abstract: The tradeoffs in the excess risk incurred from data-driven learning of a single model have been studied by decomposing the excess risk into approximation, estimation and optimization errors. In this paper, we focus on the excess risk incurred in data-driven hyperparameter optimization (HPO) and its interaction with the approximate empirical risk minimization (ERM) necessitated by large data. We present novel bounds for the excess risk in various common HPO scenarios. Based on these results, we propose practical heuristics that improve the performance or reduce the computational overhead of data-driven HPO, demonstrating over $2\times$ speedup with no loss in predictive performance in our preliminary results.
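The decomposition the abstract refers to is the classical one from the single-model setting; a sketch in standard notation (the symbols below are chosen here for illustration and are not taken from the paper):

```latex
% Classical excess-risk decomposition (illustrative notation):
% R^* is the Bayes-optimal risk, f_H the best predictor in the
% hypothesis class H, f_n the exact empirical risk minimizer on
% n samples, and \tilde{f}_n the approximate ERM solution that
% is actually returned under a limited computational budget.
\mathcal{E}(\tilde{f}_n)
  = \underbrace{R(f_{\mathcal{H}}) - R^*}_{\text{approximation}}
  + \underbrace{R(f_n) - R(f_{\mathcal{H}})}_{\text{estimation}}
  + \underbrace{R(\tilde{f}_n) - R(f_n)}_{\text{optimization}}
```

The paper's contribution, per the abstract, is to bound the analogous excess risk when hyperparameters are themselves selected from data, and to exploit how the optimization-error term trades off against the others when ERM is only approximate.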
Ethics Statement: This work focuses on abstract statistical theory and the generic HPO problem and has no potential ethical issues.