Fast Benchmarking of Asynchronous Multi-Fidelity Optimization on Zero-Cost Benchmarks

Published: 30 Apr 2024, Last Modified: 19 Jul 2024 · AutoML 2024 (ABCD Track) · CC BY 4.0
Keywords: AutoML, Hyperparameter Optimization, Black-Box Optimization, Asynchronous Optimization, Multi-Fidelity Optimization, Benchmarking
TL;DR: The paper presents a plug-in algorithm, available as a Python package, that enables large-scale benchmarking of asynchronous hyperparameter optimizers and achieves over 1000x speedup compared to the vanilla approach.
Abstract: While deep learning has celebrated many successes, its results often hinge on the meticulous selection of hyperparameters (HPs). However, the time-consuming nature of deep learning training makes HP optimization (HPO) a costly endeavor, slowing down the development of efficient HPO tools. While zero-cost benchmarks, which provide performance and runtime without actual training, offer a solution for non-parallel setups, they fall short in parallel setups, as each worker must wait for its queried runtime so that evaluations are returned in the exact order. This work addresses this challenge by introducing a user-friendly Python package that facilitates efficient parallel HPO with zero-cost benchmarks. Our approach calculates the exact return order based on the information stored in the file system, eliminating the need for long waiting times and enabling much faster HPO evaluations. We first verify the correctness of our approach through extensive testing, and experiments with 6 popular HPO libraries demonstrate its applicability to diverse libraries and its ability to achieve over 1000x speedup compared to a traditional approach. Our package can be installed via pip install mfhpo-simulator.
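To make the abstract's claim concrete, the sketch below illustrates the general bookkeeping idea behind simulating asynchronous workers on a zero-cost benchmark: instead of letting each worker sleep for the benchmark's queried runtime, one tracks a simulated clock per worker and releases results in simulated-time order. All names here (simulate_parallel_hpo, toy_benchmark, etc.) are illustrative assumptions and not the mfhpo-simulator API; the sketch also ignores sampler feedback between returns, which the actual package handles.

```python
# Minimal sketch: compute the exact return order of asynchronous workers
# from tabulated (zero-cost) runtimes, without any real waiting.
import heapq
from typing import Callable, Iterable


def simulate_parallel_hpo(
    configs: Iterable[dict],
    benchmark: Callable[[dict], tuple[float, float]],  # returns (loss, runtime)
    n_workers: int = 4,
) -> list[tuple[float, dict, float]]:
    """Replay an asynchronous run: each worker picks up the next config as soon
    as its simulated clock frees up, so results come back in true finish order."""
    # Each heap entry is (time_when_worker_becomes_free, worker_id).
    workers = [(0.0, wid) for wid in range(n_workers)]
    heapq.heapify(workers)

    finished: list[tuple[float, dict, float]] = []  # (finish_time, config, loss)
    for config in configs:
        free_at, wid = heapq.heappop(workers)       # earliest-available worker
        loss, runtime = benchmark(config)           # zero-cost table lookup
        finish_time = free_at + runtime             # advance this worker's clock
        heapq.heappush(workers, (finish_time, wid))
        finished.append((finish_time, config, loss))

    finished.sort(key=lambda t: t[0])               # exact simulated return order
    return finished


if __name__ == "__main__":
    # Toy "zero-cost benchmark": deterministic loss/runtime derived from the config.
    def toy_benchmark(config: dict) -> tuple[float, float]:
        x = config["x"]
        return (x - 0.3) ** 2, 10.0 + 5.0 * x  # (loss, simulated runtime in seconds)

    results = simulate_parallel_hpo([{"x": i / 10} for i in range(10)], toy_benchmark)
    for finish_time, config, loss in results[:3]:
        print(f"t={finish_time:6.1f}s  x={config['x']:.1f}  loss={loss:.4f}")
```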
Submission Checklist: Yes
Broader Impact Statement: Yes
Paper Availability And License: Yes
Code Of Conduct: Yes
Optional Meta-Data For Green-AutoML:
CPU Hours: 12000
Evaluation Metrics: Yes
Submission Number: 12