A resource-efficient method for repeated HPO and NAS problems

Published: 14 Jul 2021, Last Modified: 05 May 2023
AutoML@ICML2021 Poster
Keywords: HPO, NAS, multi-armed bandits, resource-efficiency, transfer learning, online learning
TL;DR: A method for repeated HNAS that identifies optimal configurations while reducing CPU time by up to 50% when applied to a sequence of similar tuning tasks.
Abstract: In this work we consider the problem of repeated hyperparameter and neural architecture search (HNAS). We propose an extension of Successive Halving that is able to leverage information gained in previous HNAS problems with the goal of saving computational resources. We empirically demonstrate that our solution drastically decreases costs while maintaining accuracy and remaining robust to negative transfer. Our method is significantly simpler than competing transfer learning approaches, setting a new baseline for transfer learning in HNAS.
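To make the abstract concrete, below is a minimal sketch of classic Successive Halving together with a hypothetical warm-starting step that seeds the candidate pool with winners from a previous, similar tuning task. This illustrates the general idea of reusing information across repeated HNAS problems; the function names and the warm-start interface are illustrative assumptions, not the paper's actual method or API.

```python
def successive_halving(configs, evaluate, min_budget=1, eta=3):
    """Classic Successive Halving: evaluate all candidates at a small
    budget, keep the top 1/eta fraction, multiply the budget by eta,
    and repeat until a single configuration remains."""
    budget = min_budget
    while len(configs) > 1:
        # Score every surviving config at the current budget.
        scores = [(evaluate(c, budget), c) for c in configs]
        scores.sort(key=lambda s: s[0])  # lower loss is better
        keep = max(1, len(configs) // eta)
        configs = [c for _, c in scores[:keep]]
        budget *= eta
    return configs[0]


def warm_started_pool(sample_config, previous_winners, n=27):
    """Hypothetical warm start (illustrative, not the paper's method):
    seed the candidate pool with configurations that performed well on
    an earlier, similar task, then fill the rest with fresh samples."""
    pool = list(previous_winners)
    pool += [sample_config() for _ in range(n - len(pool))]
    return pool
```

For example, with a toy objective `evaluate = lambda c, b: abs(c - 0.3)`, running `successive_halving([0.1, 0.2, 0.3, 0.9], evaluate)` returns `0.3`; warm-starting simply biases the initial pool toward configurations that were strong on the previous task.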
Ethics Statement: We believe that an efficient use of resources will be beneficial for all humans and animals living on this planet. In this work we show that for a common task in industrial ML systems, such as the repeated tuning of the same model, it is possible to obtain the same predictive performance using significantly fewer resources (half in some cases). Using less CPU time not only means using less energy but also fewer resources to build servers, network components, data centers, etc. In an era where machine learning is becoming more and more popular, it is important to make sure its usage is as sustainable as possible.