Hyperparameter autotuning of programs with HybridTuner

Published: 01 Jan 2023 · Last Modified: 19 Jan 2025 · Ann. Math. Artif. Intell. 2023 · CC BY-SA 4.0
Abstract: To fully harness the capabilities of sophisticated computer architectures and computational implementations, algorithms must often be tailored to a specific architecture and application. However, the relationship between tuning parameters and performance is complicated and non-intuitive, with no explicit algebraic description. This is particularly true in domains such as GPU applications and compiler tuning, where the interactions between parameters and performance are discrete and often nonlinear. After assessing several alternative algorithmic configurations, we present two hybrid derivative-free optimization (DFO) approaches for maximizing the performance of an algorithm. We demonstrate our method on problems with up to 50 hyperparameters. When compared to state-of-the-art autotuners, our autotuner (a) reduces the execution time of dense matrix multiplication by a factor of 1.4x, (b) identifies high-quality tuning parameters with only 5% of the computational effort required by other autotuners, and (c) can be applied to any computer architecture. Our implementations of Bandit DFO and Hybrid DFO are publicly available at https://github.com/bsauk/HybridTuner.
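The abstract does not describe the algorithms in detail, but the following minimal sketch illustrates the general kind of two-phase derivative-free tuning loop such an autotuner performs: global random sampling over a discrete parameter space followed by greedy local descent from the best sample. This is a generic illustration, not HybridTuner's actual algorithm or API; the parameter names (`tile_m`, `tile_n`, `unroll`) and the synthetic objective are placeholders standing in for timing a real program.

```python
import random

# Hypothetical discrete tuning space (placeholder names, not HybridTuner's
# schema): e.g., tile sizes and an unroll factor for a matrix-multiply kernel.
SPACE = {
    "tile_m": [16, 32, 64, 128],
    "tile_n": [16, 32, 64, 128],
    "unroll": [1, 2, 4, 8],
}

def measure_runtime(cfg):
    """Black-box objective: in practice this would compile/run the target
    program with `cfg` and return its wall-clock time. A synthetic stand-in
    is used here so the sketch runs without a real benchmark."""
    return (abs(cfg["tile_m"] - 64) + abs(cfg["tile_n"] - 32)) / 100.0 \
        + 1.0 / cfg["unroll"]

def random_sample():
    return {k: random.choice(v) for k, v in SPACE.items()}

def neighbors(cfg):
    """Yield every configuration that moves exactly one parameter one step."""
    for key, values in SPACE.items():
        i = values.index(cfg[key])
        for j in (i - 1, i + 1):
            if 0 <= j < len(values):
                yield {**cfg, key: values[j]}

def hybrid_tune(global_samples=20, seed=0):
    """Two-phase derivative-free search: random sampling finds a promising
    region, then greedy descent over discrete neighbors refines it."""
    random.seed(seed)
    best = min((random_sample() for _ in range(global_samples)),
               key=measure_runtime)
    improved = True
    while improved:
        improved = False
        for cand in neighbors(best):
            if measure_runtime(cand) < measure_runtime(best):
                best, improved = cand, True
    return best

if __name__ == "__main__":
    cfg = hybrid_tune()
    print("best config:", cfg, "runtime:", measure_runtime(cfg))
```

Because the objective strictly decreases at every accepted move over a finite space, the local phase always terminates; in a real autotuner each `measure_runtime` call would be an expensive program execution, which is why minimizing the number of evaluations matters.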