Confirmation: our paper adheres to reproducibility best practices. In particular, we confirm that all important details required to reproduce results are described in the paper; the authors agree to the paper being made available online through OpenReview under a CC-BY 4.0 license (https://creativecommons.org/licenses/by/4.0/); and the authors have read and commit to adhering to the AutoML 2025 Code of Conduct (https://2025.automl.cc/code-of-conduct/).
Reproducibility: pdf
TL;DR: We analyse ordered transfer hyperparameter optimisation (OTHPO), a version of transfer learning for HPO where the tasks follow a sequential order.
Abstract: In many deployed settings, hyperparameters are retuned as more data are collected; for instance, tuning a sequence of movie recommendation systems as more movies and ratings are added. Despite this, transfer hyperparameter optimisation (HPO) has not been thoroughly analysed in this setting. We introduce ordered transfer hyperparameter optimisation (OTHPO), a version of transfer learning for HPO where the tasks follow a sequential order. Unlike in state-of-the-art transfer HPO, the assumption is that each task is most correlated with those immediately before it. We propose a formal definition and illustrate the key differences from standard transfer HPO approaches. We show how simple methods that take the order into account can outperform more sophisticated transfer methods by better tracking smooth shifts of the hyperparameter landscape. We provide ten benchmarks in the setting of gradually accumulating data, as well as a separate real-world motivated optimisation problem, and open source them to foster future research on ordered transfer HPO.
Submission Number: 16