Is Scaling Learned Optimizers Worth It? Evaluating The Value of VeLO's 4000 TPU Months

NeurIPS 2023 Workshop ICBINB, Submission 24 Authors

Published: 27 Oct 2023, Last Modified: 08 Mar 2024
Keywords: learning to optimize, training speedups, VeLO, benchmarking optimizers
TL;DR: An independent evaluation of VeLO, a "foundational" learned optimizer. Our findings contradict the original VeLO claims.
Abstract: We analyze VeLO (Versatile Learned Optimizer), the largest-scale attempt to date to train a general-purpose "foundational" optimizer. VeLO was trained on thousands of machine learning tasks over 4000 TPU months with the goal of producing an optimizer that generalizes to new problems, requires no hyper-parameter tuning, and outperforms industry standards such as Adam. We independently evaluate VeLO on the MLCommons optimizer benchmark suite. We find that, contrary to the initial claims: (1) VeLO has a critical hyper-parameter that needs problem-specific tuning, (2) VeLO does not necessarily outperform competitors in the quality of the solution found, and (3) VeLO is not faster than competing optimizers at reducing the training loss. These observations call into question VeLO's generality and the value of the investment in training it.
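Finding (1) can be made concrete through VeLO's public interface: the optimizer must be told the total number of training steps at construction time, so that step budget behaves like a hyper-parameter to sweep per problem. Below is a minimal sketch of such a sweep, assuming the prefab.optax_lopt entry point and the extra_args loss-passing convention from Google's learned_optimization repository; the toy quadratic task is ours, for illustration only.

import jax
import jax.numpy as jnp
import optax
from learned_optimization.research.general_lopt import prefab

def final_loss(num_steps: int) -> float:
    """Train a toy quadratic with VeLO configured for a given step budget."""
    params = jnp.ones(10)
    loss_fn = lambda p: jnp.sum(p ** 2)
    opt = prefab.optax_lopt(num_steps)  # step budget is fixed at construction
    state = opt.init(params)
    for _ in range(num_steps):
        loss, grads = jax.value_and_grad(loss_fn)(params)
        # VeLO's update consumes the current loss in addition to the gradients.
        updates, state = opt.update(grads, state, params=params,
                                    extra_args={"loss": loss})
        params = optax.apply_updates(params, updates)
    return float(loss_fn(params))

# Sweeping the declared budget per problem, like a learning-rate search:
for budget in (100, 1000, 10000):
    print(budget, final_loss(budget))

Rerunning such a sweep for every new problem is exactly the tuning cost that an optimizer billed as hyper-parameter-free is supposed to eliminate.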
Submission Number: 24