Simple and Effective Gradient-Based Tuning of Sequence-to-Sequence Models

Published: 16 May 2022, Last Modified: 05 May 2023 | AutoML 2022 (Late-Breaking Workshop)
Abstract: Recent trends towards training ever-larger language models have substantially improved machine learning performance across linguistic tasks. However, the huge cost of training larger models can make tuning them prohibitively expensive, motivating the study of more efficient methods. Gradient-based hyper-parameter optimization offers the capacity to tune hyper-parameters during training, yet has not previously been studied in a sequence-to-sequence setting. We apply a simple and general gradient-based hyper-parameter optimization method to sequence-to-sequence tasks for the first time, demonstrating both efficiency and performance gains over strong baselines for both Neural Machine Translation (NMT) and Natural Language Understanding (NLU) tasks (via T5 pretraining). For translation, we show that the method generalizes across language pairs, is more efficient than Bayesian hyper-parameter optimization, and that learned schedules for some hyper-parameters can outperform even optimal constant-valued tuning. For T5, we show that learning hyper-parameters during pretraining can improve performance across downstream NLU tasks. When learning multiple hyper-parameters concurrently, we show that the global learning rate can follow a schedule over training that improves performance and is not explainable by the 'short-horizon bias' of greedy methods (Wu et al., 2018). We release our code to facilitate further research.
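To make the idea of greedy gradient-based hyper-parameter optimization concrete, below is a minimal sketch (not the authors' released code, and not tied to their specific model or framework): the global learning rate is treated as a differentiable parameter, a single training step is unrolled, and the validation loss after that step is differentiated with respect to the (log) learning rate. All names, shapes, and the meta-optimizer settings here are hypothetical placeholders.

```python
# Hedged sketch of greedy (short-horizon) gradient-based HPO for a learning rate.
# This illustrates the general technique described in the abstract, not the
# authors' implementation; the toy model and random batches are placeholders.
import torch

model = torch.nn.Linear(8, 1)                     # stand-in for a seq2seq model
log_lr = torch.tensor(-4.0, requires_grad=True)   # learn the lr in log-space (keeps it positive)
meta_opt = torch.optim.SGD([log_lr], lr=1e-2)     # meta-optimizer over the hyper-parameter

def loss_fn(params, x, y):
    w, b = params
    return torch.nn.functional.mse_loss(x @ w.t() + b, y)

for step in range(100):
    x_tr, y_tr = torch.randn(32, 8), torch.randn(32, 1)  # placeholder train batch
    x_va, y_va = torch.randn(32, 8), torch.randn(32, 1)  # placeholder validation batch

    params = [model.weight, model.bias]
    train_loss = loss_fn(params, x_tr, y_tr)
    grads = torch.autograd.grad(train_loss, params, create_graph=True)

    # One differentiable SGD step using the learned learning rate.
    lr = log_lr.exp()
    stepped_params = [p - lr * g for p, g in zip(params, grads)]

    # Greedy objective: validation loss after that single unrolled step.
    val_loss = loss_fn(stepped_params, x_va, y_va)
    meta_opt.zero_grad()
    val_loss.backward()
    meta_opt.step()                               # update the learning rate itself

    # Commit the model update with the (now updated) learning rate.
    with torch.no_grad():
        for p, g in zip(params, grads):
            p -= log_lr.exp() * g
```

Because the hyper-parameter update only looks one step ahead, this greedy scheme is exactly the setting in which the short-horizon bias of Wu et al. (2018) would be expected to appear, which is why the abstract's observation about the learned learning-rate schedule is notable.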
Keywords: gradient-based HPO, NMT, T5, sequence-to-sequence, automated model tuning
One-sentence Summary: We apply a greedy gradient-based hyper-parameter optimization method to sequence-to-sequence models for Neural Machine Translation and T5 language-model pretraining, showing performance gains and yielding novel insights.
Reproducibility Checklist: Yes
Broader Impact Statement: Yes
Paper Availability And License: Yes
Code Of Conduct: Yes
Reviewers: Jared Lichtarge (lichtarge@google.com), Shankar Kumar (shankarkumar@google.com)