GOODFIT: A Deep Learning Optimizer Fine Tuned for Fine Tuning

23 Sept 2023 (modified: 25 Mar 2024), ICLR 2024 Conference Withdrawn Submission
Keywords: optimizer, finetuning
Abstract: We present GOODFIT, one of the first optimizers designed specifically to operate on converged models that must be incrementally fine-tuned on a new task or dataset. Unlike standard optimizers such as SGD or Adam, which make minimal assumptions because the model weights may be randomly initialized, GOODFIT exploits the additional structure of a converged model to regularize the optimization process for better results. GOODFIT uses a simple temporal gradient orthogonalization process to outperform traditional fine-tuning methods across a wide variety of settings, from long-tailed classification to large-scale motion prediction. And because GOODFIT is fully encapsulated within the logic of an optimizer, it can be dropped into any model training pipeline with minimal engineering effort. We believe a new class of fine-tuning optimizers like GOODFIT can help pave the way as fine-tuning and incremental training become increasingly prevalent in modern deep learning, and practitioners move further away from expensively training models from scratch.
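The abstract is the only technical description available here, so the mechanism it names, orthogonalizing the current gradient against a temporal history of past gradients, entirely inside the optimizer's step logic, can only be illustrated by a hedged sketch. Everything below is an assumption: the class name GoodFitSketch, the beta-decayed gradient history, and the projection-based orthogonalization are one plausible reading of "temporal gradient orthogonalization", not the authors' actual method.

```python
import torch
from torch.optim import Optimizer


class GoodFitSketch(Optimizer):
    """Hypothetical sketch of a GOODFIT-style optimizer (assumed, not from the paper):
    an SGD step whose gradient is first orthogonalized against an exponential
    moving average of past gradients, so the update avoids re-traversing
    directions the converged model has already descended."""

    def __init__(self, params, lr=1e-3, beta=0.9):
        defaults = dict(lr=lr, beta=beta)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            lr, beta = group["lr"], group["beta"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                g = p.grad
                state = self.state[p]
                if "hist" not in state:
                    state["hist"] = torch.zeros_like(g)
                h = state["hist"]
                # Project out the component of g parallel to the historical
                # gradient direction h (the assumed "temporal orthogonalization").
                denom = h.pow(2).sum()
                if denom > 0:
                    g = g - (g.mul(h).sum() / denom) * h
                # Decay the history toward the raw gradient, then step.
                h.mul_(beta).add_(p.grad, alpha=1 - beta)
                p.add_(g, alpha=-lr)
        return loss
```

Because the state lives entirely inside the optimizer, usage would match any torch.optim optimizer, e.g. opt = GoodFitSketch(model.parameters(), lr=1e-3), consistent with the abstract's claim that the method drops into existing training pipelines with minimal engineering effort.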
Supplementary Material: pdf
Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8276