Supplementary Material: pdf
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: efficient fine-tuning; large language models
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Fine-tuning is widely applied in natural language processing to adapt models for downstream tasks. However, as model sizes grow rapidly, fine-tuning the full model becomes computationally expensive. Conventional studies have mainly focused on parameter efficiency, but reducing the number of trainable parameters does not translate to less backward computation or a fine-tuning speedup: parameter-efficient fine-tuning still requires a complete backward pass back to the first layer to compute the required gradients. For example, Adapter reduces the trainable parameters by 275× but improves fine-tuning throughput by only 1.25×. To achieve a real improvement in training throughput, we propose LIFT: a layer-wise fine-tuning strategy that learns only one layer of the Transformer architecture at a time. This approach not only reduces the number of trainable parameters but also improves fine-tuning throughput. We thoroughly evaluate the effectiveness of LIFT on BERT, GPT, and LLaMA models. LIFT reduces fine-tuning memory by 3.7× and improves throughput by 2.1× to 2.6× compared to full fine-tuning, while maintaining model quality.
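
A minimal sketch of the layer-wise idea described above, assuming a PyTorch model whose Transformer blocks are exposed as a ModuleList; the round-robin layer schedule, the per-step optimizer rebuild, and the HuggingFace-style forward returning `.loss` are illustrative assumptions, not the authors' exact implementation:

```python
import torch
from torch import nn

def set_trainable_layer(model: nn.Module, layers: nn.ModuleList, idx: int):
    """Freeze every parameter, then unfreeze only the idx-th Transformer layer."""
    for p in model.parameters():
        p.requires_grad = False
    for p in layers[idx].parameters():
        p.requires_grad = True

def lift_finetune(model, layers, train_batches, num_epochs=1, lr=1e-4):
    for epoch in range(num_epochs):
        for step, batch in enumerate(train_batches):
            # Round-robin schedule (assumption): rotate the trainable layer every step.
            layer_idx = step % len(layers)
            set_trainable_layer(model, layers, layer_idx)
            # Optimize only the small trainable subset; rebuilding the optimizer each
            # step is a simplification that discards optimizer state between layers.
            optimizer = torch.optim.AdamW(
                (p for p in model.parameters() if p.requires_grad), lr=lr
            )
            loss = model(**batch).loss  # assumes a HuggingFace-style forward
            # Backward only needs to reach the selected layer, since no earlier
            # parameters require gradients; this is the source of the speedup.
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
```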
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3470