Amuro & Char: Analyzing the Relationship between Pre-Training and Fine-Tuning of Large Language Models
Abstract: Large language model development relies on the pre-train-then-align paradigm, in which a model is first pre-trained on a large text corpus and then undergoes a tuning stage that aligns it with human preferences or downstream tasks.
We investigate the relationship between pre-training and supervised fine-tuning by considering multiple tasks as well as different pre-trained model checkpoints. Our results on 18 datasets and two models suggest that i) although the model benefits significantly from supervised fine-tuning, it may forget previously acquired domain knowledge and tasks that are not seen during fine-tuning; ii) the model exhibits high sensitivity to evaluation prompts after supervised fine-tuning, but this sensitivity can be alleviated through further pre-training; iii) continual pre-training improves the model in a latent way that manifests only after fine-tuning; iv) the model can already solve some tasks after pre-training, while fine-tuning yields the largest gains on datasets where the model shows no such capability during pre-training.
Paper Type: Long
Research Area: Interpretability and Analysis of Models for NLP
Research Area Keywords: Fine-tuning, Pre-training, Instruction Tuning, Training Dynamics
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 618