Supervised Fine-Tuning of Large Language Models on Human Demonstrations Through the Lens of Memorization

Published: 28 Oct 2023, Last Modified: 26 Nov 2023, Instruction Workshop @ NeurIPS 2023
Keywords: supervised fine-tuning, large language models, memorization
Abstract: In recent years, the field of natural language processing (NLP) has witnessed remarkable advancements driven by the development of large language models (LLMs). Various techniques, such as instruction tuning, have emerged as crucial approaches for enhancing LLMs' adaptability to new tasks guided by instructional prompts. Meanwhile, the phenomenon of memorization within LLMs has garnered considerable attention. In this work, we delve into memorization within LLMs during supervised fine-tuning on human demonstrations and find a distinct pattern marked by initial memorization growth followed by stabilization, with different degrees of memorization observed across tasks. An intriguing observation is that an increase in validation perplexity, typically indicative of overfitting, does not result in lower generation quality. We probe deeper by examining the entropy derived from the LLM's output probabilities, uncovering a consistent trend of decreasing entropy throughout training under both nucleus sampling and teacher forcing. This implies that the LLM grows increasingly confident in its generations, even as those generations may deviate from the expected ground truth. Building on this investigation, we propose a novel Memorization-Based Curriculum (MBC) learning approach. We leverage likelihood as a proxy for measuring memorization and use it to construct a data distribution for sampling instances with replacement during supervised fine-tuning, emphasizing data with lower degrees of memorization. Evaluations using GPT-4 as a judge demonstrate the effectiveness of MBC in fine-tuning LLMs on human demonstrations.
Submission Number: 76
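
The abstract does not spell out the exact weighting scheme, so the MBC sampling step can be illustrated with a minimal, hypothetical PyTorch sketch. It assumes a Hugging Face-style causal LM interface (`model(input_ids=...).logits`), uses the per-example mean token log-likelihood as the memorization proxy, and builds the sampling distribution via a softmax over negative likelihood with a temperature `tau`; this weighting is one plausible choice, not necessarily the paper's.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_token_loglik(model, input_ids, labels):
    # Per-example mean token log-likelihood: the memorization proxy.
    logits = model(input_ids=input_ids).logits[:, :-1, :]  # next-token predictions
    targets = labels[:, 1:]
    mask = (targets != -100).float()  # ignore padding / prompt tokens
    logp = F.log_softmax(logits, dim=-1)
    tok_ll = logp.gather(-1, targets.clamp(min=0).unsqueeze(-1)).squeeze(-1)
    return (tok_ll * mask).sum(-1) / mask.sum(-1).clamp(min=1.0)

def mbc_weights(logliks, tau=1.0):
    # Softmax over negative likelihood: lower-likelihood (less memorized)
    # examples receive higher sampling probability; tau controls sharpness.
    return F.softmax(-logliks / tau, dim=0)

# Hypothetical usage: scores would be computed over the whole dataset
# with mean_token_loglik, then instances are drawn with replacement.
loglik_per_example = torch.tensor([-0.4, -1.7, -3.2, -0.9])
weights = mbc_weights(loglik_per_example, tau=1.0)
batch_indices = torch.multinomial(weights, num_samples=8, replacement=True)
```

Sampling with replacement from `weights` biases each fine-tuning pass toward less-memorized examples, matching the curriculum the abstract describes.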