Towards Understanding Continual Factual Knowledge Acquisition of Language Models: From Theory to Algorithm
Keywords: language models, factual knowledge acquisition, continual pretraining
TL;DR: We analyze the parameter evolution of language models during continual pretraining and interpret their factual learning and forgetting behavior.
Abstract: Continual Pre-Training (CPT) is essential for enabling Language Models (LMs) to integrate new factual knowledge without erasing old knowledge.
While classical CPT techniques such as data replay have become the standard paradigm, the mechanisms by which LMs acquire and retain facts over time, termed continual Factual Knowledge Acquisition (cFKA), remain unclear.
In this work, we present a theoretical framework that characterizes the training dynamics of cFKA using a single-layer Transformer with linear attention, offering a unified explanation for the behavior of popular CPT methods.
Our analysis reveals that regularization-based methods merely adjust the convergence rate of parameters without altering the inherent forgetting tendency, whereas data replay methods shift convergence dynamics and stabilize pretrained knowledge.
Building on these insights, we propose a novel generative data replay approach, called **S**electing **T**okens via attenti**O**n **C**ontribution (STOC), which identifies influential factual snippets to guide replay generation.
Extensive experiments on both synthetic and real-world datasets validate our theoretical findings and demonstrate that STOC effectively enhances cFKA by mitigating catastrophic forgetting.
Primary Area: foundation or frontier models, including LLMs
Submission Number: 3910