TL;DR: We propose a method to disentangle sequence memorization and general language model capabilities during pretraining.
Abstract: Large language models are susceptible to memorizing repeated sequences, posing privacy and copyright concerns. A popular mitigation strategy is to remove memorized information from specific neurons post-hoc. However, such approaches have shown limited success so far. In a controlled setting, we show that the memorization of *natural* sequences (those that resemble linguistically plausible text) becomes *mechanistically entangled* with general language abilities, and is therefore challenging to remove post-hoc. In this work, we put forward a new paradigm of MemSinks that promotes isolation of memorization by design. We leverage a sequence identifier to activate a unique set of memorization neurons for each sequence across repetitions. By analyzing the dynamics of learning and forgetting, we argue that MemSinks facilitates clean isolation of memorized content, making it easier to remove without compromising general language capabilities. We implement MemSinks at the billion-parameter and billion-token scale, and observe both effective isolation and strong generalization. To our knowledge, this is the first proof-of-concept on real data demonstrating that simultaneous generalization and isolation is achievable. We open-source our code at http://github.com/grghosal/MemSinks.
Lay Summary: In this paper, we examine how to train large language models (LLMs) so that it is easier to remove memorized information from them. While existing research has studied many post-hoc approaches for removing memorization, these methods often harm the model's general capabilities. We demonstrate that this is particularly the case when the memorized sequences are similar to the rest of the training data. Our analysis suggests that in these cases, memorization is often deeply entangled with the model's general abilities. To overcome this challenge, we introduce Memorization Sinks (MemSinks), which set aside a dedicated part of the model to store memorized information. We demonstrate that MemSinks (a) achieves strong general capabilities and (b) makes it significantly easier to remove memorization after training.
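To make the core idea concrete, below is a minimal, hypothetical sketch (in PyTorch) of how a sequence identifier might gate a reserved slice of feed-forward neurons so that memorized content accumulates in "sink" neurons that can be dropped after training. The module name `MemSinkMLP` and the parameters `d_hidden`, `n_sink`, and `sinks_per_seq` are illustrative assumptions, not the paper's actual implementation; see the linked repository for the authors' code.

```python
# Hypothetical sketch of the MemSinks idea: each training sequence's identifier
# deterministically activates a small subset of reserved "sink" neurons, which
# can all be zeroed out post-hoc to remove memorized content.
import hashlib
import torch
import torch.nn as nn


class MemSinkMLP(nn.Module):
    """Feed-forward block whose last `n_sink` hidden neurons are memorization sinks."""

    def __init__(self, d_model=512, d_hidden=2048, n_sink=256, sinks_per_seq=32):
        super().__init__()
        self.up = nn.Linear(d_model, d_hidden)
        self.down = nn.Linear(d_hidden, d_model)
        self.d_hidden = d_hidden
        self.n_sink = n_sink                # neurons reserved for memorization
        self.sinks_per_seq = sinks_per_seq  # sinks activated per sequence identifier

    def sink_mask(self, seq_id: str) -> torch.Tensor:
        """Map a sequence identifier to a fixed subset of active sink neurons."""
        seed = int(hashlib.sha256(seq_id.encode()).hexdigest(), 16) % (2**31)
        gen = torch.Generator().manual_seed(seed)
        chosen = torch.randperm(self.n_sink, generator=gen)[: self.sinks_per_seq]
        mask = torch.ones(self.d_hidden)
        mask[self.d_hidden - self.n_sink:] = 0.0              # all sinks off by default
        mask[self.d_hidden - self.n_sink + chosen] = 1.0      # enable this sequence's sinks
        return mask

    def forward(self, x, seq_id=None, drop_sinks=False):
        h = torch.relu(self.up(x))
        if drop_sinks:
            # Post-hoc removal: zero out every sink neuron, keeping general neurons intact.
            mask = torch.ones(self.d_hidden, device=h.device)
            mask[self.d_hidden - self.n_sink:] = 0.0
            h = h * mask
        elif seq_id is not None:
            # During pretraining: only this sequence's sink neurons stay active,
            # so repeated sequences write into the same isolated neurons each time.
            h = h * self.sink_mask(seq_id).to(h.device)
        return self.down(h)
```

Because the same identifier selects the same sink neurons on every repetition of a sequence, memorization is steered into a known, removable part of the model rather than spreading through the neurons that support general language ability.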
Link To Code: http://github.com/grghosal/MemSinks
Primary Area: Deep Learning->Large Language Models
Keywords: Memorization, Localization, Unlearning
Submission Number: 13672