Beyond Data Filtering: Knowledge Localization for Capability Removal in LLMs

ICLR 2026 Conference Submission 18099 Authors

19 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: machine unlearning, knowledge localization, gradient routing, capability removal, ai safety, llms, large language models
TL;DR: We localize unwanted LLM capabilities to specific parameters during training to later remove them.
Abstract: Large Language Models increasingly possess capabilities that carry dual-use risks. While data filtering has emerged as a popular pretraining-time mitigation, it faces significant challenges: labeling whether data is harmful is expensive at scale, and given the improving sample efficiency of larger models, even small amounts of mislabeled content could give rise to dangerous capabilities. To address the risks associated with mislabeled harmful content, prior work proposed Gradient Routing (Cloud et al., 2024), a technique that localizes target knowledge into a dedicated subset of model parameters so that it can later be removed. We explore an improved variant of Gradient Routing, which we call Selective GradienT Masking (SGTM), with particular focus on evaluating its robustness to label noise. SGTM zero-masks selected gradients so that target-domain examples update only their dedicated parameters. We test SGTM's effectiveness in two applications: removing knowledge of a language from a model trained on a bilingual synthetic dataset, and removing biology knowledge from a model trained on English Wikipedia. In both cases SGTM provides a better retain/forget trade-off in the presence of labeling errors than both data filtering and a previously proposed instantiation of Gradient Routing. Unlike shallow unlearning approaches that can be quickly undone through fine-tuning, SGTM exhibits strong robustness to adversarial fine-tuning, requiring 7 times more fine-tuning steps to reach baseline performance on the forget set than a traditional unlearning method (RMU). Our results suggest SGTM provides a promising pretraining-time complement to existing safety mitigations, particularly in settings where label noise is unavoidable.
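To make the masking rule concrete, below is a minimal PyTorch-style sketch of one SGTM training step and the subsequent removal step, written from the abstract alone. The parameter set `DEDICATED` and the choice to let retain batches update all parameters are illustrative assumptions; the paper's actual parameter selection and retain-batch handling may differ.

```python
import torch

# Hypothetical set of parameter names dedicated to the target (forget) domain.
# Which parameters are dedicated is an assumption here; the paper defines its own choice.
DEDICATED = {"dedicated_mlp.weight", "dedicated_mlp.bias"}

def sgtm_step(model, optimizer, loss, is_target_batch):
    """One SGTM training step: target-domain batches update only dedicated parameters."""
    optimizer.zero_grad()
    loss.backward()
    if is_target_batch:
        # Zero-mask gradients everywhere except the dedicated subset,
        # routing target-domain knowledge into those parameters.
        for name, p in model.named_parameters():
            if name not in DEDICATED and p.grad is not None:
                p.grad.zero_()
    # Retain batches fall through with unmasked gradients (an assumption of this sketch).
    optimizer.step()

@torch.no_grad()
def remove_capability(model):
    """After training, ablate the localized capability by zeroing the dedicated parameters."""
    for name, p in model.named_parameters():
        if name in DEDICATED:
            p.zero_()
```

Because gradients from target-domain examples never touch parameters outside `DEDICATED`, zeroing that subset after training removes the localized knowledge without retraining, which is what distinguishes this pretraining-time approach from post-hoc unlearning methods such as RMU.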
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 18099