Keywords: Large Language Models (LLMs), Machine Unlearning
TL;DR: Syntactic overlap drives benign relearning; syntactic diversification mitigates it for robust unlearning.
Abstract: Machine unlearning aims to remove specific content from trained models while preserving overall performance.
However, the phenomenon of benign relearning, in which forgotten information reemerges even from benign fine-tuning data, reveals that existing unlearning methods remain fundamentally fragile.
A common explanation attributes this effect to topical relevance, but we find this account insufficient.
Through systematic analysis, we demonstrate that syntactic similarity, rather than topicality, is the primary driver: across benchmarks, syntactically similar data consistently trigger recovery even without topical overlap, due to their alignment in representations and gradients with the forgotten content.
Motivated by this insight, we introduce syntactic diversification, which paraphrases the original forget queries into heterogeneous structures prior to unlearning.
This approach effectively suppresses benign relearning, accelerates forgetting, and substantially alleviates the trade-off between unlearning efficacy and model utility.
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 369