Keywords: LLM unlearning, circuit discovery, conjunctive normal form, interpretability
TL;DR: We use circuit discovery and CNF satisfiability solving to localize forget neurons and retain neurons for the LLM unlearning task.
Abstract: LLM unlearning aims to eliminate the influence of undesirable data without affecting causally unrelated information.
This process typically involves using a **forget set** to remove target information, alongside a **retain set** to maintain non-target capabilities. While recent localization-based methods show promise in identifying the important neurons to be unlearned, they fail to disentangle neurons responsible for forgetting undesirable knowledge from those responsible for retaining essential skills, often treating them as a single entangled group. As a result, these methods apply uniform interventions, risking catastrophic over-forgetting or incomplete erasure of the target knowledge. To address this, we turn to circuit discovery, a mechanistic interpretability technique, and propose the **C**onflict-guided **L**ocalization for LLM **U**nlearning fram**E**work (**CLUE**). This framework identifies forget and retain circuits composed of important neurons, and then transforms these circuits into conjunctive normal form (CNF). The assignment of each neuron in a satisfying solution of the CNF reveals whether it should be forgotten or retained. We then provide targeted fine-tuning strategies for the different categories of neurons. Extensive experiments demonstrate that, compared to existing localization methods, CLUE achieves superior forget efficacy and retain utility through precise neural localization.
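To make the CNF step concrete, here is a minimal, hypothetical sketch of how a satisfying assignment can label neurons. Every detail below (the toy clauses, the DIMACS-style literal encoding, the brute-force solver) is an invented illustration, not the paper's actual encoding or solver; CLUE's real clauses are derived from the discovered forget and retain circuits.

```python
from itertools import product

def solve_cnf(num_vars, clauses):
    """Brute-force SAT sketch. Each clause is a list of nonzero ints
    (DIMACS-style): +i means variable i is True, -i means False
    (1-indexed). Returns the first satisfying assignment, or None."""
    for bits in product([False, True], repeat=num_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None

# Toy example: variable i is True iff neuron i should be retained.
# Neuron 1 appears only in the forget circuit, neuron 3 only in the
# retain circuit, and neuron 2 lies in both (the conflict to resolve).
clauses = [
    [-1],      # neuron 1 must be forgotten
    [3],       # neuron 3 must be retained
    [-2, 3],   # if neuron 2 is retained, neuron 3 must be too
]
assignment = solve_cnf(3, clauses)
labels = ["retain" if v else "forget" for v in assignment]
# labels → ['forget', 'forget', 'retain'] for the first solution found
```

A real implementation would hand the clauses to an off-the-shelf SAT solver rather than enumerate all 2^n assignments; the point here is only that the truth value assigned to each neuron directly yields its forget/retain label.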
Primary Area: foundation or frontier models, including LLMs
Submission Number: 188