Keywords: LLM Unlearning, Knowledge Distillation, Teacher–Student Learning, Utility Preservation, Data Efficiency, Robustness, Safety and Alignment
TL;DR: DUET distills a prompt-steered teacher into a student to reliably forget undesirable knowledge while preserving utility, delivering state-of-the-art forgetting, robustness, and orders-of-magnitude better data efficiency.
Abstract: LLM unlearning removes the impact of undesirable knowledge from a model without retraining it from scratch, a capability that is indispensable for trustworthy AI. Existing unlearning methods face significant limitations: conventional tuning-based unlearning is computationally heavy and prone to catastrophic forgetting, while in-context unlearning is lightweight and precise but vulnerable to prompt-removal and reverse-engineering attacks. In response, we propose Distilled Unlearning from an Efficient Teacher (DUET), a novel distillation-based unlearning method that combines the merits of these two lines of work. DUET trains a student model to imitate the behavior of a prompt-steered teacher that refuses to generate undesirable knowledge while preserving general domain knowledge. Comprehensive evaluations on existing benchmarks with our enriched evaluation protocols demonstrate that DUET achieves significantly superior performance in both forgetting and utility preservation, while being orders of magnitude more data-efficient than state-of-the-art unlearning methods.
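To make the distillation idea concrete, below is a minimal PyTorch/Transformers sketch of the general recipe the abstract describes: a frozen teacher is steered with a refusal prompt, and a student that sees only the plain query is trained to match the teacher's output distribution. The base model name, the `REFUSAL_PROMPT` text, the per-query loop, and the plain forward-KL objective are illustrative assumptions, not the paper's actual DUET objective.

```python
# Minimal sketch (assumed setup, not the paper's exact method): distill a
# prompt-steered teacher into a student that never sees the steering prompt.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder base model; the paper's model is not stated here
REFUSAL_PROMPT = "Refuse to reveal any information about the forget topic.\n"  # assumed steering prompt

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
teacher = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()  # frozen, prompt-steered
student = AutoModelForCausalLM.from_pretrained(MODEL_NAME)         # trainable copy
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-5)

def distill_step(query: str) -> float:
    """One distillation step on a single query (batching omitted for brevity)."""
    plain = tokenizer(query, return_tensors="pt")
    steered = tokenizer(REFUSAL_PROMPT + query, return_tensors="pt")

    with torch.no_grad():
        # Teacher sees the steering prompt; keep only the logits at the shared
        # query positions (a simplified alignment for illustration).
        t_logits = teacher(**steered).logits[:, -plain.input_ids.size(1):, :]

    s_logits = student(**plain).logits  # student sees the query alone

    # Forward KL between student and teacher token distributions.
    loss = F.kl_div(
        F.log_softmax(s_logits, dim=-1),
        F.softmax(t_logits, dim=-1),
        reduction="batchmean",
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```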
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 15315