UnRe: Zero-Shot LLM Unlearning via Dynamic Contextual Retrieval

ICLR 2026 Conference Submission 21441 Authors

19 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: LLM, Machine Unlearning, Retrieval Augmented Generation, Inference, Privacy
TL;DR: This paper proposes UnRe, a novel unlearning framework for LLMs that employs dynamic contextual retrieval from retrieval-augmented generation (RAG) while leveraging only the forget data.
Abstract: Inference-time machine unlearning with only the forget data, also known as zero-shot unlearning, is becoming increasingly important for bias mitigation, privacy preservation, copyright protection, and related goals. Most approaches in this domain have focused on query updating, decoder modification, offline module training, or reverse generation from the forget data. Recent work has found that providing offline-prepared contexts can realize in-context unlearning. However, leveraging dynamic context (conditioned on real-time queries) to achieve zero-shot unlearning has not yet been explored, even though it has the potential to enforce unlearning via context while preserving the performance of the original LLM. In this paper, we propose UnRe, a novel unlearning framework for LLMs that employs dynamic contextual retrieval from retrieval-augmented generation (RAG) while leveraging only the forget data. Specifically, UnRe dynamically updates contexts to guide the unlearning process in a zero-shot setting. During inference, the user query is first used for online membership inference to identify a query-specific forget set. Using this set, UnRe refines the embeddings of the retrieved chunks via gradient descent, producing adaptive contexts that steer the LLM toward a query-specific unlearned distribution. We evaluate UnRe on multiple unlearning benchmarks and show that it not only outperforms existing zero-shot and context-based unlearning approaches, but also preserves the original model's performance.
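The inference-time pipeline described in the abstract (online membership inference, context retrieval, gradient-based refinement of context embeddings, then generation) can be illustrated with a minimal toy sketch. Everything below is assumed for illustration only: the embedding function, the ToyLM stand-in, the similarity threshold, and the objective (raising a pseudo negative log-likelihood on matched forget answers) are hypothetical placeholders, not the authors' implementation.

```python
# Hypothetical sketch of an UnRe-style inference step (not the paper's code).
# Assumed pieces: a text encoder, a forget set of (question, answer) pairs,
# retrieved context chunks, and an LLM that accepts soft context embeddings.
import torch
import torch.nn.functional as F

DIM = 64

def embed(text: str) -> torch.Tensor:
    """Toy deterministic text embedding; stands in for a real encoder."""
    g = torch.Generator().manual_seed(hash(text) % (2 ** 31))
    return F.normalize(torch.randn(DIM, generator=g), dim=0)

class ToyLM(torch.nn.Module):
    """Stand-in LLM: scores an answer given a query and a soft context vector."""
    def __init__(self):
        super().__init__()
        self.proj = torch.nn.Linear(3 * DIM, 1)

    def nll(self, context, query, answer):
        logit = self.proj(torch.cat([context, query, answer]))
        return F.softplus(-logit).squeeze()  # pseudo negative log-likelihood

def unre_infer(query, forget_set, retrieved_chunks, lm, steps=20, lr=0.1, tau=0.3):
    q = embed(query)
    # 1) Online membership inference: keep forget items similar to the query.
    query_forget = [(fq, fa) for fq, fa in forget_set if torch.dot(q, embed(fq)) > tau]
    if not query_forget:
        return None  # query unrelated to forget data: fall back to the original LLM
    # 2) Initialise a soft context from the retrieved chunks.
    ctx = torch.stack([embed(c) for c in retrieved_chunks]).mean(0).clone().requires_grad_(True)
    opt = torch.optim.Adam([ctx], lr=lr)
    # 3) Refine the context so the LM assigns low likelihood to the forget answers
    #    (gradient descent on the negated NLL, i.e. NLL is pushed up).
    for _ in range(steps):
        loss = -torch.stack([lm.nll(ctx, q, embed(fa)) for _, fa in query_forget]).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return ctx.detach()  # 4) adapted context to condition the LLM's generation on

forget_set = [("Who is Alice Smith?", "Alice Smith is a fictional author."),
              ("Where does Alice Smith live?", "She lives in Springfield.")]
chunks = ["Alice Smith wrote several novels.", "Springfield is a small town."]
ctx = unre_infer("Tell me about Alice Smith.", forget_set, chunks, ToyLM())
```

In this sketch the retrieved chunks are averaged into a single soft context vector for brevity; the paper instead refines the embeddings of the individual retrieved chunks, and the concrete unlearning objective is specified there rather than here.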
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 21441