GURU: Guided Unlearning via Residual Uncertainty in Regression Models

ICLR 2026 Conference Submission 17272 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Machine Unlearning, Machine Learning, Regression Models, Security and Privacy
TL;DR: We propose GURU, the first distillation-based method for regression unlearning, built on a residual-based Gaussian KL objective. Evaluated with our new GRUM metric, GURU balances efficacy, utility, and efficiency across computer vision and natural language processing tasks.
Abstract: Machine Unlearning (MU) addresses the fundamental requirement of removing the influence of specific data samples from trained models, a need that arises in privacy-sensitive applications where individuals can request the deletion of their personal information in accordance with the "right to be forgotten" principle. While unlearning has been actively studied in classification domains (computer vision, language modeling, speech), its extension to regression remains underexplored both methodologically and theoretically. In this work, we redefine unlearning and its evaluation in regression by building on recent advances in the broader field of MU. We propose GURU (Guided Unlearning via Residual Uncertainty), the first distillation-based method for regression unlearning. GURU derives predictive uncertainty directly from residual errors by scaling residuals with a confidence factor, thereby defining a Gaussian predictive distribution that captures both the model's prediction and its confidence and provides a probabilistic view of regression outputs. This formulation enables a closed-form Kullback–Leibler divergence objective between the teacher models and the student. We validate GURU on four regression datasets spanning vision and language, and show that it achieves competitive performance in terms of efficacy (removal of target information), utility (preservation of retained knowledge), and efficiency (computational cost). In addition, we propose GRUM, a regression-aware extension of the Global Unlearning Metric (GUM) that jointly accounts for all three of these principles.
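As an illustration of the core idea described above, the sketch below turns a regressor's residuals into a Gaussian predictive distribution and computes the standard closed-form KL divergence between the teacher's and student's distributions. The function name residual_gaussian_kl, the confidence argument, and the specific scaling rule sigma = confidence * |residual| are hypothetical choices for this sketch; the paper's exact formulation may differ.

```python
import torch


def residual_gaussian_kl(y_true, student_pred, teacher_pred, confidence=1.0, eps=1e-6):
    """Closed-form KL between residual-based Gaussian predictive distributions.

    Each model's output is treated as the mean of a Gaussian whose standard
    deviation is its absolute residual scaled by a confidence factor.
    NOTE: this scaling rule is an illustrative assumption, not the paper's
    exact definition.
    """
    # Means of the two predictive Gaussians
    mu_t, mu_s = teacher_pred, student_pred

    # Residual-derived standard deviations (eps avoids zero variance)
    sigma_t = confidence * (teacher_pred - y_true).abs() + eps
    sigma_s = confidence * (student_pred - y_true).abs() + eps

    # KL( N(mu_t, sigma_t^2) || N(mu_s, sigma_s^2) ), closed form for Gaussians
    kl = (torch.log(sigma_s / sigma_t)
          + (sigma_t.pow(2) + (mu_t - mu_s).pow(2)) / (2.0 * sigma_s.pow(2))
          - 0.5)
    return kl.mean()
```

In a distillation-style unlearning loop, such a term could be minimized against a "retain" teacher on data to be kept and used in reverse (or against a "forget" teacher) on data to be removed; the precise pairing of teachers and data splits in GURU is described in the paper, not here.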
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 17272