LoTUS: Large-Scale Machine Unlearning with a Taste of Uncertainty

Published: 23 Jun 2025 · Last Modified: 23 Jun 2025 · Greeks in AI 2025 Oral · CC BY 4.0
Keywords: Vision, Machine Learning
TL;DR: This abstract has been accepted for publication at CVPR 2025 (https://cvpr.thecvf.com/virtual/2025/poster/33292).
Abstract: This paper, accepted at CVPR 2025, presents LoTUS, a novel Machine Unlearning (MU) method that eliminates the influence of training samples from pre-trained models, avoiding retraining from scratch. LoTUS smooths the prediction probabilities of the model up to an information-theoretic bound, mitigating the over-confidence that stems from data memorization. We evaluate LoTUS on Transformer and ResNet18 models against eight baselines across five public datasets. Beyond established MU benchmarks, we evaluate unlearning on ImageNet1k, a large-scale dataset on which retraining is impractical, thereby simulating real-world conditions. Moreover, we introduce the Retrain-Free Jensen-Shannon Divergence (RF-JSD), a new metric that enables evaluation under such real-world conditions. The experimental results show that LoTUS outperforms state-of-the-art methods in terms of both efficiency and effectiveness. Code: https://github.com/cspartalis/LoTUS.
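The exact definition of RF-JSD is given in the full paper; as a rough illustration only, the sketch below computes a Jensen-Shannon divergence between the prediction probabilities of two models (e.g., an unlearned model versus a reference model). The function names, the synthetic inputs, and the per-class averaging scheme are assumptions made for this example, not the paper's formulation.

```python
import numpy as np

def js_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """Jensen-Shannon divergence between two discrete distributions (in nats)."""
    p = p / p.sum()
    q = q / q.sum()
    m = 0.5 * (p + q)  # mixture distribution
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Illustrative use: compare mean class-probability vectors of two models.
# `probs_unlearned` / `probs_reference` stand in for (num_samples, num_classes)
# softmax outputs; random Dirichlet draws are placeholders for real predictions.
probs_unlearned = np.random.dirichlet(np.ones(10), size=100)
probs_reference = np.random.dirichlet(np.ones(10), size=100)
score = js_divergence(probs_unlearned.mean(axis=0),
                      probs_reference.mean(axis=0))
print(f"JS divergence: {score:.4f}")
```

A lower divergence between the unlearned model's outputs and the reference distribution would indicate behavior closer to the desired post-unlearning state; crucially, no retrained-from-scratch model is needed as the reference, which is what makes a retrain-free metric practical at ImageNet1k scale.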
Submission Number: 3