PDUDT: Provable Decentralized Unlearning under Dynamic Topologies

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: This paper investigates decentralized unlearning, which aims to eliminate the impact of a specific client on the whole decentralized system. However, the characteristics of decentralized communication pose new challenges for effective unlearning: indirect connections make it difficult to trace the specific client's impact, while the dynamic topology limits the scalability of retraining-based unlearning methods. In this paper, we propose the first **P**rovable **D**ecentralized **U**nlearning algorithm under **D**ynamic **T**opologies, called PDUDT. It allows clients to eliminate the influence of a specific client without additional communication or retraining. We provide rigorous theoretical guarantees for PDUDT, showing that it is statistically indistinguishable from perturbed retraining. In addition, it achieves an efficient convergence rate of $\mathcal{O}(\frac{1}{T})$ in subsequent learning, where $T$ is the total number of communication rounds; this rate matches state-of-the-art results. Experimental results show that, compared with the Retrain method, PDUDT saves more than 99\% of unlearning time while achieving comparable unlearning performance.
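The "statistically indistinguishable from perturbed retraining" guarantee is, in the certified-unlearning literature, typically formalized as an $(\epsilon,\delta)$-indistinguishability condition; a standard form is sketched below for orientation (the exact formulation used by PDUDT may differ, and the symbols $\mathcal{A}$, $\mathcal{U}$, $\epsilon$, $\delta$ are illustrative rather than taken from the paper):

$$
\Pr\!\big[\mathcal{U}(\mathcal{A}(D), D, D_u) \in \mathcal{S}\big] \;\le\; e^{\epsilon}\,\Pr\!\big[\mathcal{A}(D \setminus D_u) \in \mathcal{S}\big] + \delta,
$$

together with the symmetric inequality obtained by swapping the two probabilities, for every measurable set of models $\mathcal{S}$. Here $\mathcal{A}$ denotes (perturbed) retraining on the retained data, $\mathcal{U}$ the unlearning procedure, $D$ the full training data, and $D_u$ the data of the client to be forgotten.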
Lay Summary: We study how to “unlearn” a specific participant's data from a fully decentralized learning system without the heavy cost of retraining or extra communication. In decentralized training, devices exchange model updates over changing networks, which makes it hard to pinpoint and remove one client's influence once learning is complete. Our solution, called PDUDT, lets every node erase a target client's contribution simply by tuning its own updates: no extra communication or full replay of past training states is needed. We prove that PDUDT's outcome is statistically equivalent to that of a perturbed retraining method, giving strong guarantees that the undesired influence is truly removed. After unlearning, PDUDT converges quickly in subsequent training. In experiments, PDUDT matches the unlearning quality of naive retraining while reducing unlearning time by over 99%. This makes it a practical, scalable way to enforce data removal in real-world decentralized learning.
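To picture the setting the lay summary describes, the toy sketch below simulates decentralized training over a dynamic topology: each node takes a local gradient step on a simple quadratic objective and then gossip-averages with its current neighbours under a mixing matrix that is redrawn every round. It illustrates only the communication pattern, not PDUDT or its unlearning step; the objectives, node count, and mixing-matrix construction are illustrative assumptions.

```python
# Toy illustration of decentralized learning over a dynamic topology.
# Each node holds a local model, takes a local gradient step, then averages
# with neighbours under a mixing matrix that changes every round.
# This is NOT the PDUDT algorithm; all names and objectives are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, rounds, lr = 8, 5, 200, 0.05

# Node i's local objective: f_i(x) = 0.5 * ||x - b_i||^2, so the minimiser of
# the average objective is the mean of the b_i.
targets = rng.normal(size=(n_nodes, dim))
models = np.zeros((n_nodes, dim))

def random_mixing_matrix(n, rng):
    """Symmetric, doubly stochastic mixing matrix over a random ring graph."""
    W = np.eye(n)
    perm = rng.permutation(n)
    for k in range(n):
        i, j = perm[k], perm[(k + 1) % n]
        # Pairwise averaging weight along each ring edge.
        W[i, i] -= 1 / 3; W[j, j] -= 1 / 3
        W[i, j] += 1 / 3; W[j, i] += 1 / 3
    return W

for t in range(rounds):
    grads = models - targets                  # gradient of each local quadratic
    models = models - lr * grads              # local gradient step
    W = random_mixing_matrix(n_nodes, rng)    # topology redrawn every round
    models = W @ models                       # gossip averaging with neighbours

print("consensus error:", np.max(np.linalg.norm(models - models.mean(0), axis=1)))
print("distance to optimum:", np.linalg.norm(models.mean(0) - targets.mean(0)))
```

Because each round's mixing matrix is doubly stochastic, the network-wide average is preserved while per-node differences shrink, so every client's past updates end up folded into every node's model; this is exactly what makes retraining-free unlearning over such time-varying graphs non-trivial.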
Primary Area: Theory->Learning Theory
Keywords: Decentralized unlearning
Submission Number: 15498