FedBT: Effective and Robust Federated Unlearning via Bad Teacher Distillation for Secure Internet of Things

Published: 2025 · Last Modified: 06 Jan 2026 · IEEE Internet of Things Journal, 2025 · License: CC BY-SA 4.0
Abstract: Smart Internet of Things (IoT) devices generate vast amounts of distributed data, and their limited computational and storage capacities complicate data protection. Federated learning (FL) enables collaborative model training across clients, improving performance while preserving data privacy. The right to be forgotten (RTBF) creates a demand for precise data removal, and federated unlearning (FU) offers a solution for accurate data deletion in FL systems. However, existing FU methods often struggle to simultaneously ensure effective data forgetting and preserve model generalization. To address these challenges, we propose an effective and robust FU framework based on “Bad Teacher” knowledge distillation (KD), termed FedBT. First, Bad Teacher KD guides the trained model to eliminate the contributions of specific clients from the global model. Next, the generalization components of the global model are extracted in the frequency domain. Finally, the gradients generated by KD are constrained to the orthogonal subspace of these components, ensuring that the gradients preserve the trained model’s generalization ability. FedBT eliminates the need to store historical records of parameter updates, and its orthogonal-subspace constraint safeguards the generalization ability of the trained model during unlearning. Extensive experiments on three datasets with various metrics show that our method reduces accuracy by only 0.53% on MNIST, 0.26% on Fashion-MNIST, and 4.67% on CIFAR10, surpassing the best existing approach. Furthermore, FedBT achieves unlearning performance that most closely approximates retraining from scratch. By enabling the “forgetting” of specific client data, FedBT strengthens IoT security, protecting user privacy and ensuring secure device interactions.
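To make the three steps in the abstract concrete, the following PyTorch sketch illustrates one plausible reading of them: distillation toward a randomly initialized “bad teacher” on the forget set, an FFT low-pass as the frequency-domain extraction of generalization components, and a per-parameter projection of gradients onto the orthogonal subspace of those components. All names and choices here (bad_teacher_kd_loss, keep_ratio, the low-pass form, the per-tensor projection) are illustrative assumptions, not the authors’ implementation.

```python
# Minimal sketch of the three FedBT steps as described in the abstract.
# All function/parameter names are illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def bad_teacher_kd_loss(student_logits, bad_teacher_logits, T=2.0):
    """Step 1: distill the model toward an untrained 'bad teacher' on the
    forget set, pushing the target client's contribution out of the model."""
    p_teacher = F.softmax(bad_teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def generalization_component(weight, keep_ratio=0.1):
    """Step 2 (assumed form): treat the low-frequency part of a flattened
    weight tensor, obtained via an FFT low-pass, as its 'generalization
    component'."""
    flat = weight.detach().flatten()
    spec = torch.fft.rfft(flat)
    k = max(1, int(keep_ratio * spec.numel()))
    mask = torch.zeros_like(spec)
    mask[:k] = 1.0                      # keep only the lowest frequencies
    low = torch.fft.irfft(spec * mask, n=flat.numel())
    return low.view_as(weight)

def project_orthogonal(grad, component):
    """Step 3: subtract the gradient's projection onto the generalization
    component, constraining the update to its orthogonal subspace."""
    g, c = grad.flatten(), component.flatten()
    denom = torch.dot(c, c).clamp_min(1e-12)
    return (g - torch.dot(g, c) / denom * c).view_as(grad)

def unlearn_step(model, bad_teacher, x_forget, optimizer):
    """One unlearning step: KD on the forget batch, then projected update."""
    optimizer.zero_grad()
    loss = bad_teacher_kd_loss(model(x_forget), bad_teacher(x_forget).detach())
    loss.backward()
    for p in model.parameters():
        if p.grad is not None:
            p.grad = project_orthogonal(p.grad, generalization_component(p))
    optimizer.step()
    return loss.item()
```

Under this reading, no historical parameter updates need to be stored: the bad teacher is simply a freshly initialized copy of the model architecture, and the projection keeps the unlearning gradients from disturbing the directions assumed to carry the model’s generalization ability.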