Keywords: Federated learning, Active learning, Information bottleneck, Mutual information, Explainable AI
Abstract: Federated learning (FL) enables collaborative model training on decentralized data while preserving privacy. Recently, explainable FL (XFL) has gained traction, aiming to generate semantically rich latent representations that make predictions more interpretable. However, obtaining such representations typically requires large amounts of labeled data, which limits the applicability of XFL. Active learning, which reduces labeling cost by querying the most informative samples, is a promising remedy. Existing federated active learning (FAL) methods mainly exploit model uncertainty for data selection and largely overlook the interactions and training dynamics between local and global models. This shortcoming can lead to suboptimal performance and reduced explainability in XFL settings. In this paper, we propose a novel explainable FAL framework - \underline{Fed}erated \underline{M}inimax \underline{A}ctive \underline{D}ata \underline{S}election (Fed-MADS). The method leverages the information bottleneck principle to analyze model training dynamics: a variational distribution is introduced and implemented via the global model, making the approach well suited to the XFL setting. A minimax objective then identifies unlabeled data points exhibiting significant divergence between local and global models in both latent representations and predicted labels. Extensive experiments on four benchmark datasets demonstrate that our method significantly outperforms state-of-the-art FAL approaches, achieving superior performance with fewer labeled data points.
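The core selection idea in the abstract can be illustrated with a minimal sketch: score each unlabeled point by how much a client's local model and the global model disagree, in both latent representation and predicted label, then query the highest-scoring points. This is not the paper's actual Fed-MADS objective; all names, the weighting scheme, and the choice of distance/divergence here are illustrative assumptions.

```python
# Hypothetical sketch of divergence-based federated active data selection.
# Not the authors' implementation: the scoring rule, weighting, and names
# (divergence_scores, select_queries, alpha) are illustrative assumptions.
import numpy as np

def softmax(logits):
    # Numerically stable row-wise softmax over class logits.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def kl_divergence(p, q, eps=1e-12):
    # Row-wise KL(p || q) between predicted label distributions.
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=1)

def divergence_scores(local_feats, global_feats,
                      local_logits, global_logits, alpha=0.5):
    # Combine a representation gap (Euclidean distance between latent
    # features) with a prediction gap (KL between predicted labels).
    rep_gap = np.linalg.norm(local_feats - global_feats, axis=1)
    pred_gap = kl_divergence(softmax(local_logits), softmax(global_logits))
    return alpha * rep_gap + (1.0 - alpha) * pred_gap

def select_queries(scores, budget):
    # Query the unlabeled points where local and global models
    # diverge the most.
    return np.argsort(-scores)[:budget]
```

Under this sketch, a client would compute features and logits for its unlabeled pool with both its local model and the received global model, call `divergence_scores`, and send the top-`budget` indices from `select_queries` to an oracle for labeling.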
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 12129