Abstract: Data imbalance across clients in federated learning often
leads to different local feature space partitions, harming the
global model’s generalization ability. Existing methods either
employ knowledge distillation to guide consistent local
training or perform calibration procedures on local models before
aggregation. However, they overlook the ill-posed model
aggregation caused by imbalanced representation learning.
To address this issue, this paper presents a cross-silo feature
space alignment method (FedFSA), which learns a unified
feature space for clients to bridge inconsistency. Specifically,
FedFSA consists of two modules. First, the in-silo
prototypical space learning (ISPSL) module uses predefined
text embeddings to regularize representation learning,
improving the distinguishability of representations on imbalanced
data. Subsequently, it introduces a variance transfer
approach to construct the prototypical space, which helps
calibrate the feature distributions of minority classes and provides
the information required by the cross-silo feature space
alignment (CSFSA) module. Moreover, the CSFSA module
utilizes augmented features learned from the ISPSL module
to learn a generalized mapping and align these features
from different sources into a common space, which mitigates
the negative impact of imbalance. Experimental
results on three datasets verify that FedFSA
improves the consistency between diverse feature spaces on
imbalanced data, yielding superior performance over
existing methods.