FedDOR: Orthogonal Initialization and Dual Regularization for Prototype Integrity in Heterogeneous Federated Learning
Keywords: Federated Learning, Heterogeneous, Prototype, Orthogonal Regularization
Abstract: Heterogeneous Federated Learning (HFL) has garnered significant attention for its potential to leverage decentralized data while preserving privacy. A fundamental challenge in HFL is drastically reducing the high communication cost of transmitting model parameters. Prototype-based HFL methods have recently emerged that exchange only class-wise representations (prototypes) among heterogeneous clients to enable collaborative training. However, existing methods fail to maintain the semantic integrity of prototypes during aggregation, compromising global model performance. To overcome this semantic degradation, we propose a novel HFL approach, FedDOR, which leverages Dual Orthogonal Regularization (DOR) to learn consistent and discriminative prototypes. On the client side, our key insight is to orthogonally initialize prototype embeddings, imposing a maximally separated and uniformly distributed prior geometry on the feature space and thus providing a consistent, optimal learning target. On the server side, DOR enforces geometric constraints that explicitly minimize intra-class variance while enlarging inter-class separation. Extensive experiments demonstrate that FedDOR outperforms state-of-the-art methods in accuracy by significant margins, while fully preserving the communication efficiency and privacy advantages of prototype-based federated learning.
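The abstract describes two mechanisms: orthogonally initialized prototypes as a fixed, maximally separated learning target, and a dual regularizer that tightens intra-class variance while pushing classes apart. Below is a minimal sketch of these two ideas, assuming a PyTorch setup; the names `init_orthogonal_prototypes` and `dor_loss` and all hyperparameters are hypothetical illustrations, not the paper's released implementation.

```python
# Hypothetical sketch of the abstract's two ideas; not the authors' code.
import torch
import torch.nn.functional as F


def init_orthogonal_prototypes(num_classes: int, dim: int) -> torch.Tensor:
    """Client side: fix class prototypes to mutually orthogonal unit vectors,
    giving every client the same maximally separated learning target."""
    assert dim >= num_classes, "strict orthogonality needs dim >= num_classes"
    rand = torch.randn(dim, num_classes)
    q, _ = torch.linalg.qr(rand)   # columns of q are orthonormal
    return q.T                     # (num_classes, dim), unit-norm orthogonal rows


def dor_loss(features: torch.Tensor, labels: torch.Tensor,
             prototypes: torch.Tensor) -> torch.Tensor:
    """Dual regularizer in the spirit of the abstract: pull each feature toward
    its class prototype (small intra-class variance) and push distinct
    prototypes toward orthogonality (large inter-class separation)."""
    feats = F.normalize(features, dim=1)
    protos = F.normalize(prototypes, dim=1)
    # Intra-class term: cosine distance to the feature's own class prototype.
    intra = 1.0 - (feats * protos[labels]).sum(dim=1).mean()
    # Inter-class term: penalize off-diagonal entries of the prototype Gram
    # matrix, i.e. any positive similarity between different classes.
    gram = protos @ protos.T
    off_diag = gram - torch.diag(torch.diag(gram))
    inter = (off_diag.clamp(min=0.0) ** 2).mean()
    return intra + inter


if __name__ == "__main__":
    protos = init_orthogonal_prototypes(num_classes=10, dim=64)
    feats = torch.randn(32, 64)                 # stand-in client features
    labels = torch.randint(0, 10, (32,))
    print(dor_loss(feats, labels, protos))
```

With prototypes initialized orthogonally, the inter-class term starts at zero and only activates if aggregation drifts classes toward one another, which matches the abstract's framing of preserving prototype integrity.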
Supplementary Material: zip
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 6929