Beyond Federated Prototype Learning: Learnable Semantic Anchors with Hyperspherical Contrast for Domain-Skewed Data
Abstract: Federated prototype learning is in the spotlight because global prototypes are effective in enhancing the learning of local representation spaces, improving the generalization of the global model. However, when encountering domain-skewed data, conventional federated prototype learning is susceptible to two dilemmas: 1) Local prototypes obtained by averaging intra-class embeddings carry domain-specific markers, so the margins among the aggregated global prototypes can be attenuated, which is detrimental to inter-class separation. 2) Local domain-skewed embeddings may not exhibit a uniform distribution in Euclidean space, which is not conducive to prototype-induced intra-class compactness. To address these two drawbacks, we go beyond the conventional paradigm of federated prototype learning and propose learnable semantic anchors with hyperspherical contrast (FedLSA) for domain-skewed data. Specifically, we eschew the practice of yielding prototypes by averaging intra-class embeddings and directly learn a set of semantic anchors with the aid of a global semantic-aware classifier. Meanwhile, the margins between anchors are enlarged by pulling them apart, ensuring decent inter-class separation. To guarantee that local domain-skewed representations are uniformly distributed, local data is projected into a hyperspherical space, and intra-class compactness is achieved by optimizing a contrastive loss derived from the von Mises-Fisher distribution. Finally, extensive experimental results on three multi-domain datasets show the superiority of the proposed FedLSA over existing typical and state-of-the-art methods.
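To make the two components concrete, below is a minimal PyTorch sketch of losses matching the abstract's description: directly learnable per-class anchors, an anchor-separation term, and a von Mises-Fisher (vMF) contrastive term over hyperspherical embeddings. The names (`SemanticAnchors`, `separation_loss`, `vmf_contrastive_loss`), the concentration hyperparameter `kappa`, and the closest-pair form of the separation penalty are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticAnchors(nn.Module):
    """Per-class anchors learned directly (jointly with a global
    semantic-aware classifier) instead of being averaged from
    intra-class embeddings."""

    def __init__(self, num_classes: int, dim: int):
        super().__init__()
        self.anchors = nn.Parameter(torch.randn(num_classes, dim))

    def forward(self) -> torch.Tensor:
        # Constrain anchors to the unit hypersphere.
        return F.normalize(self.anchors, dim=-1)


def separation_loss(anchors: torch.Tensor) -> torch.Tensor:
    """Enlarge inter-anchor margins: penalize, for each class, the cosine
    similarity to its closest other anchor (assumed formulation)."""
    sim = anchors @ anchors.t()  # (C, C) pairwise cosine similarities
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(mask, float("-inf"))  # ignore self-similarity
    return sim.max(dim=1).values.mean()  # minimizing this pulls anchors apart


def vmf_contrastive_loss(
    z: torch.Tensor, labels: torch.Tensor, anchors: torch.Tensor, kappa: float = 10.0
) -> torch.Tensor:
    """Contrastive loss derived from a vMF likelihood: with density
    p(z | mu_c) proportional to exp(kappa * mu_c^T z) on the hypersphere,
    the class posterior under a shared kappa is a softmax over
    kappa-scaled cosine logits, so the negative log-likelihood reduces
    to cross-entropy and pulls embeddings toward their class anchor."""
    z = F.normalize(z, dim=-1)  # project local embeddings onto the hypersphere
    logits = kappa * (z @ anchors.t())  # vMF log-likelihoods up to a constant
    return F.cross_entropy(logits, labels)
```

In a local round one would combine the two terms on top of an encoder, e.g. `loss = vmf_contrastive_loss(encoder(x), y, anchor_module()) + lam * separation_loss(anchor_module())` with `encoder` and the weight `lam` as further assumptions; the first term enforces intra-class compactness around the anchors, the second maintains inter-class separation among them.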