To defend against privacy leakage of user data, differential privacy is widely used in federated learning, but it is not free. The added noise randomly disrupts the semantic integrity of the model, and this disturbance accumulates as communication rounds increase. In this paper, we introduce a novel federated learning framework with rigorous privacy guarantees, named FedCEO, designed to strike a trade-off between model utility and user privacy by letting clients "Collaborate with Each Other". Specifically, we perform efficient tensor low-rank proximal optimization on stacked local model parameters at the server, demonstrating its capability to flexibly truncate high-frequency components in spectral space. This capability implies that our FedCEO can effectively recover disrupted semantic information by smoothing the global semantic space across different privacy settings and throughout continued training. Moreover, we improve the SOTA utility-privacy trade-off bound by an order of $\sqrt{d}$, where $d$ is the input dimension. We illustrate our theoretical results with experiments on representative datasets and observe significant performance improvements and strict privacy guarantees under different privacy settings. The code is available at https://github.com/6lyc/FedCEO_Collaborate-with-Each-Other.
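To make the server-side step concrete, the sketch below illustrates one common form of low-rank proximal optimization: stacking the clients' flattened parameters into a matrix and applying singular-value soft-thresholding (the proximal operator of the nuclear norm), which truncates low-energy spectral components. The function name, the matrix (rather than tensor) formulation, and the threshold `tau` are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def lowrank_smooth(client_params, tau):
    """Illustrative server-side low-rank proximal step (not FedCEO itself).

    Stacks each client's flattened parameters as a row, soft-thresholds
    the singular values of the stacked matrix (proximal operator of the
    nuclear norm), and returns the smoothed per-client parameters.
    """
    X = np.stack([p.ravel() for p in client_params])   # shape: (num_clients, d)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)                # truncate small spectral components
    X_smooth = (U * s_shrunk) @ Vt
    return [row.reshape(p.shape) for row, p in zip(X_smooth, client_params)]

# Toy usage: 4 clients sharing a common signal, each perturbed by DP-style noise.
rng = np.random.default_rng(0)
base = rng.normal(size=(2, 3))
clients = [base + 0.5 * rng.normal(size=base.shape) for _ in range(4)]
smoothed = lowrank_smooth(clients, tau=1.0)
```

Soft-thresholding shrinks the spectrum toward a low-rank common structure, which is one way to realize the "smoothing the global semantic space" effect described above.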
Protecting user privacy in collaborative AI training (federated learning) requires adding carefully designed noise. However, this noise can unevenly disrupt different parts of each device's learned knowledge over time. For example, it might obscure facial features in one device's animal-recognition model while blurring limb details in another's.
We introduce FedCEO, a new approach where devices "Collaborate with Each Other" under server coordination. FedCEO intelligently combines the complementary knowledge from all devices. When one device's understanding of a concept is disrupted by privacy protection, others help fill those gaps.
This CEO-like coordination gradually enhances semantic smoothness across devices as training progresses. The server blends the partial understandings into a coherent whole, allowing the global model to recover disrupted patterns while maintaining privacy. The result is significantly improved AI performance across diverse privacy settings and extended training periods.