CS-pFedTM: Communication-Efficient and Similarity-based Personalization with Tsetlin Machines

ICLR 2026 Conference Submission18781 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Personalized Federated Learning, Communication Efficient, Tsetlin Machine
Abstract: Federated Learning (FL) has become a promising framework for preserving data privacy in collaborative training across decentralized data sources. However, data heterogeneity remains a significant challenge, degrading both the performance and efficiency of FL systems. To address this, we introduce CS-pFedTM (Communication-Efficient and Similarity-based Personalization with Tsetlin Machines), which jointly enforces communication-aware resource allocation and heterogeneity-driven personalization. CS-pFedTM enforces communication-budget feasibility through clause allocation and tailors personalization using the similarity of clients' parameters as a proxy for data heterogeneity. To further improve scalability, CS-pFedTM integrates performance-based client selection and weight masking. Experiments demonstrate that CS-pFedTM consistently outperforms state-of-the-art personalized FL approaches, achieving at least $5.58\times$ communication savings and average improvements of $2.3\times$ in storage and $7.2\times$ in runtime efficiency, while maintaining competitive performance.
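The abstract describes using the similarity of clients' parameters as a proxy for data heterogeneity when tailoring personalization. The sketch below is a hypothetical illustration of that general idea, not the paper's actual algorithm: it computes pairwise cosine similarity between flattened client parameter vectors and normalizes each row into per-client aggregation weights, so that a client mixes more heavily with similar peers. All function names and the weighting scheme are assumptions for illustration.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two flattened parameter vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def personalization_weights(client_params):
    """Hypothetical sketch: derive per-client aggregation weights from
    pairwise parameter similarity (used here as a stand-in proxy for
    data heterogeneity). Each row is normalized to sum to 1."""
    n = len(client_params)
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            sim[i, j] = cosine_similarity(client_params[i], client_params[j])
    sim = np.clip(sim, 0.0, None)  # keep only non-negative similarities
    return sim / sim.sum(axis=1, keepdims=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    params = [rng.normal(size=32) for _ in range(4)]
    w = personalization_weights(params)
    print(w.shape, bool(np.allclose(w.sum(axis=1), 1.0)))
```

A client-side personalized update could then blend peer models with these weights; the paper's actual mechanism additionally couples this with clause allocation under a communication budget, which this sketch does not model.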
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 18781