CS-pFedTM: Communication-Efficient and Similarity-based Personalization with Tsetlin Machines

19 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Personalized Federated Learning, Communication Efficient, Tsetlin Machine
Abstract: Federated Learning (FL) has become a promising framework for preserving data privacy during collaborative training across decentralized data sources. However, data heterogeneity remains a significant challenge, impacting both the performance and efficiency of FL systems. To address this, we introduce CS-pFedTM (Communication-Efficient and Similarity-based Personalized Federated Learning with Tsetlin Machine), a method that jointly enforces communication-aware resource allocation and heterogeneity-driven personalization. CS-pFedTM keeps communication within budget through clause allocation and tailors personalization using the similarity of clients' parameters as a proxy for data heterogeneity. Experiments across multiple datasets show that CS-pFedTM consistently outperforms state-of-the-art personalized FL approaches, achieving at least $3.6\times$ lower upload cost, $5.58\times$ lower download cost, and $1.17\times$ higher runtime efficiency, while maintaining superior accuracy.
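The two mechanisms named in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; the function names (`cosine_similarity`, `allocate_clauses`), the bit costs, and the toy vectors are all illustrative assumptions, showing only the general idea of budget-feasible clause allocation and parameter similarity as a heterogeneity proxy.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened client parameter vectors.

    Higher similarity suggests the clients hold similarly distributed data,
    so it can serve as a cheap proxy for data heterogeneity.
    """
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def allocate_clauses(budget_bits: int, bits_per_clause: int, max_clauses: int) -> int:
    """Largest Tsetlin Machine clause count whose upload fits the budget."""
    return min(max_clauses, budget_bits // bits_per_clause)

# Toy example (hypothetical values): three clients, each summarized by a
# small parameter vector.
clients = {
    "A": np.array([1.0, 0.0, 1.0, 1.0]),
    "B": np.array([1.0, 0.1, 0.9, 1.0]),  # close to A -> likely similar data
    "C": np.array([0.0, 1.0, 0.0, 0.2]),  # far from A -> heterogeneous data
}

sim_ab = cosine_similarity(clients["A"], clients["B"])
sim_ac = cosine_similarity(clients["A"], clients["C"])
print(f"sim(A,B)={sim_ab:.2f}, sim(A,C)={sim_ac:.2f}")

# Assume each clause costs 64 bits to upload and the round budget is 1024 bits.
print("clauses sent:", allocate_clauses(budget_bits=1024, bits_per_clause=64, max_clauses=100))
```

In a similarity-based scheme, a server could use such pairwise scores to weight aggregation toward similar clients, while the allocation step caps each client's per-round traffic at the stated budget.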
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 18781