FedRL: A Reinforcement Learning Federated Recommender System for Efficient Communication Using Reinforcement Selector and Hypernet Generator
Abstract: Recommender systems aim to predict users’ latent interests by analyzing their preferences and behaviors. However, privacy concerns about user data collection lead to challenges such as incomplete initial information and data sparsity. Federated learning has emerged to address these privacy issues in recommender systems, but federated recommender systems face heterogeneity across edge devices in data features and sample sizes. Moreover, differences in computational and storage capabilities introduce communication overhead and processing delays during parameter aggregation at the third-party server. This article introduces FedRL, a reinforcement learning federated recommender system for efficient communication using a Reinforcement Selector and a Hypernet Generator, to address these issues. The Reinforcement Selector dynamically selects participating edge devices, helping them make full use of their local data resources. Meanwhile, the Hypernet Generator reduces communication bandwidth consumption during federated parameter transmission, enabling rapid deployment and updates of new model architectures or hyperparameters. Furthermore, the framework incorporates item attributes as content embeddings in the edge devices’ recommender models, enriching them with global information. Experiments on real-world datasets demonstrate that the proposed solution balances recommendation quality and communication efficiency. The code for this work is publicly available on GitHub: https://github.com/diyicheng/FedRL.
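The communication saving behind a hypernetwork-based generator can be illustrated with a minimal sketch: instead of transmitting a client model's full parameter vector, the server keeps a hypernetwork that maps a low-dimensional per-client embedding to the full weights, so only the small embedding needs to cross the network. All names, sizes, and the single-linear-layer hypernetwork below are illustrative assumptions, not the paper's actual FedRL implementation.

```python
import numpy as np

# Illustrative sketch only; dimensions and the linear hypernetwork are
# assumptions for exposition, not FedRL's actual architecture.
rng = np.random.default_rng(0)

EMBED_DIM = 16    # per-client embedding exchanged over the network (assumed size)
PARAM_DIM = 4096  # size of the client recommender model's flattened weights (assumed)

# Server-side hypernetwork: here, a single linear map from client
# embedding to the client model's flattened parameter vector.
W_hyper = rng.normal(scale=0.01, size=(PARAM_DIM, EMBED_DIM))

def generate_client_weights(client_embedding: np.ndarray) -> np.ndarray:
    """Generate a client's full parameter vector from its low-dim embedding."""
    return W_hyper @ client_embedding

client_embedding = rng.normal(size=EMBED_DIM)
weights = generate_client_weights(client_embedding)

# Only the 16-dim embedding is transmitted instead of the 4096-dim weights,
# a 256x reduction in per-round payload under these assumed sizes.
print(weights.shape, EMBED_DIM / PARAM_DIM)
```

Under this toy setup, swapping in a new hypernetwork on the server also changes every generated client model without retransmitting full weights, which mirrors the abstract's claim about rapid deployment of new architectures.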
External IDs: dblp:journals/tors/DiSMGLW26