Model-Free Decentralized Training for Deep Learning Based Resource Allocation in Communication Networks

Published: 2023 · Last Modified: 05 Nov 2025 · EUSIPCO 2023 · CC BY-SA 4.0
Abstract: Decentralized deep learning (DL) based resource allocation (RA) in communication networks offers scalability and higher communication bandwidth efficiency compared to centralized RA. Although the RA itself is decentralized in such approaches, the policies are mostly trained in a centralized manner. In this paper, we investigate a decentralized, model-free training approach based on zeroth-order optimization methods. Each user trains its individual policy, possibly with a unique structure, to maximize a global utility, e.g., the sum rate (SR) of the users. More importantly, during training the users need to share only scalar quantities with their neighbors, avoiding large communication overhead. The training is also robust against a certain level of asynchrony between the users. The proposed approach removes the need for a computationally complex central server and enables (re)training in dynamic environments in a model-free manner using the computational power at the edge. Numerical experiments show competitive performance compared to centralized and federated training approaches.
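To illustrate the kind of update the abstract describes, the following is a minimal sketch of a two-point zeroth-order (SPSA-style) gradient estimate, in which only scalar utility evaluations, not gradients or model parameters, are needed. All names (`utility`, `zo_step`) and the quadratic toy utility are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def utility(theta):
    # Stand-in for a global utility such as the sum rate; here a simple
    # concave quadratic with its maximum at theta = [1, -2].
    target = np.array([1.0, -2.0])
    return -np.sum((theta - target) ** 2)

def zo_step(theta, mu=1e-3, lr=0.1):
    # Draw a random perturbation direction and probe the utility twice.
    # Only the two scalar utility values are needed, so in a decentralized
    # setting a user could obtain them from neighbors' scalar feedback.
    u = rng.standard_normal(theta.shape)
    g = (utility(theta + mu * u) - utility(theta - mu * u)) / (2 * mu) * u
    return theta + lr * g  # gradient *ascent* on the utility

theta = np.zeros(2)
for _ in range(2000):
    theta = zo_step(theta)
# theta now lies near the utility's maximizer [1, -2]
```

The estimator's expectation equals the true gradient (exactly so for a quadratic utility), so the iterates climb toward the maximizer despite the policy being treated as a black box.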