From Simulation to Practice: Generalizable Deep Reinforcement Learning for Cellular Schedulers

Published: 24 Sept 2025, Last Modified: 18 Nov 2025
Venue: AI4NextG @ NeurIPS 25 Poster
License: CC BY 4.0
Keywords: deep reinforcement learning, 5G, 6G, scheduler
Abstract: Efficient radio packet scheduling remains one of the most challenging tasks in cellular networks: while heuristic methods exist, practical deep learning–based schedulers that are 3GPP-compliant and capable of real-time operation in 5G and beyond are still missing. To address this, we first take a critical look at previous deep scheduler efforts. Second, we enhance State-of-the-Art (SoTA) deep Reinforcement Learning (RL) algorithms and adapt them to train our deep scheduler. In particular, we propose a novel combination of training techniques for Proximal Policy Optimization (PPO) and a new Distributional Soft Actor-Critic Discrete (DSACD) algorithm, which outperformed the other variants we tested. These improvements were achieved while keeping the actor network small, making the schedulers suitable for real-time computing environments. Furthermore, entropy learning in SACD was fine-tuned to accommodate resource allocation action spaces of varying sizes. Our proposed deep schedulers exhibited strong generalization across different bandwidths, numbers of Multi-User MIMO (MU-MIMO) layers, and traffic models. Ultimately, we show that our pre-trained deep schedulers outperform their heuristic rivals in realistic, standard-compliant 5G system-level simulations.
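The abstract does not state the exact entropy-tuning rule, but a common heuristic in discrete Soft Actor-Critic (Christodoulou, 2019) sets the entropy target to a fixed fraction of the maximum achievable entropy, log|A|, which scales naturally as the resource-allocation action space grows or shrinks. A minimal sketch under that assumption (the 0.98 fraction and the `alpha_objective` helper are illustrative, not from the paper):

```python
import math

def entropy_target(num_actions: int, fraction: float = 0.98) -> float:
    """Entropy target for SAC-Discrete, assumed to scale with action-space size.

    A uniform policy over num_actions has entropy log(num_actions); targeting a
    fraction of that maximum adapts automatically when the scheduler's
    resource-allocation action space changes (e.g. with bandwidth or MU-MIMO layers).
    """
    return fraction * math.log(num_actions)

def alpha_objective(alpha: float, policy_entropy: float, target: float) -> float:
    """Scalar objective J(alpha) = alpha * (H(pi) - H_target).

    Minimizing this in alpha raises the temperature when the policy's entropy
    falls below the target (encouraging exploration) and lowers it otherwise.
    """
    return alpha * (policy_entropy - target)
```

For example, a scheduler choosing among 16 allocation actions would target an entropy of about 0.98 * log(16) ≈ 2.72 nats, while a larger 256-action space would target ≈ 5.43 nats, so the same temperature-learning machinery covers both without retuning.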
Submission Number: 4