Abstract: Network traffic control is a cornerstone technology in sixth-generation (6G) wireless networks, encompassing critical components such as channel access, network routing, congestion control, and adaptive bitrate. The inherent heterogeneity of 6G networks, characterized by diverse services, nodes, and transmission links, leads to an exponential expansion of the state-action space, posing fundamental challenges for deep reinforcement learning (DRL) models, including sample data scarcity, limited expressive capability, and single-step error accumulation. To address these challenges, we introduce a diffusion-model-enhanced DRL framework specifically designed for traffic control in 6G networks. The proposed framework integrates a data synthesizer, a policy generator, and a trajectory planner, providing a robust solution for dynamic traffic control. Furthermore, we present a case study that illustrates the advantages of the proposed framework in exploring action spaces and meeting quality of service (QoS) requirements, validating its effectiveness and potential for practical deployment.
External IDs: dblp:journals/cm/ShiWPGTC25