Mixture of Gradient: A Unified Enhancing Approach for Deep-Learning-Based Wireless Network Optimization
Abstract: Deep learning plays an increasingly important role in the management and optimization of future wireless networks. Existing training paradigms, label-based supervised learning and label-free learning, have inherent limitations: the performance of supervised learning is bounded by the quality of the available labels, while label-free methods require extensive exploration. To address these limitations, this article proposes a novel mixture of gradients (MoG) method, which integrates gradients from different sources during training to improve the convergence of neural networks (NNs). Notably, MoG is a modular, plug-and-play solution that requires no structural modifications to existing NNs. Its implementation requires only a minor change to the loss function: the label-based supervised loss is combined with a label-free loss through a weighted summation, where the label-free loss can be either an unsupervised loss or a reinforcement learning loss. This flexibility allows seamless integration into nearly all NN-based methods, making MoG applicable to a wide range of wireless optimization problems at minimal implementation cost. Extensive simulations across multiple classic wireless scenarios demonstrate that MoG significantly enhances the performance of NN-based decision-making, yielding higher transmission rates.
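To make the weighted-summation idea concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: `mog_loss`, `neg_sum_rate`, and `alpha` are hypothetical names introduced for illustration, and the label-free term is shown as an unsupervised system metric (a reinforcement learning loss could be substituted in the same position).

```python
import torch
import torch.nn.functional as F

# Hypothetical label-free objective: negative sum rate over a toy
# interference-free channel, so minimizing it maximizes the rate.
def neg_sum_rate(power, gain=1.0, noise=1e-2):
    return -torch.log2(1.0 + gain * power / noise).sum(dim=-1)

def mog_loss(pred, label, alpha=0.5):
    """MoG-style objective: a weighted sum of a label-based supervised
    loss and a label-free loss, so backpropagation mixes gradients
    from both sources."""
    supervised = F.mse_loss(pred, label)    # gradient from labels
    label_free = neg_sum_rate(pred).mean()  # gradient from the system metric
    return alpha * supervised + (1.0 - alpha) * label_free

# Example: pred would come from an NN, label from a conventional optimizer.
pred = torch.rand(8, 4, requires_grad=True)
label = torch.rand(8, 4)
loss = mog_loss(pred, label, alpha=0.7)
loss.backward()  # the NN parameters receive the mixed gradient
```

Under this reading, `alpha` controls how much the training signal relies on labels versus exploration of the label-free objective, which matches the plug-and-play framing above: only the loss function changes, not the NN architecture.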
DOI: 10.1109/jiot.2025.3559063