Abstract: Deep reinforcement learning has gained increasing popularity in beyond-visual-range engagements of unmanned combat aerial vehicles. Despite successes in maneuvering decision-making, existing methods either rely on sparse rewards to optimize fire control or adopt a predefined fire policy, severely limiting engagement performance. This paper presents a supervision-enhanced technique that addresses the challenges of the hybrid action space. A dual-head policy is developed, with one head dedicated to maneuvering decisions and the other to fire control. Alongside the reinforcement learning memory that stores state transitions, a supervised learning memory is introduced to store the states at which missiles are launched, paired with their hit outcomes. The maneuvering head is trained with the soft policy gradient, while the fire control head is optimized with the binary cross-entropy loss. This enables agents to make maneuvering decisions and predict missile hit probabilities simultaneously, without prior expert knowledge. Simulation results demonstrate the superiority of the proposed method.
DOI: 10.1109/LRA.2025.3641098
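The dual-head architecture described in the abstract can be sketched as follows. This is a minimal illustration in PyTorch, not the authors' implementation: the class name, state dimension, discrete maneuver set, and network sizes are all assumptions. It shows the two heads sharing a backbone, with the fire-control head trained on recorded launch states and hit outcomes via binary cross-entropy, as the abstract describes.

```python
import torch
import torch.nn as nn

class DualHeadPolicy(nn.Module):
    """Hypothetical sketch: shared backbone with a maneuvering head
    and a fire-control head, per the abstract's description."""

    def __init__(self, state_dim=12, n_maneuvers=7, hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Maneuvering head: logits over a discrete maneuver set
        # (assumed here; trained with a policy gradient in the paper).
        self.maneuver_head = nn.Linear(hidden, n_maneuvers)
        # Fire-control head: one logit whose sigmoid is read as a
        # missile hit-probability estimate.
        self.fire_head = nn.Linear(hidden, 1)

    def forward(self, state):
        h = self.backbone(state)
        maneuver_logits = self.maneuver_head(h)
        hit_prob = torch.sigmoid(self.fire_head(h)).squeeze(-1)
        return maneuver_logits, hit_prob

policy = DualHeadPolicy()
# Supervised-learning memory contents (illustrative data): states at
# which missiles were launched, paired with binary hit outcomes.
launch_states = torch.randn(4, 12)
hit_labels = torch.tensor([1.0, 0.0, 1.0, 0.0])

logits, hit_prob = policy(launch_states)
# Fire-control head optimized with binary cross-entropy on hit outcomes.
fire_loss = nn.functional.binary_cross_entropy(hit_prob, hit_labels)
fire_loss.backward()
```

In this sketch the maneuvering head would be updated separately from reinforcement-learning transitions (the soft policy gradient mentioned in the abstract), while the fire-control head learns purely from the supervised launch/outcome pairs, so no hand-crafted fire policy or expert labels are needed.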