Enhancing Rolling Horizon Evolution with Policy and Value Networks

Published: 2019, Last Modified: 08 Oct 2025 | CoG 2019 | CC BY-SA 4.0
Abstract: The Rolling Horizon Evolutionary Algorithm (RHEA) is an online planning method for real-time game playing; its performance is closely tied to the planning horizon and the allowed search budget. In this paper, we propose to learn a prior for RHEA offline by training a value network and a policy network. The value network reduces the required planning horizon by providing an estimate of future rewards, and the policy network initializes the population, narrowing the search scope. The proposed algorithm, named prior-based RHEA (p-RHEA), trains the policy and value networks by performing planning and learning iteratively. In the planning stage, a horizon-limited search is performed to improve the policies and collect training samples with the help of the learned networks. In the learning stage, the policy network and value network are trained on the collected samples to learn better prior knowledge. Experimental results on OpenAI MuJoCo tasks show that p-RHEA significantly outperforms RHEA.
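To make the planning stage concrete, below is a minimal sketch of what one p-RHEA planning call could look like, based only on the abstract's description: the policy network seeds the population of action sequences, and the value network bootstraps the return at the truncated horizon. All names (`plan`, `step_model`, `policy_net`, `value_net`) and the specific evolutionary operators are illustrative assumptions, not the authors' actual implementation.

```python
# Illustrative sketch of a p-RHEA planning call (not the paper's code).
import numpy as np

def plan(state, step_model, policy_net, value_net,
         horizon=10, pop_size=20, generations=5,
         mutation_std=0.1, gamma=0.99, rng=None):
    """Evolve action sequences of length `horizon`; score each by model
    rollouts plus a value-network bootstrap at the truncated horizon."""
    rng = rng or np.random.default_rng()

    # Policy network initializes the population (narrows the search scope):
    # roll the policy forward once, then perturb it to form candidates.
    s, seed_seq = state, []
    for _ in range(horizon):
        a = policy_net(s)
        seed_seq.append(a)
        s, _ = step_model(s, a)
    seed_seq = np.array(seed_seq)
    pop = seed_seq[None] + mutation_std * rng.standard_normal(
        (pop_size,) + seed_seq.shape)

    def fitness(seq):
        # The value-network bootstrap at the final state is what lets
        # p-RHEA plan with a shorter horizon than plain RHEA.
        s, ret, disc = state, 0.0, 1.0
        for a in seq:
            s, r = step_model(s, a)
            ret += disc * r
            disc *= gamma
        return ret + disc * value_net(s)

    # Simple (mu + lambda)-style evolution: keep the top half, mutate it.
    for _ in range(generations):
        scores = np.array([fitness(seq) for seq in pop])
        elite = pop[np.argsort(scores)[-(pop_size // 2):]]
        children = elite + mutation_std * rng.standard_normal(elite.shape)
        pop = np.concatenate([elite, children])

    scores = np.array([fitness(seq) for seq in pop])
    best = pop[int(np.argmax(scores))]
    return best[0], best  # execute the first action of the best sequence

if __name__ == "__main__":
    # Toy 1-D task with hand-written stand-ins for the learned components.
    step_model = lambda s, a: (s + 0.1 * a, -abs(s))   # dynamics + reward
    policy_net = lambda s: np.clip(-s, -1.0, 1.0)      # crude prior policy
    value_net = lambda s: -10.0 * abs(s)               # crude value estimate
    action, seq = plan(np.array(0.5), step_model, policy_net, value_net)
    print("first action:", action)
```

Under this reading, the best sequence found by each planning call would also serve as a training sample for the learning stage, where the policy and value networks are updated before the next round of planning.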