AlphaZeroES: Direct Score Maximization Can Outperform Planning Loss Minimization in Single-Agent Settings
Keywords: reinforcement learning, planning, Monte Carlo Tree Search (MCTS)
Abstract: Planning at execution time has been shown to dramatically improve performance for AI agents.
A well-known family of approaches to planning at execution time in single-agent settings and two-player zero-sum games is AlphaZero and its variants, which use Monte Carlo tree search together with a neural network that guides the search by predicting state values and action probabilities.
AlphaZero trains these networks by minimizing a planning loss that makes the value prediction match the episode return and the policy prediction at the root of the search tree match the output of the full tree expansion (a standard form of this loss is sketched after the abstract).
AlphaZero has been applied with great success to various single-agent environments that require careful planning.
In this paper, we explore an intriguing question:
in single-agent settings, can we outperform AlphaZero by directly maximizing the episode score instead of minimizing this planning loss, while leaving the MCTS algorithm and neural architecture unchanged?
To directly maximize the episode score, we use evolution strategies, a family of algorithms for zeroth-order blackbox optimization (the standard gradient estimator is sketched after the abstract).
We compare both approaches across multiple single-agent environments.
Our experiments suggest that directly maximizing the episode score tends to outperform minimizing the planning loss.
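For reference, the planning loss that AlphaZero minimizes is commonly written in the following form; the exact weighting and regularization used in this submission are not specified in the abstract, so this is a sketch of the standard formulation:
$$
\ell(\theta) \;=\; \bigl(z - v_\theta(s)\bigr)^2 \;-\; \boldsymbol{\pi}^{\top} \log \mathbf{p}_\theta(s) \;+\; c\,\lVert \theta \rVert^2,
$$
where $z$ is the episode return, $v_\theta(s)$ is the value prediction at the root state $s$, $\boldsymbol{\pi}$ is the policy target produced by the full tree expansion, $\mathbf{p}_\theta(s)$ is the network's policy prediction at the root, and $c$ is an L2 regularization coefficient.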
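Directly maximizing the episode score with evolution strategies typically relies on a zeroth-order gradient estimator of the following standard form (again a sketch; the specific ES variant and hyperparameters are not given in the abstract):
$$
\nabla_\theta \,\mathbb{E}_{\epsilon \sim \mathcal{N}(0, I)}\!\bigl[ F(\theta + \sigma \epsilon) \bigr] \;\approx\; \frac{1}{n\sigma} \sum_{i=1}^{n} F(\theta + \sigma \epsilon_i)\, \epsilon_i,
$$
where $F$ is the episode score obtained by running MCTS with perturbed network parameters $\theta + \sigma \epsilon_i$, $\sigma$ is the perturbation scale, and $n$ is the number of sampled perturbations.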
Primary Area: reinforcement learning
Submission Number: 22463