Equivariant MuZero

Published: 10 Mar 2023, Last Modified: 28 Apr 2023 · ICLR 2023 Workshop DG · Oral
Keywords: reinforcement learning, model-based, symmetries, equivariance
TL;DR: We introduce Equivariant MuZero, a model-based agent that incorporates the symmetries in the environment through a specialised architecture.
Abstract: Deep reinforcement learning repeatedly succeeds in closed, well-defined domains such as games (Chess, Go, StarCraft). The next frontier is real-world scenarios, where setups are numerous and varied. For this, agents need to learn the underlying rules governing the environment, so as to robustly generalize to conditions that differ from those they were trained on. Model-based reinforcement learning algorithms, such as the highly successful MuZero, aim to accomplish this by learning a world model. However, leveraging a world model has not consistently shown greater generalization capabilities compared to model-free alternatives. In this work, we propose improving the data efficiency and generalization capabilities of MuZero by explicitly incorporating the symmetries of the environment in its world-model architecture. We prove that, so long as the neural networks used by MuZero are equivariant to a particular symmetry group acting on the environment, the entirety of MuZero's action-selection algorithm will also be equivariant to that group. We evaluate Equivariant MuZero on procedurally-generated MiniPacman and on Chaser from the ProcGen suite: training on a set of mazes, and then testing on unseen rotated versions, demonstrating the benefits of equivariance. Further, we verify that our performance improvements hold even when only some of the components of Equivariant MuZero obey strict equivariance, which highlights the robustness of our construction.
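The key property the abstract relies on is equivariance: if every network in the agent commutes with the symmetry group's action (e.g., 90-degree maze rotations), the whole action-selection pipeline does too. A minimal sketch of this idea, not the paper's actual architecture: the names `base_net` and `equivariant_net` are illustrative, and the construction shown (group averaging over the C4 rotation group) is one standard way to obtain equivariance, not necessarily the one used in Equivariant MuZero.

```python
import numpy as np

def base_net(x):
    # An arbitrary, deliberately non-equivariant map on 2-D grids:
    # shift all columns one step to the right.
    return np.roll(x, 1, axis=1)

def equivariant_net(x):
    # Symmetrize over C4: average g^{-1} . base_net(g . x) over all
    # four 90-degree rotations g. The result commutes with rotation.
    return np.mean(
        [np.rot90(base_net(np.rot90(x, k)), -k) for k in range(4)], axis=0
    )

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))

# Equivariance check: rotating the input rotates the output identically.
assert np.allclose(equivariant_net(np.rot90(x)), np.rot90(equivariant_net(x)))
# The raw base_net fails the same check.
assert not np.allclose(base_net(np.rot90(x)), np.rot90(base_net(x)))
```

The same check (apply the group action to the input, compare against the group action applied to the output) is what one would verify for each of MuZero's representation, dynamics, and prediction networks to obtain the paper's equivariance guarantee.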
Submission Number: 24