Evaluation beyond Task Performance: Analyzing Concepts in AlphaZero in Hex

Published: 31 Oct 2022, Last Modified: 16 Oct 2022
Venue: NeurIPS 2022 (Accept)
Keywords: alphazero, deep reinforcement learning, explainability, interpretability, evaluation, concepts, hex, mcts
Abstract: AlphaZero, an approach to reinforcement learning that couples neural networks and Monte Carlo tree search (MCTS), has produced state-of-the-art strategies for traditional board games like chess, Go, shogi, and Hex. While researchers and game commentators have suggested that AlphaZero uses concepts that humans consider important, it is unclear how these concepts are captured in the network. We investigate AlphaZero's internal representations in the game of Hex using two evaluation techniques from natural language processing (NLP): model probing and behavioral tests. In doing so, we introduce several new evaluation tools to the RL community, and illustrate how evaluations other than task performance can be used to provide a more complete picture of a model's strengths and weaknesses. Our analyses in the game of Hex reveal interesting patterns and generate some testable hypotheses about how such models learn in general. For example, we find that the MCTS discovers concepts before the neural network learns to encode them. We also find that concepts related to short-term end-game planning are best encoded in the final layers of the model, whereas concepts related to long-term planning are encoded in the middle layers of the model.
TL;DR: We introduce new concept-level evaluation tools to the RL community, and illustrate how evaluations other than task performance can be used to provide a more complete picture of a model’s strengths and weaknesses using AlphaZero and the game of Hex.
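To make the model-probing technique mentioned in the abstract concrete, here is a minimal sketch of a linear probe. This is an illustration under stated assumptions, not the paper's actual setup: the array shapes, the random stand-in data, and the choice of logistic regression are all hypothetical, standing in for real AlphaZero layer activations and human-annotated Hex concept labels.

```python
# Minimal linear-probe sketch (hypothetical data; the paper's exact
# probing setup may differ).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical inputs: activations from one AlphaZero layer for a set of
# Hex positions, plus a binary label indicating whether a human concept
# (e.g. a "bridge" connection) is present in each position.
n_positions, d_hidden = 2000, 256
activations = rng.normal(size=(n_positions, d_hidden))  # stand-in for real activations
concept_labels = rng.integers(0, 2, size=n_positions)   # stand-in for annotations

X_train, X_test, y_train, y_test = train_test_split(
    activations, concept_labels, test_size=0.2, random_state=0
)

# The probe: if a simple linear classifier can recover the concept from a
# layer's activations, that layer is said to encode the concept. Comparing
# probe accuracy across layers localizes where concepts are represented.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.3f}")
```

Running the same probe against each layer's activations is what supports layer-wise claims such as the abstract's finding that end-game concepts concentrate in final layers while long-term planning concepts sit in middle layers.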
Supplementary Material: pdf