Optimistic Temporal Difference Learning for 2048

IEEE Trans. Games, 2022 (modified: 24 Apr 2023)
Abstract: Temporal difference (TD) learning and its variants, such as multistage TD learning and temporal coherence (TC) learning, have been successfully applied to 2048. These methods rely on the stochasticity of the 2048 environment for exploration. In this article, we propose to employ optimistic initialization (OI) to encourage exploration in 2048, and empirically show that the learning quality is significantly improved. This approach optimistically initializes the feature weights to very large values. Since weights tend to be reduced once states are visited, agents tend to explore states that are unvisited or visited only a few times. Our experiments show that both TD and TC learning with OI significantly improve performance. As a result, the network size required to achieve the same performance is significantly reduced. With additional techniques such as expectimax search, multistage learning, and tile downgrading, our design achieves state-of-the-art performance, namely an average score of 625,377 and a rate of 72% reaching 32,768-tiles. In addition, for sufficiently large tests, 65,536-tiles are reached at a rate of 0.02%.
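The OI mechanism described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a tabular n-tuple-style value function in which the state value is a sum of feature weights, and every name and constant (V_INIT, ALPHA, the feature keys) is hypothetical.

```python
# Sketch of TD(0) updates over feature weights with optimistic
# initialization (OI): unseen weights default to a large value V_INIT,
# so states with unvisited features look attractive until they are
# actually visited and their weights are driven down toward true values.

V_INIT = 320000.0   # optimistic initial weight (illustrative constant)
ALPHA = 0.1         # learning rate (illustrative constant)

# One flat table keyed by (feature_id, pattern_index) for brevity;
# a real n-tuple network would keep one lookup table per tuple.
weights = {}

def value(features):
    """State value = sum of feature weights; unseen entries are optimistic."""
    return sum(weights.get(f, V_INIT) for f in features)

def td_update(features, reward, next_features, terminal=False):
    """One TD(0) step: move V(s) toward r + V(s')."""
    target = reward + (0.0 if terminal else value(next_features))
    error = target - value(features)
    step = ALPHA * error / len(features)   # split the update across features
    for f in features:
        weights[f] = weights.get(f, V_INIT) + step
```

Because a terminal or low-value visit pulls the touched weights below V_INIT, previously visited states immediately look less attractive than unvisited ones, which is the exploration pressure OI provides without relying solely on environment stochasticity.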