Abstract: General Game Playing (GGP), a research field that aims to develop agents capable of mastering different games in a unified way, is regarded as a necessary step toward artificial general intelligence. Following the success of deep reinforcement learning (DRL) in games such as Go, chess, and shogi, DRL has recently been introduced to GGP and is considered a promising technique for achieving the goal of GGP. However, existing work relies on fully connected neural networks and is therefore unable to efficiently exploit the topological structure of game states. In this paper, we propose an approach to applying general-purpose convolutional neural networks to GGP and implement a DRL-based GGP player. Experiments show that the resulting player not only outperforms the previous algorithm and the UCT benchmark across a variety of games but also requires less training time.
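To illustrate the abstract's central point, that convolution can exploit the spatial topology of a game state while a fully connected layer cannot, here is a minimal sketch of applying shared convolutional filters to a board-state tensor. The board encoding, sizes, and filter counts are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Assumed 2-channel encoding of a 5x5 board (own pieces, opponent pieces);
# this is an illustrative encoding, not the paper's state representation.
state = np.zeros((2, 5, 5))
state[0, 2, 2] = 1.0  # own piece at the centre
state[1, 0, 4] = 1.0  # opponent piece in a corner

def conv2d(x, kernels):
    """Valid 2D convolution: x is (in_ch, H, W), kernels is (out_ch, in_ch, kh, kw)."""
    out_ch, in_ch, kh, kw = kernels.shape
    _, h, w = x.shape
    out = np.zeros((out_ch, h - kh + 1, w - kw + 1))
    for o in range(out_ch):
        for i in range(h - kh + 1):
            for j in range(w - kw + 1):
                # The same filter weights are reused at every board location,
                # which is what lets a CNN exploit the board's topology.
                out[o, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * kernels[o])
    return out

rng = np.random.default_rng(0)
kernels = rng.standard_normal((4, 2, 3, 3))  # 4 filters shared across the board
features = conv2d(state, kernels)
print(features.shape)  # → (4, 3, 3)
```

A fully connected layer over the same flattened 50-dimensional input would learn a separate weight for every input position, so a pattern learned at one board location would not transfer to another; the shared 3x3 filters above apply the same learned pattern everywhere.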