Abstract: We propose a multiple-komi modification of the AlphaGo Zero/Leela Zero paradigm. The winrate as a function of the komi is modeled with a two-parameter sigmoid function, so the winrate for all komi values is obtained at the price of predicting just one more variable. A second novel feature is that training is based on self-play games that occasionally branch, with changed komi, when the position is uneven. With this setting, reinforcement learning is shown to work on 7×7 Go, obtaining very strong playing agents. As a useful byproduct, the sigmoid parameters given by the network make it possible to estimate the score difference on the board and to evaluate to what extent the game is already decided. Finally, we introduce a family of agents which target winning moves with a higher score difference.
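For concreteness, here is a minimal sketch of one plausible two-parameter sigmoid winrate model of the kind the abstract describes. The parameter names alpha and beta, their interpretation, and the sign convention are illustrative assumptions, not necessarily the paper's exact notation.

```python
import numpy as np

def winrate(komi, alpha, beta):
    """Two-parameter sigmoid model of the winrate as a function of komi.

    Assumed interpretation (illustrative, not the paper's exact notation):
      alpha: shift parameter, roughly the expected score difference on the board
      beta:  steepness parameter; large beta means the game is nearly decided
    """
    return 1.0 / (1.0 + np.exp(-beta * (alpha - komi)))

# Example: with alpha = 2.5 and beta = 1.2 the model predicts an even game
# (winrate 0.5) exactly at komi = 2.5, so alpha can be read as a score estimate,
# while beta controls how sharply the winrate drops as the komi increases.
print(winrate(np.array([0.0, 2.5, 7.5]), alpha=2.5, beta=1.2))
```

Under this reading, the komi at which the predicted winrate crosses 0.5 estimates the score difference on the board, and the steepness of the sigmoid indicates how much the game is decided.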