Adversarial Policies Beat Professional-Level Go AIs

08 Oct 2022 (modified: 22 Oct 2023) · Deep RL Workshop 2022
Keywords: adversarial policies, mcts, go
TL;DR: We find a successful end-to-end attack against KataGo, a Go AI that plays at the level of a top human professional.
Abstract: We attack the state-of-the-art Go-playing AI system, KataGo, by training an adversarial policy that plays against a frozen KataGo victim. Our attack achieves a >99% win-rate against KataGo without search, and a >80% win-rate when KataGo uses enough search to be near-superhuman. To the best of our knowledge, this is the first successful end-to-end attack against a Go AI playing at the level of a top human professional. Notably, the adversary does not win by learning to play Go better than KataGo---in fact, the adversary is easily beaten by human amateurs. Instead, the adversary wins by tricking KataGo into ending the game prematurely at a point that is favorable to the adversary. Our results demonstrate that even professional-level AI systems may harbor surprising failure modes.
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:2211.00241/code)
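
The attack described in the abstract reduces to reinforcement learning against a fixed opponent: the victim's weights are never updated, and the adversary is trained only on the game outcome. The sketch below illustrates that setup under stated assumptions; the environment interface, the victim's `select_move` method, and the plain REINFORCE update are hypothetical simplifications and are not KataGo's actual API, and the MCTS-based search used at evaluation time is omitted.

```python
# Minimal sketch of adversarial-policy training against a frozen victim.
# The Go environment and both policy interfaces below are hypothetical
# stand-ins for illustration; the paper's training pipeline is more involved.
import torch
import torch.nn as nn


class AdversaryPolicy(nn.Module):
    """Tiny policy network over flattened board features (illustrative only)."""

    def __init__(self, board_size: int = 19):
        super().__init__()
        n = board_size * board_size
        # +1 output for the pass move
        self.net = nn.Sequential(nn.Linear(n, 256), nn.ReLU(), nn.Linear(256, n + 1))

    def forward(self, obs: torch.Tensor) -> torch.distributions.Categorical:
        return torch.distributions.Categorical(logits=self.net(obs))


def play_episode(env, adversary, frozen_victim):
    """Sample adversary moves (keeping log-probs for learning); query the
    frozen victim for its moves without ever backpropagating through it."""
    obs, log_probs, done = env.reset(), [], False
    while not done:
        if env.to_play == "adversary":
            dist = adversary(obs)
            action = dist.sample()
            log_probs.append(dist.log_prob(action))
        else:
            with torch.no_grad():  # victim stays frozen
                action = frozen_victim.select_move(obs)
        obs, reward, done = env.step(action)
    # reward: +1 if the adversary wins the game, -1 otherwise
    return torch.stack(log_probs), reward


def train(env, adversary, frozen_victim, episodes: int = 10_000, lr: float = 1e-4):
    opt = torch.optim.Adam(adversary.parameters(), lr=lr)
    for _ in range(episodes):
        log_probs, reward = play_episode(env, adversary, frozen_victim)
        loss = -(reward * log_probs).sum()  # REINFORCE on the final game outcome
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Because only the adversary's parameters receive gradients, any exploit it discovers reflects a blind spot in the fixed victim rather than generally strong play, which is consistent with the abstract's observation that the adversary is easily beaten by human amateurs.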