Solving the Rubik's Cube with Approximate Policy Iteration

Sep 27, 2018 (edited Apr 30, 2019), ICLR 2019 Conference Blind Submission
  • Abstract: Recently, Approximate Policy Iteration (API) algorithms have achieved super-human proficiency in two-player zero-sum games such as Go, Chess, and Shogi without human data. These API algorithms iterate between two policies: a slow policy (tree search) and a fast policy (a neural network). In these two-player games, a reward is always received at the end of the game. However, the Rubik's Cube has only a single solved state, and episodes are not guaranteed to terminate. This poses a major problem for these API algorithms since they rely on the reward received at the end of the game. We introduce Autodidactic Iteration: an API algorithm that overcomes the problem of sparse rewards by training on a distribution of states that allows the reward to propagate from the goal state to states farther away. Autodidactic Iteration is able to learn how to solve the Rubik's Cube and the 15-puzzle without relying on human data. Our algorithm is able to solve 100% of randomly scrambled cubes while achieving a median solve length of 30 moves, equal to or shorter than that of solvers that employ human domain knowledge.
  • Keywords: reinforcement learning, Rubik's Cube, approximate policy iteration, deep learning, deep reinforcement learning
  • TL;DR: We solve the Rubik's Cube with pure reinforcement learning
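The core idea in the abstract — generating training states by scrambling outward from the solved state so that value information propagates from the goal — can be sketched in a few lines. This is a minimal illustration on a toy 2x2 sliding puzzle with a tabular value function, not the paper's actual neural-network implementation; the puzzle, function names, and the tabular value table are assumptions introduced here for clarity. The +1/-1 reward convention (reward only at the solved state, cost otherwise) mirrors the sparse-reward setting the abstract describes.

```python
import random

# Toy 2x2 sliding puzzle (3 tiles + a blank), laid out as positions
#   0 1
#   2 3
# The same scramble-from-goal idea scales to the 15-puzzle and the
# Rubik's Cube discussed in the abstract.
GOAL = (1, 2, 3, 0)  # 0 denotes the blank
MOVES = {0: (1, 2), 1: (0, 3), 2: (0, 3), 3: (1, 2)}  # blank-position adjacency

def apply_move(state, dest):
    """Slide the tile at position `dest` into the blank."""
    s = list(state)
    blank = s.index(0)
    s[blank], s[dest] = s[dest], s[blank]
    return tuple(s)

def neighbors(state):
    """All states reachable in one move."""
    blank = state.index(0)
    return [apply_move(state, d) for d in MOVES[blank]]

def scramble_distribution(max_depth, n_per_depth, rng):
    """Autodidactic-style training set: states k random moves from the goal,
    for k = 1..max_depth, so reward can propagate outward from the goal."""
    states = []
    for k in range(1, max_depth + 1):
        for _ in range(n_per_depth):
            s = GOAL
            for _ in range(k):
                s = rng.choice(neighbors(s))
            states.append(s)
    return states

def adi_targets(states, value):
    """One sweep of value targets: y(s) = max_a [r(s, a) + v(s')],
    with reward +1 for reaching the goal and -1 for any other move."""
    targets = {}
    for s in states:
        if s == GOAL:
            targets[s] = 0.0  # goal is terminal
            continue
        targets[s] = max((1.0 if c == GOAL else -1.0) + value.get(c, 0.0)
                         for c in neighbors(s))
    return targets
```

In the paper's setting, `value` is a deep neural network trained by regression toward these targets rather than a lookup table, and the slow policy (tree search) uses the learned value function to guide its search at solve time.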