Investigating Human Priors for Playing Video Games

Anonymous

Nov 03, 2017 (modified: Nov 03, 2017) ICLR 2018 Conference Blind Submission
  • Abstract: Deep reinforcement learning algorithms have recently achieved impressive performance in playing video games. However, they require orders of magnitude more time than average human players to reach the same level of performance. What makes humans so good at figuring out and solving these seemingly complex games? Here, we study one aspect critical to human decision making and problem solving: the use of strong priors (either learned or innate) that help humans generalize and solve tasks faster, as opposed to learning from scratch. Through a systematic investigation of human performance in video games, we develop a taxonomy of the different forms of prior knowledge that enable humans to solve video games quickly. While common wisdom might suggest that prior knowledge about game semantics (e.g., "ladders are to be climbed", "jumping on spikes is dangerous", or "the agent must fetch the key before reaching the door") is crucial to human performance, we find instead that more general, high-level priors, such as "the world is composed of objects", "object-like entities serve as subgoals for exploration", and "things that look the same act the same", are more critical. We hope that our findings will inspire the reinforcement learning community to make use of more structured representations for building more efficient and possibly more human-like agents.
  • TL;DR: We investigate the various kinds of prior knowledge that help human learning and find that general priors about objects play the most critical role in guiding human gameplay.
  • Keywords: Prior knowledge, Reinforcement learning, Cognitive Science