Playing SNES in the Retro Learning Environment

28 Mar 2024 (modified: 22 Oct 2023) · Submitted to ICLR 2017 · Readers: Everyone
TL;DR: Investigating Deep Reinforcement Learning algorithms in a new framework based on the SNES gaming console
Abstract: Mastering a video game requires skill, tactics and strategy. While these attributes may be acquired naturally by human players, teaching them to a computer program is a far more challenging task. In recent years, extensive research has been carried out in the field of reinforcement learning, and numerous algorithms have been introduced that aim to learn how to perform human tasks such as playing video games. As a result, the Arcade Learning Environment (ALE) has become a commonly used benchmark environment allowing algorithms to train on various Atari 2600 games, and in many of these games state-of-the-art algorithms outperform humans. In this paper we introduce a new learning environment, the Retro Learning Environment (RLE), that can run games on the Super Nintendo Entertainment System (SNES), Sega Genesis and several other gaming consoles. The environment is expandable, allowing more video games and consoles to be easily added while maintaining a simple unified interface. Moreover, RLE is compatible with Python and Torch. SNES games pose a significant challenge to current algorithms due to their higher level of complexity and versatility. A more extensive paper describing our work is available on arXiv.
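To give a sense of the "simple unified interface" the abstract describes, below is a minimal Python usage sketch. The module, class, and method names (RLEInterface, loadROM, getMinimalActionSet, act, game_over) are assumptions modeled on the ALE-style convention the environment is said to follow, not the verified RLE API; the ROM file and core name are illustrative only.

```python
# Hypothetical sketch of driving an RLE game with a random policy.
# All names below are assumptions mirroring the ALE-style interface;
# consult the RLE repository for the actual API.
import random

from rle_python_interface import RLEInterface  # assumed module/class name

rle = RLEInterface()
rle.loadROM('some_game.sfc', 'snes')        # illustrative ROM path and console core
actions = rle.getMinimalActionSet()         # legal actions for the loaded game

total_reward = 0
while not rle.game_over():
    action = random.choice(actions)         # placeholder agent: uniform random policy
    total_reward += rle.act(action)         # assumed to return the step reward, as in ALE

print('Episode reward:', total_reward)
```

An ALE-style loop like this is what lets existing agents written against the Atari benchmark be pointed at SNES or Genesis titles with minimal changes.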
Keywords: Deep learning, Reinforcement Learning, Games
Conflicts: ibm.com, technion.ac.il
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/arxiv:1611.02205/code)