Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents
Abstract: The Arcade Learning Environment (ALE) is an evaluation platform that poses the
challenge of building AI agents with general competency across dozens of Atari 2600 games.
It supports a variety of problem settings and has been receiving increasing
attention from the scientific community, leading to high-profile success stories such as
the much-publicized Deep Q-Networks (DQN). In this article we take a big-picture look at
how the ALE is being used by the research community. We show how diverse the evaluation
methodologies in the ALE have become over time, and highlight some key concerns when
evaluating agents in the ALE. We use this discussion to present some methodological best
practices and provide new benchmark results using these best practices. To further
progress in the field, we introduce a new version of the ALE that supports multiple game
modes and provides a form of stochasticity we call sticky actions. We conclude this big-picture
look by revisiting the challenges posed when the ALE was introduced, summarizing the
state of the art in various problems, and highlighting problems that remain open.
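The sticky-actions idea mentioned in the abstract can be stated concretely: at each step, with some probability the emulator repeats the agent's previously executed action instead of the one just issued, injecting stochasticity without altering the underlying game. Below is a minimal illustrative sketch of that mechanism as a generic environment wrapper. The `StickyActionEnv` name, the `reset()`/`step()` interface, and the 0.25 probability used here are assumptions made for illustration, not the ALE's actual API.

```python
import random


class StickyActionEnv:
    """Illustrative wrapper adding sticky actions to an environment.

    With probability `stickiness`, the previously executed action is repeated
    instead of the action the agent just selected. This is a sketch of the
    idea only; it assumes a generic environment exposing
    reset() -> observation and step(action) -> (observation, reward, done).
    """

    def __init__(self, env, stickiness=0.25, seed=None):
        self.env = env
        self.stickiness = stickiness
        self.rng = random.Random(seed)
        self.last_action = None

    def reset(self):
        # Forget the previous action at the start of every episode.
        self.last_action = None
        return self.env.reset()

    def step(self, action):
        # With probability `stickiness`, ignore the newly chosen action
        # and repeat the previous one (when one exists).
        if self.last_action is not None and self.rng.random() < self.stickiness:
            action = self.last_action
        self.last_action = action
        return self.env.step(action)
```

Under these assumptions, wrapping an environment as `StickyActionEnv(env, stickiness=0.25)` makes the effect of each selected action stochastic from the agent's perspective, which is the form of stochasticity the abstract refers to.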