Keywords: Reinforcement Learning, Interpretable RL, Neurosymbolic RL, Causal RL
TL;DR: We propose HackAtari, a framework that allows for testing the ability of RL agents to adapt to color, game-mechanic, and goal changes.
Abstract: Artificial agents' adaptability to novelty and alignment with intended behavior are crucial for their effective deployment. Reinforcement learning (RL) leverages novelty as a means of exploration, yet agents often struggle to handle novel situations during deployment, hindering generalization.
To address this, we propose HackAtari, a framework that introduces controlled novelty to the most common RL benchmark, the Arcade Learning Environment.
HackAtari enables us to create novel game scenarios (including simplifications for curriculum learning), to swap the colors of game elements, and to introduce different reward signals for the agent.
We demonstrate that current agents trained on the original environments exhibit robustness failures, and we evaluate HackAtari's efficacy in enhancing RL agents' robustness and aligning their behavior through experiments with DQN, C51, and PPO.
Overall, HackAtari can be used to improve the robustness of current and future RL algorithms, enabling neurosymbolic RL, curriculum RL, causal RL, as well as LLM-driven RL.
Our work underscores the significance of developing interpretable RL agents.
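To illustrate the kind of interface such a framework suggests, a minimal usage sketch follows. The HackAtari class, the modifs and reward_function arguments, the "lazy_enemy" modification name, and the Gymnasium-style reset/step API are all illustrative assumptions, not the library's confirmed interface.

# Hypothetical usage sketch (argument names and import path are assumptions).
from hackatari import HackAtari  # assumed import path

# Wrap a base ALE game with controlled novelty: a game-mechanic change and
# an optional alternative reward signal (both argument names are illustrative).
env = HackAtari(
    "Pong",
    modifs=["lazy_enemy"],   # hypothetical game-mechanic modification
    reward_function=None,    # hypothetical alternative reward hook
)

obs, info = env.reset(seed=0)
terminated = truncated = False
while not (terminated or truncated):
    action = env.action_space.sample()  # stand-in for a trained agent's policy
    obs, reward, terminated, truncated, info = env.step(action)
env.close()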
Submission Number: 15