Predicting Future Actions of Reinforcement Learning Agents

Published: 07 Aug 2024, Last Modified: 07 Aug 2024. RLSW 2024 Poster. License: CC BY 4.0
Keywords: Reinforcement learning, safe reinforcement learning, human-agent interaction, agent predictability
TL;DR: We evaluate two approaches to predict the future actions of agents trained by different types of reinforcement learning algorithms.
Abstract: As reinforcement learning (RL) agents become increasingly deployed in real-world scenarios, predicting their future actions and events during deployment is important for facilitating better human-agent interaction and preventing catastrophic outcomes. This paper experimentally evaluates and compares the effectiveness of future action and event prediction for three types of RL agents: explicitly planning, implicitly planning, and non-planning. We employ two approaches: the inner state approach, which predicts based on the inner computations of the agents (e.g., plans or neuron activations), and the simulation-based approach, which unrolls the agent within a learned world model. Our results show that the plans of explicitly planning agents are significantly more informative for prediction than the neuron activations of the other types. Furthermore, using internal plans proves more robust to model quality than simulation-based approaches when predicting actions, while the results for event prediction are more mixed. These findings highlight the benefits of leveraging inner states and simulations to predict future agent actions and events, thereby improving interaction and safety in real-world deployments.
Submission Number: 4
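
The page itself contains no code, but as a rough illustration of the two approaches named in the abstract, the sketch below shows what each might look like in miniature: a linear softmax probe trained on recorded inner states (plans or neuron activations) to predict the action taken k steps ahead, and a rollout of the agent's policy inside a learned world model. Every name here (train_probe, predict_by_simulation, the policy and world_model callables, the horizon k) is a hypothetical stand-in under assumed interfaces, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# ---------------------------------------------------------------
# Inner state approach: train a probe that maps a snapshot of the
# agent's inner computations (a plan, or a vector of neuron
# activations) to the action the agent will take k steps from now.
# A linear softmax probe stands in for whatever predictor is used.
# ---------------------------------------------------------------

def train_probe(inner_states, future_actions, n_actions,
                lr=0.1, epochs=500):
    """Fit a softmax probe: inner state vector -> future action."""
    n, d = inner_states.shape
    W = np.zeros((d, n_actions))
    targets = np.eye(n_actions)[future_actions]  # one-hot labels
    for _ in range(epochs):
        logits = inner_states @ W
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        W -= lr * inner_states.T @ (probs - targets) / n  # gradient step
    return W

def predict_from_inner_state(W, inner_state):
    """Predicted future action from one inner-state snapshot."""
    return int(np.argmax(inner_state @ W))

# ---------------------------------------------------------------
# Simulation-based approach: unroll the agent's policy inside a
# learned world model for k steps, then read off the action the
# simulated agent would take at that point.
# ---------------------------------------------------------------

def predict_by_simulation(policy, world_model, obs, k):
    for _ in range(k):
        action = policy(obs)
        obs = world_model(obs, action)  # model's guess at the next obs
    return policy(obs)

# Toy usage with synthetic stand-ins for the agent and the model.
if __name__ == "__main__":
    X = rng.normal(size=(256, 8))       # recorded inner states
    y = (X[:, 0] > 0).astype(int)       # future-action labels
    W = train_probe(X, y, n_actions=2)
    print(predict_from_inner_state(W, X[0]))

    policy = lambda obs: int(obs.sum() > 0)
    world_model = lambda obs, a: obs + (1 if a else -1) * 0.1
    print(predict_by_simulation(policy, world_model, X[0], k=5))
```

The abstract's comparison maps onto this sketch directly: the probe's accuracy depends on how informative the inner state is (plans versus raw activations), while the simulation route compounds world-model error over the k unrolled steps, which is consistent with the reported sensitivity to model quality.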