Abstract: In Reinforcement Learning (RL), agents have no incentive to exhibit predictable trajectories, and are often pushed (e.g. through policy entropy regularisation) to randomise their actions in favor of exploration. This lack of predictability often makes it challenging for other agents and humans to anticipate an agent's trajectories, possibly triggering unsafe scenarios (e.g. in human-robot interaction). We propose a novel method to induce predictable trajectories in RL agents, termed Predictability-Aware RL (PARL), which employs the agent's trajectory entropy rate to quantify predictability. Our method maximizes a linear combination of a standard discounted reward and the negative entropy rate, thus trading off optimality with predictability. We show how the entropy rate can be formally cast as an average reward, how entropy-rate value functions can be estimated from a learned model and incorporated into policy-gradient algorithms, and demonstrate how this approach produces predictable (near-optimal) policies in tasks inspired by human-robot use-cases.
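As a rough illustration of the trade-off described in the abstract, the combined objective could be written as below; this is a minimal sketch with assumed notation ($\beta$, $\mathcal{H}(\pi)$ and $P_\pi$ are not taken from the paper):

% Sketch of the PARL-style objective (assumed notation): maximize discounted
% task reward minus a weighted trajectory entropy rate, trading optimality
% for predictability; the entropy rate is an average (per-step) quantity.
\[
  J_{\beta}(\pi) \;=\;
  \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right]
  \;-\; \beta\,\mathcal{H}(\pi),
  \qquad
  \mathcal{H}(\pi) \;=\;
  \lim_{T\to\infty} \frac{1}{T}\,
  \mathbb{E}_{\pi}\!\left[-\sum_{t=0}^{T-1} \log P_{\pi}(s_{t+1}\mid s_t)\right],
\]
where $P_\pi$ denotes the state-transition kernel induced by policy $\pi$ and $\beta \ge 0$ weights predictability against task optimality. Writing $\mathcal{H}(\pi)$ as a per-step average of $-\log P_{\pi}(s_{t+1}\mid s_t)$ is what allows it to be treated as an average reward, as claimed in the abstract.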
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Mirco_Mutti1
Submission Number: 4011