Reinforcement Learning with Lookahead Information

Published: 17 Jun 2024, Last Modified: 05 Jul 2024, FoRLaC Poster, CC BY 4.0
Abstract: We study reinforcement learning (RL) problems in which agents observe the reward or transition realizations at their current state _before deciding which action to take_. Such observations are available in many applications, including transactions, navigation and more. When the environment is known, previous work shows that this lookahead information can drastically increase the collected reward. However, outside of specific applications, existing approaches for interacting with unknown environments are not well-adapted to these observations. In this work, we close this gap and design provably-efficient learning algorithms able to incorporate lookahead information. To achieve this, we perform planning using the empirical distribution of the reward and transition observations, in contrast to vanilla approaches that only rely on estimated expectations. We prove that our algorithms achieve tight regret versus a baseline that also has access to lookahead information -- linearly increasing the amount of collected reward compared to agents that cannot handle lookahead information.
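To make the lookahead advantage concrete, here is a minimal toy sketch (not the paper's algorithm): in a single-state problem with two actions whose rewards are i.i.d. fair Bernoulli draws, an agent that observes the reward realizations before acting collects the maximum of the two draws, while an agent that only knows the (equal) expectations gains nothing from choosing between actions.

```python
import random

random.seed(0)

def sample_rewards():
    # Two actions, each paying 0 or 1 with probability 1/2,
    # so both have the same expected reward of 0.5.
    return [random.randint(0, 1), random.randint(0, 1)]

T = 10_000
no_lookahead = 0.0   # commits to a fixed action (optimal w.r.t. expectations)
lookahead = 0.0      # observes the realizations first, then takes the max

for _ in range(T):
    r = sample_rewards()
    no_lookahead += r[0]
    lookahead += max(r)

# E[max of two fair Bernoullis] = 3/4 > 1/2: lookahead collects
# roughly 50% more reward per step on average.
print(no_lookahead / T, lookahead / T)
```

This illustrates why planning must use the empirical distribution of realizations rather than only estimated expectations: the gap between 3/4 and 1/2 accumulates linearly over the horizon, matching the abstract's claim of a linear increase in collected reward.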
Format: Long format (up to 8 pages + refs, appendix)
Publication Status: No
Submission Number: 9