World Model Agents with Change-Based Intrinsic Motivation

Published: 06 Nov 2024 · Last Modified: 06 Jan 2025 · NLDL 2025 Oral · CC BY 4.0
Keywords: reinforcement learning, intrinsic rewards, transfer learning, sparse reward environments
TL;DR: We examine the impact of CBET on DreamerV3, using IMPALA as a baseline, in the sparse reward environments of Minigrid and Crafter.
Abstract: Sparse reward environments pose a significant challenge for reinforcement learning due to the scarcity of feedback. Intrinsic motivation and transfer learning have emerged as promising strategies to address this issue. Change-Based Exploration Transfer (CBET), a technique that combines these two approaches for model-free algorithms, has shown potential in addressing sparse feedback, but its effectiveness with modern algorithms remains understudied. This paper provides an adaptation of CBET for world model algorithms like DreamerV3 and compares the performance of DreamerV3 and IMPALA agents, both with and without CBET, in the sparse reward environments of Crafter and Minigrid. Our tabula rasa results suggest that CBET can improve DreamerV3's returns in Crafter, but the algorithm attains a suboptimal policy in Minigrid, where CBET further reduces returns. In the same vein, our transfer learning experiments show that pre-training DreamerV3 with intrinsic rewards does not immediately lead to a policy that maximizes extrinsic rewards in Minigrid. Overall, our results suggest that CBET has a positive impact on DreamerV3 in more complex environments like Crafter but may be detrimental in environments like Minigrid. In the latter case, the behaviours promoted by CBET in DreamerV3 may not align with the task objectives of the environment, leading to reduced returns and suboptimal policies.
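For readers unfamiliar with CBET, the core idea is a count-based intrinsic bonus computed over the *changes* between consecutive observations rather than over states alone. The sketch below is a minimal, hypothetical illustration of such a change-based bonus using hashed observations and observation differences; the class, method names, and the exact combination of counts are illustrative assumptions and do not reproduce the precise formulation used in the paper or the linked repository.

```python
from collections import defaultdict
import numpy as np

class ChangeBasedBonus:
    """Count-based intrinsic bonus over observation changes (illustrative sketch)."""

    def __init__(self):
        self.state_counts = defaultdict(int)   # visits per hashed observation
        self.change_counts = defaultdict(int)  # visits per hashed observation change

    def _hash(self, x: np.ndarray) -> int:
        # Simple byte-level hash of a discrete observation; real implementations
        # may hash learned embeddings or symbolic grid states instead.
        return hash(x.tobytes())

    def __call__(self, obs: np.ndarray, next_obs: np.ndarray) -> float:
        change = (next_obs != obs).astype(np.uint8)  # which entries changed
        s_key, c_key = self._hash(next_obs), self._hash(change)
        self.state_counts[s_key] += 1
        self.change_counts[c_key] += 1
        # Rarely visited states *and* rarely observed changes yield a larger bonus.
        return 1.0 / (self.state_counts[s_key] + self.change_counts[c_key])

# Usage: add the bonus to the extrinsic reward during pre-training or exploration.
# bonus = ChangeBasedBonus()
# r_total = r_extrinsic + beta * bonus(obs, next_obs)
```

In a transfer setting, counts collected during intrinsically motivated pre-training can be carried over (or reset) when fine-tuning on the downstream task; the weight beta above is a hypothetical scaling coefficient.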
Git: https://github.com/Jazhyc/world-model-policy-transfer
Submission Number: 38