Keywords: reinforcement learning, unsupervised reinforcement learning, open-ended learning, foundation model, large language model
TL;DR: We propose a framework that leverages foundation models as teachers, guiding a reinforcement learning agent to acquire semantically meaningful behavior without human intervention.
Abstract: We propose a framework that leverages foundation models as teachers, guiding a reinforcement learning agent to acquire semantically meaningful behavior without human intervention.
In our framework, the agent receives task instructions grounded in a training environment from large language models.
Then, a vision-language model guides the agent in learning the tasks by providing reward feedback.
We demonstrate that our method learns semantically meaningful skills in the challenging open-ended MineDojo environment, where prior unsupervised skill discovery methods struggle.
Additionally, we discuss the observed challenges of using off-the-shelf foundation models as teachers and our efforts to address them.
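The teacher loop described in the abstract (an LLM proposes a grounded task instruction; a VLM scores the agent's behavior against it as reward) can be sketched minimally as follows. All names and interfaces here (`llm_propose_task`, `vlm_reward`, `train_step`) are hypothetical stand-ins, not the paper's implementation; the VLM scorer is replaced by a toy word-overlap proxy for illustration.

```python
# Hypothetical stand-ins for the foundation-model teachers; a real system
# would query an LLM and a VLM. Names and logic are illustrative only.
def llm_propose_task(env_description):
    """LLM teacher: propose a task instruction grounded in the environment."""
    return f"collect wood in {env_description}"

def vlm_reward(observation_caption, instruction):
    """VLM teacher stand-in: score how well an observation matches the task.

    Toy proxy: fraction of instruction words appearing in the caption.
    """
    words = instruction.split()
    hits = sum(w in observation_caption for w in words)
    return hits / len(words)

def train_step(env_description, observation_captions):
    """One outer-loop step: LLM proposes a task, VLM rewards each rollout."""
    instruction = llm_propose_task(env_description)
    rewards = [vlm_reward(obs, instruction) for obs in observation_captions]
    return instruction, rewards

instruction, rewards = train_step(
    "a forest biome",
    ["agent sees wood and trees in a forest biome", "agent sees open water"],
)
```

In this sketch, the on-task observation receives a higher reward than the off-task one, which is the feedback signal an RL agent would then maximize.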
Submission Number: 55