Is feedback all you need? Leveraging natural language feedback in goal-conditioned RL

Published: 03 Nov 2023, Last Modified: 27 Nov 2023, GCRL Workshop
Confirmation: I have read and confirm that at least one author will be attending the workshop in person if the submission is accepted
Keywords: offline reinforcement learning, goal-conditioned reinforcement learning, learning from feedback, decision transformer
Abstract: Despite numerous successes, the field of reinforcement learning (RL) remains far from matching the impressive generalisation power of human behaviour learning. One way to help bridge this gap may be to provide RL agents with richer, more human-like feedback expressed in natural language. First, we extend BabyAI to automatically generate language feedback from the environment dynamics and goal condition success. Then, we modify the Decision Transformer architecture to take advantage of this additional signal. We find that training with language feedback either in place of or in addition to the return-to-go or goal descriptions improves agents' generalisation performance, and that agents can benefit from feedback even when it is only available during training and not at inference.
Supplementary Material: zip
Submission Number: 25
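The abstract describes modifying the Decision Transformer to condition on a language-feedback signal in place of, or alongside, the return-to-go or goal description. The paper's actual architecture is not reproduced on this page; the following is a minimal PyTorch sketch of one way such conditioning could look. The class name FeedbackDecisionTransformer, the assumption of per-timestep sentence embeddings from a frozen language encoder, and all layer sizes are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class FeedbackDecisionTransformer(nn.Module):
    """Sketch of a Decision Transformer variant conditioned on per-timestep
    language-feedback embeddings (names and sizes are illustrative only)."""

    def __init__(self, state_dim, act_dim, feedback_dim,
                 d_model=128, n_layers=3, n_heads=4, max_len=64):
        super().__init__()
        self.embed_state = nn.Linear(state_dim, d_model)
        self.embed_action = nn.Linear(act_dim, d_model)
        # Feedback is assumed to arrive as a fixed-size sentence embedding
        # from some frozen language encoder; we simply project it.
        self.embed_feedback = nn.Linear(feedback_dim, d_model)
        self.embed_timestep = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)
        self.predict_action = nn.Linear(d_model, act_dim)

    def forward(self, states, actions, feedback, timesteps):
        # states: (B, T, state_dim); actions: (B, T, act_dim)
        # feedback: (B, T, feedback_dim); timesteps: (B, T) long tensor
        B, T = states.shape[:2]
        t_emb = self.embed_timestep(timesteps)
        f = self.embed_feedback(feedback) + t_emb
        s = self.embed_state(states) + t_emb
        a = self.embed_action(actions) + t_emb
        # Interleave (feedback, state, action) tokens per timestep, mirroring
        # how Decision Transformer interleaves (return-to-go, state, action).
        tokens = torch.stack([f, s, a], dim=2).reshape(B, 3 * T, -1)
        mask = torch.triu(torch.full((3 * T, 3 * T), float("-inf"),
                                     device=states.device), diagonal=1)
        h = self.transformer(tokens, mask=mask).reshape(B, T, 3, -1)
        # Predict each action from the state token of the same timestep.
        return self.predict_action(h[:, :, 1])

Conditioning on both feedback and return-to-go, as the abstract also considers, would simply add a fourth interleaved token per timestep in this sketch.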