Language-Conditioned Reinforcement Learning to Solve Misunderstandings with Action Corrections

Published: 21 Oct 2022, Last Modified: 17 Nov 2024, LaReL 2022
Keywords: reinforcement learning, instruction-following, action correction, misunderstanding, ambiguity, negation, language-conditioned
TL;DR: This paper formalizes and experimentally validates the use of incremental action correction for robotic instruction-following based on language-conditioned reinforcement learning.
Abstract: Human-to-human conversation is not just talking and listening. It is an incremental process in which participants continually establish a common understanding to rule out misunderstandings. Current language understanding methods for intelligent robots do not account for this. Numerous approaches address non-understandings, but they ignore the incremental process of resolving misunderstandings. In this article, we present a first formalization and experimental validation of incremental action repair for robotic instruction-following based on reinforcement learning. To evaluate our approach, we propose a collection of benchmark environments for action correction in language-conditioned reinforcement learning, utilizing a synthetic instructor to generate language goals and their corresponding corrections. We show that a reinforcement learning agent can successfully learn to understand incremental corrections of misunderstood instructions.
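
To make the setting concrete, the sketch below illustrates one way a synthetic instructor might issue a language goal and an incremental correction inside an environment loop. It is not the paper's benchmark; the environment layout, object names, vocabulary, and reward scheme are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's benchmark): a tiny 1-D language-conditioned
# environment whose synthetic instructor issues an instruction and, when the agent reaches
# the wrong object, appends an incremental negated correction to the instruction.
import random

ACTIONS = {"left": -1, "right": +1}

class CorrectionEnv:
    """1-D world with two objects; the goal is named in language, corrections arrive online."""

    def __init__(self, size=9):
        self.size = size

    def reset(self):
        self.agent = self.size // 2
        positions = random.sample(range(self.size), 2)
        self.objects = {"red block": positions[0], "blue block": positions[1]}
        self.goal = random.choice(list(self.objects))
        # Synthetic instructor: the initial instruction names the target object.
        self.instruction = f"go to the {self.goal}"
        return self._obs()

    def step(self, action):
        self.agent = max(0, min(self.size - 1, self.agent + ACTIONS[action]))
        distractor = next(o for o in self.objects if o != self.goal)
        # Incremental correction: if the agent reaches the wrong object, the instructor
        # extends the instruction with a negated repair and the episode continues.
        if self.agent == self.objects[distractor] and distractor not in self.instruction:
            self.instruction += f", no, not the {distractor}"
        done = self.agent == self.objects[self.goal]
        reward = 1.0 if done else -0.01  # sparse success reward, small step penalty
        return self._obs(), reward, done

    def _obs(self):
        return {"agent": self.agent, "objects": dict(self.objects),
                "instruction": self.instruction}

# Usage: a random-policy rollout showing the interaction loop.
env = CorrectionEnv()
obs = env.reset()
for _ in range(30):
    obs, reward, done = env.step(random.choice(list(ACTIONS)))
    if done:
        break
print(obs["instruction"], "-> reward", reward)
```

A language-conditioned agent trained in such a loop would consume the growing instruction string as part of its observation, so that the appended correction can steer it away from the misunderstood goal within the same episode.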
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/language-conditioned-reinforcement-learning/code) (via CatalyzeX)