Learning Reward for Physical Skills using Large Language Model

Published: 21 Oct 2023, Last Modified: 04 Nov 2023
LangRob @ CoRL 2023 Poster
Keywords: Reward learning, Physical skills, Large Language Models
TL;DR: We introduce a self-alignment method for learning reward functions for physical skills from LLMs.
Abstract: Learning reward functions for physical skills is challenging due to the vast spectrum of skills, the high dimensionality of state and action spaces, and nuanced sensory feedback. The complexity of these tasks makes acquiring expert demonstration data both costly and time-consuming. Large Language Models (LLMs) contain valuable task-related knowledge that can aid in learning these reward functions. However, directly applying LLMs to propose reward functions has limitations, such as numerical instability and the inability to incorporate environment feedback. We aim to extract task knowledge from LLMs, using environment feedback, to create efficient reward functions for physical skills. Our approach consists of two components. First, we use the LLM to propose the features and parameterization of the reward function. Next, we update the parameters of this proposed reward function through an iterative self-alignment process. In particular, this process minimizes the ranking inconsistency between the LLM and our learned reward function on new observations. We validated our method on three simulated physical skill learning tasks, demonstrating the effectiveness of our design choices.
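As a rough illustration of the self-alignment step described in the abstract, the sketch below shows one hypothetical instantiation: a linear reward over LLM-proposed features, updated with a Bradley-Terry style pairwise ranking loss so that the learned reward's rankings of new observations agree with the LLM's. All function names, the linear parameterization, and the training loop are assumptions for illustration, not the paper's actual implementation.

```python
import torch

# Hypothetical sketch (not the paper's code): a linear reward over
# LLM-proposed features, self-aligned via a pairwise ranking loss.

def pairwise_ranking_loss(theta, phi_a, phi_b, llm_prefers_a):
    """Bradley-Terry style loss penalizing reward rankings that
    disagree with the LLM's ranking of observation pairs (a, b)."""
    logits = (phi_a - phi_b) @ theta               # r(a) - r(b)
    return torch.nn.functional.binary_cross_entropy_with_logits(
        logits, llm_prefers_a.float())

def self_align(theta, collect_feature_pairs, query_llm_ranking,
               n_iters=10, lr=1e-2):
    """Iteratively update reward parameters theta to reduce ranking
    inconsistency with the LLM on newly collected observations."""
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(n_iters):
        phi_a, phi_b = collect_feature_pairs()     # features of new rollouts
        prefers_a = query_llm_ranking(phi_a, phi_b)  # LLM ranks each pair
        opt.zero_grad()
        pairwise_ranking_loss(theta, phi_a, phi_b, prefers_a).backward()
        opt.step()
    return theta

# Example usage with toy stand-ins for the environment and the LLM:
if __name__ == "__main__":
    d = 4                                          # number of proposed features
    theta = torch.zeros(d, requires_grad=True)     # reward parameters
    collect = lambda: (torch.randn(8, d), torch.randn(8, d))
    # Toy "LLM": prefers the observation with the larger first feature.
    rank = lambda a, b: a[:, 0] > b[:, 0]
    theta = self_align(theta, collect, rank)
```

The pairwise formulation matters here because the abstract notes that LLM-proposed reward values are numerically unstable; training on the LLM's relative rankings rather than its raw scores sidesteps that instability.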
Submission Number: 18