## Learning Representations that Enable Generalization in Assistive Tasks

16 Jun 2022, 10:45 (modified: 03 Dec 2022, 00:18) · CoRL 2022 Poster · Readers: Everyone
Student First Author: yes
Keywords: assistive robots, representation learning, OOD generalization
Abstract: Recent work in sim2real has successfully enabled robots to act in physical environments by training in simulation with a diverse "population" of environments (i.e., domain randomization). In this work, we focus on enabling generalization in *assistive tasks*: tasks in which the robot acts to assist a user (e.g., helping someone with motor impairments bathe or scratch an itch). Such tasks are particularly interesting relative to prior sim2real successes because the environment now contains a *human who is also acting*. This complicates the problem: the diversity of human users (rather than of mere physical environment parameters) is harder to capture in a population, which increases the likelihood of encountering out-of-distribution (OOD) human policies at test time. We advocate that generalization to such OOD policies benefits from (1) learning a good latent representation for human policies, one that test-time humans can accurately be mapped to, and (2) making that representation adaptable with test-time interaction data, instead of relying on it to perfectly capture the space of human policies from the simulated population alone. We study how best to learn such a representation by evaluating on purposefully constructed OOD test policies. We find that sim2real methods that encode environment (or population) parameters, which work well in tasks robots do in isolation, do not work well in *assistance*. In assistance, it seems crucial to train the representation directly on the *history of interaction*, because that is what the robot will have access to at test time. Further, training these representations to then *predict human actions* not only gives them better structure, but also enables them to be fine-tuned at test time, as the robot observes the partner act.
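The two ingredients the abstract advocates can be illustrated with a deliberately tiny toy sketch (not the paper's actual architecture): a scalar latent `z` stands in for the learned human-policy representation, it is inferred only from the interaction history rather than from privileged population parameters, and, because it is trained to predict the human's actions, it can be fine-tuned online when the test-time human turns out to be out of distribution. All names and the linear human model below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: each human policy is parameterized by a scalar
# preference w, and the human's action in state s is a_h = w * s + noise.
# The robot never observes w directly -- only the interaction history.
def human_action(w, s):
    return w * s + 0.05 * rng.normal()

def predict_action(z, s):
    # Decoder: predict the human's action from the latent z and the state.
    return z * s

# (1) The latent is inferred from interaction, not from privileged environment
# parameters. Here z starts at the mean of an assumed training population
# (w in [-1, 1]), so an OOD human is initially mis-mapped.
z = 0.0
w_test = 2.5  # out-of-distribution human: outside the training range

# (2) Because the representation is trained to *predict human actions*, each
# observed (state, action) pair yields a gradient on the squared prediction
# error, letting the robot fine-tune z toward the true OOD policy at test time.
for _ in range(20):
    s = rng.normal()
    a = human_action(w_test, s)
    grad = 2.0 * (predict_action(z, s) - a) * s  # d/dz of (z*s - a)^2
    z -= 0.1 * grad

print(f"adapted latent z = {z:.2f} (true preference w = {w_test})")
```

An encoder that instead regressed directly onto population parameters would have no comparable update rule at test time, since the true `w` is never observed; the action-prediction loss is what makes online adaptation possible here.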
Supplementary Material: zip