Spatial Generalization of Visual Imitation Learning with Position-Invariant Regularization

Published: 24 Jun 2023, Last Modified: 01 Jul 2023. RSS 2023 Workshop Symmetry.
Abstract: How visual imitation learning models can generalize to novel, unseen visual observations is a highly challenging problem, and such generalization ability is crucial for their real-world application. Since this generalization problem has many facets, we focus on one case, spatial generalization, which refers to generalization to unseen setups of object (entity) locations in a task, such as a novel arrangement of object locations in robotic manipulation. In this case, previous works observe that visual imitation learning models overfit to absolute information (e.g., coordinates) rather than the relational information between objects, which is more important for decision making; as a result, the models perform poorly under novel object location setups. Nevertheless, it remains unclear how to solve this problem effectively. Our insight is to explicitly remove absolute information from the features learned by imitation learning models so that the models must rely on robust, relational information to make decisions. To this end, we propose POINT, a novel position-invariant regularizer for generalization, which penalizes the imitation learning model when its features contain absolute positional information about objects. Various experiments demonstrate the effectiveness of our method.
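The abstract does not specify POINT's exact form, but the idea of penalizing features that encode absolute positions can be sketched with a simple proxy: fit a linear probe from the learned features to the objects' absolute coordinates and use the probe's explained variance (R²) as a penalty term. All names below (`position_decodability`, the shapes of `features` and `positions`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def position_decodability(features, positions):
    """Hypothetical proxy for a position-invariance penalty (NOT the
    paper's exact regularizer): the fraction of variance in absolute
    object coordinates that a linear probe can recover from features.

    features  : (N, d) array of per-sample feature vectors
    positions : (N, k) array of absolute object coordinates
    Returns R^2 in [0, 1]; adding this to the training loss would
    discourage features from carrying absolute positional information.
    """
    # Center both so the probe needs no bias term.
    X = features - features.mean(axis=0)
    Y = positions - positions.mean(axis=0)
    # Least-squares linear probe: W = argmin ||X W - Y||^2
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    residual = Y - X @ W
    ss_res = float((residual ** 2).sum())
    ss_tot = float((Y ** 2).sum())
    return 1.0 - ss_res / ss_tot

# Toy check: features that literally contain the coordinates are fully
# decodable; independent random features are not.
rng = np.random.default_rng(0)
pos = rng.normal(size=(200, 2))
leaky_feats = np.hstack([pos, rng.normal(size=(200, 6))])  # absolute info present
clean_feats = rng.normal(size=(200, 8))                    # no positional info
print(position_decodability(leaky_feats, pos))  # close to 1.0
print(position_decodability(clean_feats, pos))  # close to 0.0
```

In practice such a penalty would be differentiated through the feature extractor during training (e.g., with an adversarially trained probe); the closed-form probe above is only meant to make the "features contain absolute positional information" criterion concrete.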