Imitation Learning Based on Disentangled Representation Learning of Behavioral Characteristics

Published: 08 Aug 2025, Last Modified: 16 Sept 2025
Venue: CoRL 2025 Poster
License: CC BY 4.0
Keywords: Imitation learning, Disentangled representation learning
TL;DR: We propose a motion generation model that adapts in real time to qualitative human instructions during task execution.
Abstract: In the field of robot learning, it is becoming possible to control robot actions through language instructions. However, adjusting actions based on human instructions remains difficult, because such instructions are often qualitative and do not always have a one-to-one correspondence with behaviors. In this paper, we propose a motion generation model that can adjust actions in response to qualitative human instructions during task execution. The core of the proposed method is a learning architecture that maps qualitative human instructions to actions. Specifically, each demonstration is divided into short action sequences, and labels reflecting qualitative human impressions are assigned to these sequences, enabling the model to link qualitative human instructions to robot actions. In evaluation experiments, we verified the effectiveness of the method on two tasks: a pick-and-place task and a wiping task. Experimental results showed that the proposed method can generate motions in response to qualitative human instructions during task execution, whereas the conventional method generates trajectories all at once, making it impossible to adjust motions mid-execution.
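To make the core idea concrete, below is a minimal sketch, not the authors' implementation, of the pipeline the abstract describes: a demonstration is segmented into short action windows, each window is given a qualitative label, and a model learns a latent representation in which label-related ("style") factors are disentangled from the rest via an auxiliary label loss. The exact architecture is not specified in the abstract, so all names here (segment_demo, DisentangledAE, task_dim, style_dim, the label set) are hypothetical.

```python
# Hedged sketch of "segment demonstrations, label windows, learn a
# disentangled latent". All architecture choices are assumptions, not
# the paper's actual model.

import torch
import torch.nn as nn
import torch.nn.functional as F


def segment_demo(traj: torch.Tensor, window: int) -> torch.Tensor:
    """Split a (T, D) trajectory into non-overlapping (window, D) chunks."""
    n = traj.shape[0] // window
    return traj[: n * window].reshape(n, window, traj.shape[1])


class DisentangledAE(nn.Module):
    """Autoencoder whose latent is split into a task part and a style part;
    the style part is supervised with the qualitative label so that
    instruction-relevant factors are disentangled from the rest."""

    def __init__(self, obs_dim, hidden=64, task_dim=8, style_dim=4, n_labels=3):
        super().__init__()
        self.encoder = nn.GRU(obs_dim, hidden, batch_first=True)
        self.to_task = nn.Linear(hidden, task_dim)
        self.to_style = nn.Linear(hidden, style_dim)
        self.label_head = nn.Linear(style_dim, n_labels)
        self.decoder = nn.GRU(task_dim + style_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, obs_dim)

    def forward(self, x):                        # x: (B, W, obs_dim)
        _, h = self.encoder(x)                   # h: (1, B, hidden)
        h = h.squeeze(0)
        z_task, z_style = self.to_task(h), self.to_style(h)
        z = torch.cat([z_task, z_style], dim=-1)
        z_seq = z.unsqueeze(1).expand(-1, x.shape[1], -1)
        dec, _ = self.decoder(z_seq)
        return self.out(dec), self.label_head(z_style)


# One training step: window reconstruction + label supervision on the
# style latent. The demonstration and labels are random stand-ins.
model = DisentangledAE(obs_dim=7)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
demo = torch.randn(200, 7)                       # stand-in (T, D) demonstration
windows = segment_demo(demo, window=20)          # (10, 20, 7) short sequences
labels = torch.randint(0, 3, (windows.shape[0],))  # e.g. 0=slow, 1=normal, 2=fast
recon, logits = model(windows)
loss = F.mse_loss(recon, windows) + F.cross_entropy(logits, labels)
opt.zero_grad(); loss.backward(); opt.step()
```

At execution time, a model trained this way could keep the task latent fixed and swap the style latent to the one associated with the instructed label, which is one plausible way to adjust motion mid-task without regenerating the whole trajectory.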
Supplementary Material: zip
Spotlight: mp4
Submission Number: 1026