The effects of robot learning on human teachers for learning from demonstration

Erin Hedlund-Botti, Julianna Schalkwyk, Michael J. Johnson, Matthew Gombolay

Published: 2025 · Last Modified: 18 Mar 2026 · Auton. Robots 2025 · CC BY-SA 4.0
Abstract: Learning from Demonstration (LfD) algorithms seek to enable end-users to teach robots new skills through human demonstration of a task. Previous studies have analyzed how robot failure affects human trust, but not in the context of the human teaching the robot. In this paper, we investigate how human teachers react to robot failure in an LfD setting. We conduct a study in which participants teach a robot how to complete three tasks, using one of three instruction methods, while the robot is pre-programmed to either succeed or fail at the task. We find that when the robot fails, people trust the robot less (\(p<.001\)) and themselves less (\(p=.003\)), and they believe that others will trust them less (\(p<.001\)). Human teachers also have a lower impression of the robot and themselves (\(p<.001\)) and find the task more difficult when the robot fails (\(p<.001\)). Motion capture was found to be a less difficult instruction method than teleoperation (\(p=.012\)), while kinesthetic teaching gave teachers the lowest impression of themselves compared to teleoperation (\(p=.035\)) and motion capture (\(p=.001\)). Importantly, a mediation analysis showed that people's trust in themselves is heavily mediated by what they think that others—including the robot—think of them (\(p<.001\)). These results provide valuable insights for improving the human–robot relationship in LfD.