Exploring the Impact of Fault Justification in Human-Robot Trust

Published: 01 Jan 2018, Last Modified: 06 Nov 2023, AAMAS 2018
Abstract: With the growing interest in human-robot collaboration, the development of robotic partners we can trust must consider the impact of error situations. In particular, human-robot trust has been identified as mainly affected by the robot's performance; as such, we believe that in a collaborative setting, trust towards a robotic partner may be compromised after a faulty behaviour. This paper contributes a user study exploring how a technical failure of an autonomous social robot affects trust during a collaborative scenario in which participants play the Tangram game in turns with the robot. More precisely, in a 2x2 (plus control) experiment we investigated two recovery strategies, justifying the failure or ignoring it, after two different consequences of the failure: compromising the collaborative task or not. Overall, the results indicate that a faulty robot is perceived as significantly less trustworthy. However, the recovery strategy of justifying the failure mitigated the negative impact of the failure when the consequence was less severe. We also found an interaction effect between the two factors. These findings raise new implications for the development of reliable and trustworthy robots in human-robot collaboration.