The challenges of first- and second-order belief reasoning in explainable human-robot interaction

Published: 09 May 2023, Last Modified: 07 Jun 2023
ICRA2023 XRo Oral
Keywords: Mind attribution, Explainability, Folk psychology, Social Cognition, False-belief task
TL;DR: Our recent findings suggest that people attribute second-order beliefs to social robots, and we argue that the RL models that dominate current approaches to XAR are incapable of appropriately addressing second-order belief attribution errors.
Abstract: Current approaches to implementing eXplainable Autonomous Robots (XAR) are predominantly based on Reinforcement Learning (RL), which is suitable for modelling and correcting people’s first-order mental state attributions to robots. Our recent findings show that people also rely on attributing second-order beliefs (i.e., beliefs about beliefs) to robots to interpret their behavior. However, robots arguably form and act primarily on first-order beliefs and desires (about things in the environment) and do not have a functional “theory of mind”. Moreover, RL models may be incapable of appropriately addressing second-order belief attribution errors. This paper aims to open a discussion of what our recent findings on second-order mental state attribution to robots imply for current approaches to XAR.
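
A minimal, purely illustrative Python sketch (not from the paper) of the structural difference the abstract draws on: a first-order belief is about the environment, while a second-order belief takes another agent's belief as its content. The Belief class and the example facts are hypothetical.

    # Illustrative sketch: first-order vs. second-order beliefs.
    from dataclasses import dataclass
    from typing import Any

    @dataclass
    class Belief:
        holder: str   # the agent holding the belief
        content: Any  # a world fact, or another Belief

    # First-order: the robot believes the cup is on the table.
    first_order = Belief(holder="robot", content=("cup", "on", "table"))

    # Second-order: the human believes that the robot believes the cup
    # is on the table. The content is itself a Belief, the recursive
    # structure an explanation model would need in order to detect and
    # correct second-order attribution errors.
    second_order = Belief(holder="human", content=first_order)

    print(second_order)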