Bayesian Theory of Mind for False Belief Understanding in Human-Robot Interaction

Mehdi Hellou, Samuele Vinanzi, Angelo Cangelosi

Published: 13 Nov 2023, Last Modified: 06 Nov 2025. 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). License: CC BY-SA 4.0
Abstract: To achieve widespread adoption of social robots in the near future, we need to design intelligent systems that can autonomously understand our beliefs and preferences. This will lay the foundation for a new generation of robots able to navigate the complexities of human societies. To reach this goal, we look into Theory of Mind (ToM): the cognitive ability to understand other agents’ mental states. In this paper, we rely on a probabilistic ToM model to detect when a human holds false beliefs, with the purpose of driving the decision-making process of a collaborative robot. In particular, we recreate an established psychology experiment involving the search for a toy that can be secretly displaced by a malicious individual. The results obtained in simulated experiments show that the agent is able to predict human mental states and detect when false beliefs have arisen. We then explored the set-up in a real-world human interaction to assess the feasibility of such an experiment with a humanoid social robot.
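The false-belief scenario described in the abstract can be illustrated with a minimal Bayesian sketch. The following is an illustrative toy model, not the authors' implementation: all names (locations, events, the 0.8 confidence threshold, and the 0.95 observation likelihood) are assumptions. The robot tracks a probability distribution over what the human believes about the toy's location, updating it only on events the human actually witnessed; a false belief is flagged when the human's most probable believed location diverges from the true one.

```python
# Illustrative sketch of Bayesian false-belief detection in a
# Sally-Anne-style toy-displacement scenario. All parameters and
# event names are assumptions for demonstration purposes only.

LOCATIONS = ["box_A", "box_B"]

def normalize(dist):
    total = sum(dist.values())
    return {k: v / total for k, v in dist.items()}

def update_belief(belief, event, witnessed):
    """Bayesian update of the human's belief over toy locations.
    Only events the human witnessed change their belief."""
    if not witnessed:
        return belief  # unobserved events leave the belief unchanged
    kind, loc = event
    if kind in ("place", "move"):
        # Witnessing the toy at a location makes it near-certain
        # (0.95 is an assumed observation reliability).
        likelihood = {l: (0.95 if l == loc else 0.05) for l in LOCATIONS}
        posterior = {l: belief[l] * likelihood[l] for l in LOCATIONS}
        return normalize(posterior)
    return belief

# World events as (event, human_witnessed) pairs:
events = [(("place", "box_A"), True),   # toy placed while the human watches
          (("move", "box_B"), False)]   # toy secretly displaced

human_belief = normalize({l: 1.0 for l in LOCATIONS})  # uniform prior
true_location = None
for event, witnessed in events:
    true_location = event[1]
    human_belief = update_belief(human_belief, event, witnessed)

believed = max(human_belief, key=human_belief.get)
false_belief = believed != true_location and human_belief[believed] > 0.8
print(believed, true_location, false_belief)  # box_A box_B True
```

After the witnessed placement, the human's belief concentrates on `box_A` (0.95); the secret move to `box_B` is never observed, so the belief stays put, and the robot detects the resulting false belief.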