Human-AI Collaboration via Trust Factors: A Collaborative Game Use Case

Andrea Fanti, Francesco Frattolillo, Rosapia Laudati, Fabio Patrizi, Luca Iocchi

Published: 22 Sept 2025, Last Modified: 27 Jan 2026. License: CC BY-SA 4.0
Abstract: Trust is a well-established and extensively studied concept in the human and social sciences, yet its highly subjective nature, its dependence on a wide range of influencing factors, and the absence of a single, unified definition pose significant challenges when applying it to Human-AI teams. Nevertheless, being able to assess when to trust an agent is crucial, especially when deciding whether to delegate a task or to collaborate as a team on more intricate ones. Existing work has studied the integration of specific computable factors known to influence trust. In this work, we study how these methods transfer to a more concrete Human-AI collaborative puzzle game, in which the AI agent employs a trust-aware Reinforcement Learning algorithm while the Human agent's behavior is emulated by a Large Language Model. Our study considers the realistic condition of heterogeneous agents, each skilled in different areas, and we demonstrate that trust-awareness, in particular the ability to make justified decisions about task delegation, is essential for the successful operation of the team. This scenario serves as a testbed for evaluating methods of trust integration and their impact on Human-AI collaboration.
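To make the delegation idea concrete, the following is a minimal sketch, not the authors' implementation: a hypothetical trust-aware delegation rule for a team of heterogeneous agents. Each agent maintains a per-skill trust estimate of its partner, updated from observed task outcomes, and delegates a task when the partner's estimated success rate exceeds its own. All names (`TrustModel`, `delegate`, the skill labels) are illustrative assumptions.

```python
from collections import defaultdict

class TrustModel:
    """Running estimate of a partner's success rate per skill area.

    Starts from an optimistic prior (1 success out of 2 attempts)
    so that an untested partner is neither fully trusted nor ruled out.
    """
    def __init__(self):
        self.successes = defaultdict(lambda: 1)
        self.attempts = defaultdict(lambda: 2)

    def update(self, skill, success):
        # Incorporate one observed task outcome for this skill.
        self.attempts[skill] += 1
        if success:
            self.successes[skill] += 1

    def trust(self, skill):
        return self.successes[skill] / self.attempts[skill]

def delegate(task_skill, own_success_prob, partner_trust):
    """Delegate iff the partner's trusted success rate beats our own."""
    return partner_trust.trust(task_skill) > own_success_prob

# Hypothetical history: the partner repeatedly solved 'logic' tasks
# but failed 'spatial' ones, so trust diverges across skill areas.
tm = TrustModel()
for _ in range(8):
    tm.update("logic", True)
for _ in range(8):
    tm.update("spatial", False)

print(delegate("logic", 0.5, tm))    # → True  (trust 0.9 > 0.5)
print(delegate("spatial", 0.5, tm))  # → False (trust 0.1 < 0.5)
```

In a trust-aware Reinforcement Learning setting, such a per-skill estimate would be one input to the agent's policy rather than a hard rule; the sketch only illustrates why heterogeneous skills make justified delegation decisions essential.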