Abstract: Research on trust repair in human-agent cooperation is limited, even though effective repair strategies can mitigate trust violations and prevent lasting distrust. We conducted a study with 45 participants who interacted with a companion agent through mixed-reality interfaces while playing a language-learning quiz game. Our results showed that displaying the agent's intentions when it explained its mistakes helped repair trust, and that doing so also had positive effects on affect, user experience, and willingness to cooperate. These findings suggest that including agent intentions in mistake explanations is an effective way to repair trust in human-agent interactions.
External IDs: dblp:conf/vr/PeiHWYXS23