Abstract: Behavioral trust is the act of entrusting resources to a trustee, expecting a high return while accepting the risk of betrayal. Previous studies have demonstrated biases in behavioral trust: trust is not necessarily rational with respect to expected values, and it is inconsistent across human and non-human counterparts. On the one hand, people appear averse to intentional betrayal (Betrayal Aversion); on the other hand, people avoid depending on algorithms rather than on humans (Algorithm Aversion). Yet these aversions have not been comprehensively investigated. The present study conducted a well-controlled behavioral game experiment that systematically explored entrusting (risk-taking) behavior when facing a counterpart (human vs. AI) or a natural risk, and further explored the effect of the counterpart's computational ability (intentional for humans, algorithmic for AI). Participants (n=284) played a trust game with (a) a human making intentional decisions, (b) an AI making algorithmic decisions, (c) a human making random decisions, or (d) an AI making random decisions with a known probability of return, as well as a lottery task structurally equivalent to the trust game. Entrusting decisions at different levels of probability of return were measured to compute the Minimum Acceptable Probability (MAP) as a quantitative measure of trust. The results showed that participants were more trusting of a counterpart than of a lottery machine, yet this tendency did not differ between counterparts (i.e., human vs. AI) or between higher and lower computational ability (i.e., intentional/algorithmic vs. random decisions). The results suggest an overtrust bias, rather than an aversive bias, toward counterparts with agency, whether AI or human, regardless of their intentions.
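The abstract's MAP measure can be illustrated with a minimal sketch: for each stated probability of return, a participant chooses whether to entrust, and MAP is the lowest probability at which they still entrust. The function name, the probability grid, and the decision rule below are illustrative assumptions, not the authors' actual procedure or code.

```python
# Hypothetical sketch of the Minimum Acceptable Probability (MAP) measure:
# MAP is the lowest stated probability of return at which a participant
# still chooses to entrust. A lower MAP indicates greater trust
# (willingness to entrust at worse odds).

def minimum_acceptable_probability(decisions):
    """decisions: dict mapping a probability of return (0..1) to a bool
    (True = the participant entrusted at that probability).
    Returns the lowest probability at which the participant entrusted,
    or None if they never entrusted."""
    entrusted = [p for p, chose in decisions.items() if chose]
    return min(entrusted) if entrusted else None

# Example: a participant who entrusts only when the return probability
# is at least 0.6, measured on a 0.1..0.9 grid (an assumed grid).
decisions = {p / 10: (p / 10 >= 0.6) for p in range(1, 10)}
map_value = minimum_acceptable_probability(decisions)  # 0.6
```

Under this sketch, comparing mean MAP across the four counterpart conditions and the lottery task would operationalize the trust comparisons reported in the abstract.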
External IDs: dblp:conf/ro-man/TakagiLKT24