Means to an End: Assessing the Validity of Trust Scales in Artificial Intelligence Vignettes

Published: 01 Jan 2025 · Last Modified: 01 Aug 2025 · HCI (54) 2025 · CC BY-SA 4.0
Abstract: The development of artificial intelligence (AI) is leading to an increasing number of human-AI interactions. Because a person's trust in a technology can influence reliance on that technology, trust may affect the outcomes of human-AI interaction. The goal of our study was to assess the validity of three existing scales for measuring trust in automation when administered in AI contexts: the Trust between People and Automation scale (TPA), the Multi-Dimensional Measure of Trust (MDMT), and the Trust in Automation (TiA) scale. We recruited 292 U.S. federal employees to participate in an online experiment with three independent variables: trust scale, accuracy information, and vignette. Participants responded to their randomly assigned trust scale after each of up to four vignettes describing hypothetical or personal use of an AI system. Accuracy information was manipulated in the hypothetical-use vignettes. We found that the three scales' relationships with trust predictors and outcomes were mostly in line with trust theory. Our findings suggest using the two-item TiA when a general assessment of trust is needed and the MDMT when a more detailed trust assessment is needed. Overall, further research is needed to continue evaluating trust scale validity and to better understand how to measure trust in human-AI interaction.