Trust Calibration for Joint Human/AI Decision-Making in Dynamic and Uncertain Contexts

Published: 2025 · Last Modified: 26 Jan 2026 · HCI 2025 · CC BY-SA 4.0
Abstract: Joint human/AI decision-making combines AI’s ability to rapidly process vast amounts of data with human contextual understanding, adaptability, and accountability. To achieve optimal performance, the human should hold appropriately calibrated trust in the system, such that the trust placed in the system matches the system’s actual trustworthiness. Past work has explored several techniques to improve trust calibration, including transparency, explainability, and uncertainty visualization. Achieving trust calibration becomes even more difficult when trustworthiness is a moving target: in dynamic situations, the trustworthiness of AI systems can fluctuate dramatically, demanding rapid updates to trust behaviors to remain calibrated. Accurate confidence or uncertainty measures have been proposed to help humans quickly calibrate their trust in AI systems; however, this presumes both that accurate confidence measures exist and that humans can use them effectively. In this position paper, we join recent calls for research to improve confidence measures in AI systems, and we further emphasize the need to track and convey multidimensional confidence measures in the context of large, complex system-of-systems architectures. We discuss how these measures help establish calibrated trust in AI systems even when the underlying information is uncertain. Further, we highlight opportunities to improve the design of user interfaces that convey AI confidence to human users, and to better prepare humans to weight AI inputs appropriately against other sources of information, including their own judgment, when making decisions under uncertainty in dynamic, complex environments.
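To make the notion of calibration concrete, a common way to quantify how well a model’s stated confidence matches its observed accuracy is the expected calibration error (ECE). The sketch below is a minimal illustration of that idea, not a method from the paper; the binning scheme, variable names, and example figures are our own assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected calibration error: the bin-weighted gap between a
    model's stated confidence and its observed accuracy.

    confidences: array of predicted confidences in [0, 1]
    correct:     array of 0/1 flags, 1 if the prediction was right
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        acc = correct[in_bin].mean()       # observed accuracy in this bin
        conf = confidences[in_bin].mean()  # average stated confidence
        ece += in_bin.mean() * abs(acc - conf)
    return ece

# Hypothetical example: a model that reports 0.9 confidence but is
# right only 60% of the time is poorly calibrated, inviting over-trust.
rng = np.random.default_rng(0)
conf = np.full(1000, 0.9)
hits = rng.random(1000) < 0.6
print(f"ECE = {expected_calibration_error(conf, hits):.3f}")  # ~0.30
```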
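The abstract also argues for preparing humans to weight AI inputs against other information sources, including their own judgment. One standard formalization of such weighting, offered here as an illustrative assumption rather than the authors’ proposal, is inverse-variance (precision-weighted) fusion: each estimate is weighted by its precision, so a low-confidence AI input is discounted automatically.

```python
def fuse_estimates(estimates, variances):
    """Inverse-variance fusion of independent estimates.
    Lower variance -> higher weight, so an uncertain AI input is
    automatically discounted against more certain sources.
    Returns (fused_mean, fused_variance)."""
    precisions = [1.0 / v for v in variances]
    total = sum(precisions)
    mean = sum(p * e for p, e in zip(precisions, estimates)) / total
    return mean, 1.0 / total

# Hypothetical example: an AI estimate with high stated uncertainty
# contributes less than a more confident human judgment.
ai_est, ai_var = 10.0, 4.0       # AI estimate, high variance
human_est, human_var = 6.0, 1.0  # human estimate, low variance
fused, fused_var = fuse_estimates([ai_est, human_est], [ai_var, human_var])
print(f"fused = {fused:.2f}, variance = {fused_var:.2f}")  # fused = 6.80
```

The fused estimate (6.80) sits much closer to the human’s judgment than to the AI’s, which is the behavior well-calibrated confidence measures are meant to support.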