Trust and AI in IT Management Decision-Making: A Systematic Review and Framework for Balancing Autonomy with Human Oversight
Keywords: Artificial Intelligence, AI trust, IT management, decision-making, human oversight, organizational governance, explainability, autonomy, trust calibration, human–AI interaction, accountability, AI governance frameworks, systematic review, trustworthiness, AI adoption
TL;DR: This paper systematically reviews trust in AI for IT management and proposes a framework to balance AI autonomy with human oversight.
Abstract: Artificial intelligence (AI) is increasingly embedded in IT management decision-making, from budgeting and workforce allocation to vendor selection and cybersecurity oversight. Yet trust remains a central barrier to adoption: IT managers hesitate to rely on AI tools when transparency, oversight, and governance are unclear. This study conducts a systematic review of 21 peer-reviewed studies, industry reports, and regulatory frameworks (2019–2025) to examine how trust in AI is shaped within IT management contexts. We develop a taxonomy of trust factors across technical, organizational, and human–AI interaction domains, and synthesize oversight mechanisms ranging from human-in-the-loop designs to governance boards and regulatory compliance. Building on these insights, we propose the AI Trust–Oversight Balance Framework, a 2×2 matrix that aligns AI autonomy with organizational trust maturity and offers guidance for oversight strategies. Findings highlight the dynamic, multi-level nature of trust: it requires continuous calibration, organizational embedding, and regulatory reinforcement. We conclude by identifying key research gaps (particularly IT-specific empirical studies, longitudinal analyses, cross-cultural comparisons, and standardized measurement tools) and outline a forward-looking agenda to advance trustworthy AI adoption in IT management.
Submission Number: 235