Abstract: As AI systems increasingly transform our society, it is critical to support relevant stakeholders in developing appropriate understanding of and trust in these systems. My dissertation research explores how AI transparency and explainability can help achieve this goal. I begin with human-centered evaluations of current AI explanation techniques, focusing on their usefulness for people in understanding model behavior and calibrating trust. Next, through an in-depth case study of a real-world AI application, I identify the explainability needs of real AI end-users and the factors that influence their trust. Finally, I describe two studies, one ongoing and one proposed, that investigate transparency and explainability approaches for Generative AI, such as large language models, to enable safe and successful interactions with this new and powerful technology. My dissertation contributes to both the HCI and AI fields by elucidating mechanisms and factors of trust in AI and detailing design considerations for AI transparency and explainability approaches.