Human-AI Interactive and Continuous Sensemaking: A Case Study of Image Classification using Scribble Attention Maps

CHI Extended Abstracts 2021 (modified: 04 Nov 2022)
Abstract: Advances in Artificial Intelligence (AI), especially the striking achievements of Deep Learning (DL) in recent years, have shown that AI/DL models can capture remarkable reasoning ability on the tasks they solve. However, human understanding of what knowledge deep neural networks actually capture remains elementary, which undermines humans' trust in the decisions made by AI systems. Explainable AI (XAI) is an active topic in both the AI and HCI communities, aiming to open up the black box and elucidate the reasoning processes of AI algorithms in a way that makes sense to humans. However, XAI covers only half of human-AI interaction; research on the other half, humans' feedback on AI explanations and the AI making sense of that feedback, is generally lacking. Human cognition is likewise a black box to AI, and effective human-AI interaction requires unveiling both black boxes to each other for mutual sensemaking. The main contribution of this paper is a conceptual framework for supporting effective human-AI interaction, referred to as Human-AI Interactive and Continuous Sensemaking (HAICS). We further implement this framework in an image classification application using deep Convolutional Neural Network (CNN) classifiers: a browser-based tool displays the network's attention maps to the human for explainability and collects human feedback in the form of scribble annotations overlaid onto the maps. Experimental results on a real-world dataset show a significant improvement in classification accuracy (the AI performance) with the HAICS framework.
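The abstract does not specify how the network attention maps are computed. As a point of reference, below is a minimal sketch assuming a Grad-CAM-style map over a standard torchvision ResNet-50, which is one common way to derive CNN attention maps; the hooked layer (layer4) and the helper name attention_map are illustrative assumptions, not the paper's actual method or API.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical sketch: a Grad-CAM-style attention map for one image.
# The paper may use a different attention mechanism; this is one common choice.

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0].detach()

# Hook the last convolutional block of the backbone (assumed choice).
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def attention_map(image, target_class):
    """image: (1, 3, H, W) normalized tensor; returns an (H, W) map in [0, 1]."""
    logits = model(image)
    model.zero_grad()
    logits[0, target_class].backward()
    # Weight each feature channel by its average gradient, then sum and rectify.
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
    # Upsample to the input resolution and normalize for display as an overlay.
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False).squeeze()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

A scribble annotation collected in the browser could then be rasterized into a mask of the same resolution and compared against this map, for example as an auxiliary alignment term during fine-tuning; that wiring is an assumption, since the abstract describes the feedback channel but not how the feedback updates the model.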