Chapter One - Explainable artificial intelligence for enhanced living environments: A study on user perspective

Published: 01 Jan 2024, Last Modified: 08 Sept 2025. Venue: Adv. Comput. 2024. License: CC BY-SA 4.0.
Abstract: Enhanced Living Environment (ELE) applications utilize the expressive power of Artificial Intelligence (AI) to provide improved performance. However, despite their impressive performance, state-of-the-art AI techniques come at the cost of explainability: users cannot understand the rationale behind the decisions such systems make. This opaque nature causes a lack of trust, leading to lower adoption of ELE systems, especially in mission-critical domains such as healthcare. The AI community has proposed various eXplainable AI (XAI) techniques to rationalize AI decisions; however, the user perspective on XAI remains underexplored. This chapter examines users' perception of and requirements for XAI, as well as their attitude toward adopting explainable ELE systems. A user study with 326 participants revealed that most perceive XAI as essential and expect explanations to be easy to understand, faithful, and interactive. The respondents prefer concept-based explanations over feature attributions. Hence, we develop a novel approach that generates multimodal explanations, combining linguistic and visual explanations, to rationalize the decisions made by Human Activity Recognition (HAR) systems. Finally, we conduct a validation survey to evaluate the impact of introducing explanations into a HAR system. The results highlight that users' trust grows with explainability, leading to higher adoption of ELE systems.
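To make the contrast between feature attributions and concept-based linguistic explanations concrete, here is a minimal Python sketch. It is not the chapter's actual pipeline: the classifier, feature names, concept mapping, and sentence template are all illustrative assumptions, and the visual half of the chapter's multimodal explanations is omitted.

```python
# Illustrative sketch only: the classifier, feature names, and concept
# mapping below are hypothetical stand-ins for a real HAR system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy accelerometer-derived features for a HAR classifier (hypothetical).
FEATURES = ["mean_accel_x", "mean_accel_y", "std_accel_z", "step_frequency"]
# Mapping from low-level features to user-facing concepts.
CONCEPTS = {
    "mean_accel_x": "forward body movement",
    "mean_accel_y": "sideways body movement",
    "std_accel_z": "vertical movement intensity",
    "step_frequency": "walking pace",
}
ACTIVITIES = ["sitting", "walking", "running"]

# Synthetic data stands in for real sensor recordings.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, len(FEATURES)))
y = rng.integers(0, len(ACTIVITIES), size=300)

clf = RandomForestClassifier(random_state=0).fit(X, y)

def explain(sample: np.ndarray) -> str:
    """Return a template-based linguistic explanation for one prediction."""
    pred = ACTIVITIES[clf.predict(sample.reshape(1, -1))[0]]
    # Feature attribution via global importances (a simple stand-in for
    # per-instance attribution methods such as SHAP or LIME).
    top = np.argsort(clf.feature_importances_)[::-1][:2]
    reasons = " and ".join(CONCEPTS[FEATURES[i]] for i in top)
    return f"Predicted activity: {pred}, mainly because of the {reasons}."

print(explain(X[0]))
```

The concept mapping is the step that turns a raw attribution (e.g., "std_accel_z mattered most") into the kind of human-readable rationale the survey respondents said they preferred over plain feature attributions.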