Evidence Based Pipeline for Explaining Artificial Intelligence Algorithms with Interactions

Published: 01 Jan 2022, Last Modified: 12 Feb 2025 · DSAA 2022 · CC BY-SA 4.0
Abstract: Artificial intelligence (AI) enables machines to learn from human experience, adapt to new inputs, and perform intelligent tasks without human intervention. AI is progressing rapidly and is transforming the way businesses operate, from process automation to cognitive augmentation of tasks and intelligent process/data analytics. However, the main challenge for users of AI systems is to comprehend and trust the results of AI algorithms and methods. To address this challenge, we first survey recent techniques in the area of eXplainable Artificial Intelligence (XAI). We then introduce a novel XAI process that facilitates producing explainable models while maintaining a high level of learning performance. We present an interactive, evidence-based approach that helps users comprehend and trust the results and outputs generated by AI-enabled algorithms, and we develop a digital dashboard that facilitates interacting with the algorithm. Lastly, we discuss how the proposed XAI method can significantly improve data scientists' confidence in understanding the results of AI-enabled algorithms, with an application in the banking domain for analyzing customer transactions.
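The abstract does not detail the paper's XAI process, but one common evidence-based way to explain a black-box model's output is model-agnostic permutation feature importance. The sketch below is purely illustrative and is not the paper's method: it trains a minimal logistic-regression classifier on synthetic "transaction-like" features (all names and data are hypothetical) and measures how much held accuracy drops when each feature is shuffled, yielding per-feature evidence a user could inspect on a dashboard.

```python
import numpy as np

# Hypothetical illustration, NOT the paper's algorithm: permutation feature
# importance as a model-agnostic, evidence-style explanation.

rng = np.random.default_rng(0)

# Synthetic data: 3 features; only the first two actually drive the label.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 2.0 * X[:, 1] > 0).astype(float)

# Train a minimal logistic-regression model by gradient descent.
w = np.zeros(3)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    w -= 0.5 * (X.T @ (p - y) / len(y))      # gradient step on weights
    b -= 0.5 * np.mean(p - y)                # gradient step on bias

def accuracy(X_eval):
    p = 1.0 / (1.0 + np.exp(-(X_eval @ w + b)))
    return np.mean((p > 0.5) == y)

baseline = accuracy(X)

# Permutation importance: accuracy drop when one feature column is shuffled.
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(baseline - accuracy(Xp))

print([round(v, 3) for v in importance])
```

Here the two informative features should show a clear accuracy drop while the irrelevant third feature's importance stays near zero; such per-feature scores are the kind of quantitative evidence an interactive explanation dashboard can surface.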