Evaluating Hierarchical Medical Workflows using Feature Importance

Published: 01 Jan 2021 · Last Modified: 13 May 2025 · CBMS 2021 · CC BY-SA 4.0
Abstract: The applicability and utility of Artificial Intelligence (AI) based solutions have been demonstrated widely in the healthcare domain through the automated analysis of medical information. However, adoption of AI-based healthcare systems is inhibited by their complexity. Moreover, the hierarchical nature of medical settings adds a further layer of complexity to understanding how a Machine Learning (ML) model behaves when subjected to varying workflows. Variation in a model's performance, and in the contribution of individual features, needs to be effectively quantified so that medical practitioners can understand and validate the model's operation if widespread adoption is to be enabled. In this paper, a hierarchical medical workflow for understanding the operation of ML in a healthcare setting is proposed, and its utility is demonstrated in the context of heart disease classification. Explainable Artificial Intelligence (XAI) is incorporated in the form of Feature Importance (FI) scores, which are correlated with an ML model's performance metrics (Accuracy, F1-score). This provides a multi-stakeholder perspective aligned with the hierarchy experienced in a real-world medical setting. The paper contributes a methodology for building an enhanced understanding of the diverse hierarchical healthcare settings that would benefit from the adoption of AI-based systems.
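For readers who want a concrete picture of the kind of analysis the abstract describes, the following Python sketch illustrates one possible way FI scores could be related to Accuracy and F1 across workflow stages. It is not the authors' pipeline: the synthetic data, the stage names ("triage", "specialist", "consultant"), the nested feature splits, the random-forest classifier, and the use of permutation importance are all illustrative assumptions standing in for a real heart disease dataset and the paper's actual hierarchical workflow.

```python
# Minimal sketch (not the authors' method): relate permutation-based
# Feature Importance (FI) scores to Accuracy / F1 across nested feature
# subsets that stand in for stages of a hierarchical medical workflow.
# make_classification is a placeholder for a real heart disease dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=12, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Hypothetical workflow stages: each stage exposes more features to the model,
# mimicking information that accumulates as a case moves up the clinical hierarchy.
stages = {"triage": slice(0, 4), "specialist": slice(0, 8), "consultant": slice(0, 12)}

for name, cols in stages.items():
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr[:, cols], y_tr)
    pred = clf.predict(X_te[:, cols])
    acc = accuracy_score(y_te, pred)
    f1 = f1_score(y_te, pred)
    # Permutation importance quantifies each feature's contribution at this stage,
    # which can then be inspected alongside the stage's performance metrics.
    fi = permutation_importance(clf, X_te[:, cols], y_te,
                                n_repeats=10, random_state=0).importances_mean
    print(f"{name:10s} acc={acc:.3f} f1={f1:.3f} top-3 FI features={fi.argsort()[::-1][:3]}")
```

Printing per-stage metrics next to the most important features is one simple way to surface, for each stakeholder level, both how well the model performs and which inputs drive its decisions.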