A Rule-based Evaluation Method of Local Explainers for Predictive Process Monitoring

Published: 01 Jan 2023, Last Modified: 15 May 2025, ICDM (Workshops) 2023, CC BY-SA 4.0
Abstract: One of the main challenges in using machine learning (ML) models is ensuring the interpretability of their predictions. Addressing this challenge becomes increasingly important as ML models are adopted in business scenarios and applications. One approach to explaining predictions is to apply intrinsically interpretable methods, such as rule-based explainers. However, when crucial interpretability criteria are not met, an explanation rule can become too complex for users to understand. Consequently, a complex explanation might leave the user doubtful about the predictions an ML model generates for unseen data. An interpretable explanation should therefore satisfy certain criteria before it is presented to human subjects. Nonetheless, few research studies have addressed the quantitative evaluation of rule-based local explanations in the context of predictive process monitoring use cases. In this paper, we propose a set of measurements that quantify the interpretability and completeness of explanation rules. These measurements are based on characteristics and criteria that a rule should satisfy to be considered interpretable. The approach is evaluated on real-life event logs to compare the explanations of two local rule-based explainers, LORE and Anchor. Our experiments show that LORE generates more concise and more interpretable rules than Anchor. The proposed approach can be extended to include further local rule-based explainers for evaluation.
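
As a minimal sketch of the kind of measurement the abstract describes, the snippet below represents a local explanation rule as a list of attribute predicates and scores its conciseness from the number of predicates it contains. The rule encoding, the `conciseness` formula, and the example rules are assumptions made for illustration only; they are not the paper's actual metrics or the output formats of the LORE and Anchor libraries.

```python
# Hypothetical sketch (not the paper's actual measurements): treat a local
# explanation rule as a list of attribute predicates and score its
# conciseness as an inverse function of the number of predicates.
from dataclasses import dataclass
from typing import List


@dataclass
class Predicate:
    attribute: str   # e.g. an event-log attribute such as "activity" or "amount"
    operator: str    # e.g. "<=", ">", "=="
    value: object


def conciseness(rule: List[Predicate], max_len: int = 10) -> float:
    """Score in [0, 1]; shorter rules (fewer predicates) score higher.
    `max_len` is an assumed normalisation constant, not taken from the paper."""
    if not rule:
        return 0.0
    return max(0.0, 1.0 - (len(rule) - 1) / max_len)


# Assumed example rules, loosely in the style of LORE / Anchor outputs.
lore_rule = [Predicate("amount", "<=", 5000),
             Predicate("activity", "==", "W_Call")]
anchor_rule = lore_rule + [Predicate("resource", "==", "R_10"),
                           Predicate("elapsed_time", ">", 3600)]

print(f"LORE-style rule conciseness:   {conciseness(lore_rule):.2f}")
print(f"Anchor-style rule conciseness: {conciseness(anchor_rule):.2f}")
```

A measure of this kind captures only rule length; the paper's full set of measurements also covers other interpretability criteria and completeness, whose definitions are not reproduced here.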