Abstract: Business process prediction uses event logs to train models that forecast future process states. While deep learning improves prediction performance, most prediction models remain black boxes with limited ability to explain why a particular prediction was made. This lack of interpretability undermines prediction reliability and reduces adoption by decision-makers. In this paper, we propose an interpretable Transformer-based process prediction model that delivers predictions together with explanations, supported by quantitative and qualitative evaluation of interpretation reliability. We analyze how events and attributes independently influence next-activity predictions, and how individual events and attributes mutually influence each other, to explain the model's decisions. Experimental results show that our explainable prediction model improves interpretation reliability, achieving high faithfulness and trustworthiness.
External IDs: dblp:journals/kais/WuHL25