Unraveling Road Accident Risk Prediction Models With eXplainable AI

Nishtha Srivastava, Bhavesh N. Gohil, Suprio Ray

Published: 01 Jan 2025, Last Modified: 22 Jan 2026 · IEEE Transactions on Computational Social Systems · CC BY-SA 4.0
Abstract: Road traffic accidents pose a critical global challenge, leading to substantial fatalities and socioeconomic repercussions. Previous studies have examined intrinsic factors (e.g., driver age, vehicle speed) and extrinsic factors (e.g., weather, road conditions) that influence accident severity. However, the opacity of existing AI and machine learning (ML) prediction models hinders interpretability and limits their adoption in real-world applications. To address this gap, an explainable ML framework for accident severity prediction is proposed. Using diverse accident datasets, the predictive performance of multiple ML models is systematically evaluated to ensure adaptability across different geographic contexts. To enhance interpretability, explainable AI (XAI) techniques such as SHapley Additive exPlanations (SHAP), GeoShapley, and local interpretable model-agnostic explanations (LIME) are integrated to analyze key factors influencing accident severity. Additionally, an XAI evaluation framework is developed using rank consistency, rank alignment, importance stability, and fluctuation ratio to quantify interpretability. Our results indicate that refining ML models using top-ranked features from XAI improves prediction accuracy by 17.04%, 3.66%, and 5.06% for the three road accident datasets that we evaluated, namely the U.S., Ethiopia, and U.K. datasets, respectively. We also provide a decision flowchart to assist urban planners in choosing a suitable XAI approach for road accident severity prediction.
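The feature-refinement loop the abstract describes (rank features by an explainability score, keep the top-ranked ones, retrain, and compare accuracy) can be sketched as below. This is a minimal illustration, not the authors' pipeline: permutation importance stands in for SHAP values, the synthetic data stands in for the accident datasets, and the top-k cutoff of 5 is an arbitrary choice.

```python
# Hedged sketch: refine a classifier by retraining on top-ranked features.
# Permutation importance is used here as a stand-in for SHAP rankings;
# make_classification is a placeholder for the real accident data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline model on all features.
base = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
base_acc = base.score(X_te, y_te)

# Rank features by mean importance and keep the top 5 (arbitrary k).
imp = permutation_importance(base, X_te, y_te, n_repeats=5, random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:5]

# Refined model trained only on the top-ranked features.
refined = RandomForestClassifier(random_state=0).fit(X_tr[:, top], y_tr)
refined_acc = refined.score(X_te[:, top], y_te)
print(f"all features: {base_acc:.3f}, top-5 features: {refined_acc:.3f}")
```

In the paper's setting, the ranking would instead come from SHAP, GeoShapley, or LIME, and the gain from refinement is what the reported 17.04%, 3.66%, and 5.06% improvements measure.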