Abstract: With the rapid growth of Artificial Intelligence (AI), Machine Learning (ML) and neural networks are widely applied across many areas. As industrial applications increasingly rely on AI-based systems, users want to understand how and why these systems reach their decisions. Explainable AI (XAI) addresses this need by providing transparency, aiming to deliver interpretations grounded in the available information. To apply these methods in real-world settings, domain experts must understand the reasoning behind a system's outputs in order to support their decision-making. The majority of XAI methods focus on supervised machine learning, yet the need for explainability in unsupervised learning remains strong. Because the output of conventional unsupervised learning methods lacks domain-level detail, it is difficult for industry to act on. Moreover, popular and classic explanation techniques such as LIME and SHAP cannot be directly applied to interpret unsupervised learning, due to the absence of explicit labels or guidance in the datasets. In this work, we introduce a novel clustering method, SATTree (SHAP-Augmented Threshold Tree), which leverages SHAP to obtain a comprehensive view of global feature contributions to the clustering process and then builds a hierarchical decision tree guided by this contribution ranking. The proposed approach characterizes the properties of each cluster through explicit decision rule sets extracted from the threshold tree, and we anticipate that these outcomes will facilitate improvements in real-world applications.
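Concretely, the pipeline the abstract describes might be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the choice of KMeans as the base clusterer, the RandomForest surrogate used to make SHAP attribution possible on pseudo-labels, the top-k feature cutoff, and the tree depth are all assumptions introduced here for illustration.

```python
import numpy as np
import shap
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, feature_names = data.data, data.feature_names

# Step 1: unsupervised clustering produces pseudo-labels.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2 (assumption): fit a surrogate classifier on the pseudo-labels so
# SHAP can attribute cluster membership to input features.
surrogate = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
sv = shap.TreeExplainer(surrogate).shap_values(X)
# Unify shapes across shap versions (list per class vs. single 3-D array).
sv = np.stack(sv, axis=-1) if isinstance(sv, list) else sv

# Step 3: rank features globally by mean |SHAP| over samples and clusters,
# i.e. the global contribution ranking mentioned in the abstract.
ranking = np.argsort(np.abs(sv).mean(axis=(0, 2)))[::-1]
top = ranking[:2]  # keep the most contributing features (cutoff is an assumption)

# Step 4: grow a shallow threshold tree on the top-ranked features; each
# root-to-leaf path yields a human-readable decision rule set for a cluster.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[:, top], labels)
print(export_text(tree, feature_names=[feature_names[i] for i in top]))
```

Under this reading, the printed tree paths (e.g., "petal length <= 2.45 leads to cluster 0") serve as the per-cluster rule sets that make the clustering result inspectable by domain experts.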