Keywords: Explainability, Cautiousness, Unsupervised Classification, Evidential Clustering, Decision Trees, Interpretable AI, XAI
TL;DR: We develop a framework for understanding cautious explanations and their desirable properties, and we propose an interpretable, cautious algorithm for explaining evidential clustering.
Abstract: Unsupervised classification is a core problem in machine learning. Because real-world data are often imperfect, non-additive frameworks such as evidential clustering, grounded in Dempster-Shafer theory, explicitly handle uncertainty and imprecision. These frameworks are particularly well suited to high-stakes decisions, which typically require both interpretability and cautiousness. However, while decision-tree surrogates have enabled transparent explanations for hard clustering, explainability for evidential clustering remains largely unexplored. We address this gap by formalizing representativeness, a utility-based criterion that captures decision-makers' preferences over explanation misassignments, and by introducing evidential mistakeness, a loss function tailored to credal partitions. Building on these foundations, we propose the Iterative Evidential Mistakeness Minimization (IEMM) algorithm, which learns decision-tree explainers for evidential clustering by optimizing representativeness under uncertainty and imprecision. We provide theoretical conditions for effective explanations in both hard and evidential settings and show how utility parameters can be set to reflect different decision attitudes. Experiments on synthetic and real-world datasets demonstrate that IEMM outperforms existing methods, producing representative, preference-aligned explanations of evidential clusterings and supporting cautious, transparent analysis of imperfect data.
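To make the setting concrete, below is a minimal sketch (not the authors' IEMM) of the baseline the abstract contrasts with: a standard decision-tree surrogate fit to a credal partition after collapsing it to hard labels via the pignistic transform. The toy data, the crude mass functions, and all variable names are illustrative assumptions, not part of the paper.

```python
# Hypothetical sketch: decision-tree surrogate on pignistic hard labels.
# IEMM instead optimizes an evidential-mistakeness loss on the credal partition itself.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Toy data: two blobs with an ambiguous region in between.
X = np.vstack([rng.normal(-2.0, 1.0, (100, 2)), rng.normal(2.0, 1.0, (100, 2))])

# Illustrative credal partition over focal sets {c1}, {c2}, {c1, c2}:
# each row is a mass function (rows sum to 1), derived crudely from distances.
d1 = np.linalg.norm(X - np.array([-2.0, -2.0]), axis=1)
d2 = np.linalg.norm(X - np.array([2.0, 2.0]), axis=1)
m_c1 = np.exp(-d1) / (np.exp(-d1) + np.exp(-d2) + 0.1)
m_c2 = np.exp(-d2) / (np.exp(-d1) + np.exp(-d2) + 0.1)
m_both = 1.0 - m_c1 - m_c2  # mass on {c1, c2}: explicit imprecision

# Pignistic transform: split the mass of {c1, c2} equally between c1 and c2,
# then take the argmax to get hard labels for the surrogate tree.
betp = np.column_stack([m_c1 + m_both / 2, m_c2 + m_both / 2])
hard_labels = betp.argmax(axis=1)

# Fit and print a shallow, interpretable surrogate tree.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, hard_labels)
print(export_text(tree, feature_names=["x1", "x2"]))
```

The sketch shows what is lost in such a baseline: collapsing masses to hard labels discards the imprecision carried by the set {c1, c2}, which is precisely the information IEMM's evidential-mistakeness loss is designed to respect.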
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Submission Number: 19653