Exploring Local Feature Influences with Hierarchical Explanation Trees

Published: 2025 · Last Modified: 15 Feb 2026 · ADBIS 2025 · License: CC BY-SA 4.0
Abstract: When applying clustering techniques to data exploration, ensuring the practical usefulness of clusters by aligning them with expert knowledge is highly desirable. A recent approach, known as supervised clustering, addresses this by selecting a target feature and constructing a Target Explanation Space (TES) using a supervised model combined with local feature attribution methods such as LIME or SHAP. While TES enhances clustering performance, its lack of interpretability remains a significant limitation. To address this, we introduce the first hierarchical supervised clustering pipeline that generates interpretable, non-overlapping rules directly in the original data space—while still leveraging the improved clustering achieved in TES. Experimental results demonstrate that the rules produced are not only more concise but also more comprehensive than those from existing methods, enabling experts to effectively balance interpretability and predictive accuracy.
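The core idea of a Target Explanation Space can be illustrated with a minimal sketch (not the authors' pipeline): each sample is re-embedded as its vector of local feature attributions for a supervised model predicting the target feature, and clustering is then performed in that attribution space. For simplicity this sketch uses a linear model, whose per-sample attribution `coef_[j] * (x_ij - mean_j)` coincides with the exact SHAP value under feature independence; a real pipeline would substitute LIME or SHAP on an arbitrary model. All dataset and parameter choices below are illustrative assumptions.

```python
# Hedged sketch: build a Target Explanation Space (TES) from local
# attributions and cluster in it, rather than in the raw feature space.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.cluster import AgglomerativeClustering

# Synthetic data standing in for an expert's dataset with a chosen target.
X, y = make_regression(n_samples=200, n_features=5, random_state=0)

# Supervised model for the target feature.
model = LinearRegression().fit(X, y)

# Local attributions: one row per sample, one column per feature.
# For a linear model this equals the exact SHAP decomposition
# (assuming independent features); LIME/SHAP would replace this step.
tes = model.coef_ * (X - X.mean(axis=0))

# Hierarchical (agglomerative) clustering in the TES.
labels = AgglomerativeClustering(n_clusters=3).fit_predict(tes)
print(tes.shape, len(np.unique(labels)))
```

The clusters found in `tes` reflect *why* the model predicts the target, not just raw feature similarity; the paper's contribution is then to describe such clusters with concise, non-overlapping rules stated in the original data space.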