ExDBSCAN: Explaining DBSCAN with Counterfactual Reasoning

17 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Explainability, Density-Based Clustering, Interpretability, Unsupervised Learning, Counterfactual Explanations
TL;DR: We propose a novel method for obtaining counterfactual explanations for DBSCAN clustering, ensuring proximity, validity, diversity and actionability.
Abstract: Clustering is an unsupervised technique for grouping data points by similarity. While explainability methods exist for supervised machine learning, they are not directly applicable to clustering, making it challenging to understand cluster assignments. This interpretability gap is evident in the popular density-based method DBSCAN, which assigns points as inliers (cluster members in dense regions) or outliers (noise points in sparse regions). DBSCAN provides no insight into why a particular point receives its assignment or whether that assignment is robust to small changes in the data. To address these challenges, we introduce ExDBSCAN, a density-aware, post-hoc explanation method. ExDBSCAN offers actionable counterfactual explanations with theoretical guarantees for validity. It generates multiple counterfactuals using a density-connected weighted graph and adopts a physics-inspired model that repels counterfactual candidates from one another (diversity) while pulling them toward the instance to explain (proximity). Empirical evaluation on 30 tabular datasets confirms that ExDBSCAN attains perfect validity and retrieves diverse, proximal counterfactuals.
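To make the counterfactual setting concrete, below is a minimal, hypothetical sketch of the underlying idea: for a point that DBSCAN labels as noise, find a nearby perturbed version that becomes a valid cluster member (i.e., falls within `eps` of a core point). This is an illustrative baseline using scikit-learn, not the ExDBSCAN algorithm itself; the blob data, `eps`, `min_samples`, and the move-toward-nearest-core heuristic are all assumptions for the example.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Toy data: one dense blob plus a single distant outlier (assumed setup).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, size=(40, 2)),  # dense cluster
               [[3.0, 3.0]]])                       # outlier

eps = 0.5
db = DBSCAN(eps=eps, min_samples=5).fit(X)
assert db.labels_[-1] == -1  # the distant point is assigned as noise

# Naive counterfactual for the outlier: slide it toward the nearest
# core sample until it lies within eps of that core point. Any point
# within eps of a core point is density-reachable, hence a valid inlier.
x = X[-1]
cores = X[db.core_sample_indices_]
nearest = cores[np.argmin(np.linalg.norm(cores - x, axis=1))]
direction = (nearest - x) / np.linalg.norm(nearest - x)
x_cf = nearest - 0.9 * eps * direction  # stop 0.9 * eps short of the core

# Validity check: re-clustering with the counterfactual in place of the
# original outlier assigns it to the cluster of that core point.
X_cf = np.vstack([X[:-1], x_cf])
labels_cf = DBSCAN(eps=eps, min_samples=5).fit(X_cf).labels_
assert labels_cf[-1] != -1  # the counterfactual is now a cluster member
```

This baseline yields a single valid counterfactual but ignores diversity and actionability; the abstract's density-connected graph and physics-inspired repulsion/attraction address exactly those aspects.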
Primary Area: interpretability and explainable AI
Submission Number: 9176