Abstract: Challenges remain in providing interpretable explanations for neural network decision-making in explainable AI (xAI). Existing methods like Integrated Gradients produce noisy maps, and LIME, while intuitive, may deviate from the model’s internal logic. We introduce a framework that uses hierarchical segmentation techniques for faithful and interpretable explanations of Convolutional Neural Networks (CNNs). Our method constructs model-based hierarchical segmentations that maintain fidelity to the model’s decision-making process and allow both human-centric and model-centric segmentation. This approach can be combined with various xAI methods and provides multiscale explanations that help identify biases and improve understanding of neural network predictive behavior. Experiments show that our framework, xAiTrees, delivers highly interpretable and faithful model explanations, not only surpassing traditional xAI methods but also offering a novel approach to enhancing xAI interpretability.
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: We implemented the changes recommended by the Editor as follows:
- We revised the introduction, particularly the first two pages, to more clearly emphasize the human-centered studies presented in the paper;
- We updated the list of contributions to explicitly reflect the qualitative evaluation conducted with human participants, as well as the quantitative metrics employed;
- We expanded the limitations section to discuss task and dataset diversity, as suggested by the Editor.
Video: https://youtu.be/QKdrBsOUxPU
Code: https://github.com/CarolMazini/reasoning_with_trees
Assigned Action Editor: ~Karthikeyan_Shanmugam1
Submission Number: 4440