Abstract: Data Attribution (DA) is an emerging approach in the field of eXplainable Artificial Intelligence (XAI) that aims to identify the influential training datapoints determining a model's outputs. It seeks to provide transparency about the model and its individual predictions, e.g., for model debugging by identifying data-related causes of suboptimal performance. However, existing DA approaches suffer from prohibitively high computational costs and memory demands when applied to even medium-scale datasets and models, forcing practitioners to resort to approximations that may fail to capture the true inference process of the underlying model. Additionally, current attribution methods exhibit low sparsity, spreading non-negligible attribution scores across a large number of training examples and thereby hindering the discovery of decisive patterns in the data. In this work, we introduce DualXDA, a framework for sparse, efficient and explainable DA, comprising two interlinked approaches, Dual Data Attribution (DualDA) and eXplainable Data Attribution (XDA): With DualDA, we propose a novel approach for efficient and effective DA that leverages Support Vector Machine theory to provide fast and naturally sparse data attributions for AI predictions. In extensive quantitative analyses, we demonstrate that DualDA achieves high attribution quality and excels at a series of evaluated downstream tasks, while reducing explanation time by a factor of up to 4,100,000x compared to the original Influence Functions method, and up to 11,000x compared to that method's most efficient approximation in the literature to date. We further introduce XDA, a method that augments Data Attribution with capabilities from feature attribution methods, explaining why training samples are relevant for the prediction of a test sample in terms of impactful features, which we showcase and verify qualitatively in detail. Taken together, our contributions in DualXDA ultimately point towards a future of eXplainable AI applied at unprecedented scale, enabling transparent, efficient and novel analysis of even the largest neural architectures, such as Large Language Models, and fostering a new generation of interpretable and accountable AI systems. The implementation of our methods, as well as the full experimental protocol, is available on GitHub.
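The abstract's central technical claim is that the dual form of an SVM yields naturally sparse per-training-point attributions. The following is a minimal, hypothetical sketch of that idea, not the authors' implementation: it assumes an SVM fit on (here synthetic, stand-in) features of a trained model, and reads attribution scores off the decomposition $f(x) = \sum_i \alpha_i y_i K(x_i, x) + b$, where each summand is the contribution of training point $i$. All data, hyperparameters, and variable names below are illustrative assumptions.

```python
# Hedged sketch of a DualDA-style attribution via SVM dual coefficients.
# Not the authors' API; a toy illustration of the dual decomposition only.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
# Stand-in for penultimate-layer features of a trained network (assumption).
X_train = rng.normal(size=(200, 16))
y_train = (X_train[:, 0] > 0).astype(int)
x_test = rng.normal(size=(1, 16))

gamma = 0.1
svm = SVC(kernel="rbf", gamma=gamma, C=1.0).fit(X_train, y_train)

# The SVM decision function decomposes as f(x) = sum_i alpha_i y_i K(x_i, x) + b,
# so each training point's summand can be read as its attribution score.
# Non-support vectors have alpha_i = 0, giving sparsity by construction.
scores = np.zeros(len(X_train))
k = rbf_kernel(X_train[svm.support_], x_test, gamma=gamma).ravel()
scores[svm.support_] = svm.dual_coef_.ravel() * k  # dual_coef_ stores y_i * alpha_i

top = np.argsort(-np.abs(scores))[:5]
print("Most influential training indices:", top)
print("Attribution scores:", scores[top])
```

Note that summing `scores` and adding `svm.intercept_` recovers `svm.decision_function(x_test)`, which is what makes the per-point reading of the dual terms exact rather than an approximation.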
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: - **Added hyperparameter recommendation (Section 4.2):** Added guidance recommending the default setting $C=10^{-3}$ as the hyperparameter choice for image classification tasks
- **Restructured sparsity analysis (Section 4.4):** Rewrote this section to unify the perspective on positive and negative attributions by considering the sparsity of absolute attributions, thereby streamlining the analysis of DualDA's sparsity properties
- **Added detailed sparsity analysis (Appendix F.7):** Provided an expanded analysis examining sparsity for positive and negative attributions separately (a reworked version of the former Section 4.4)
- **Added preliminary LLM results (Appendix I):** Included preliminary findings regarding the application of DualXDA to large language models, specifically addressing resource efficiency and shortcut detection experiments
- **Refined claims regarding LLM applicability:** Adjusted the language throughout the manuscript to clarify that while preliminary results for LLMs are promising, a comprehensive evaluation remains necessary and will be pursued in future work
- **Enhanced readability:** Made minor revisions in the Abstract, Section 1, and Section 2 to improve clarity and conciseness
--------------------------
- **Correction of reported values in Table I.1:** Corrected two values in Table I.1 (EK-FAC caching time and factor of improvement) to ensure consistency with other results, which increases DualDA’s competitive edge but does not affect the paper’s qualitative conclusions or comparisons.
Assigned Action Editor: ~Steffen_Schneider1
Submission Number: 5451