Release Opt Out: No, I don't wish to opt out of paper release. My paper should be released.
Keywords: Machine Learning, Explainable AI, Data Attribution
Abstract: In recent years, training data attribution (TDA) methods have emerged as a promising direction for the interpretability of neural networks. While research around TDA is thriving, limited effort has been dedicated to the evaluation of attributions. As with the development of evaluation metrics for traditional feature attribution approaches, several standalone metrics have been proposed to evaluate the quality of TDA methods across various contexts. However, the lack of a unified framework that allows for systematic comparison limits trust in TDA methods and hinders their widespread adoption. To address this research gap, we introduce **quanda**, a Python toolkit designed to facilitate the evaluation of TDA methods. Beyond offering a comprehensive set of evaluation metrics, **quanda** provides a uniform interface for seamless integration with existing TDA implementations across different repositories, thus enabling systematic benchmarking. The toolkit is user-friendly, thoroughly tested, well-documented, and available as an open-source library at https://github.com/dilyabareeva/quanda.
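To make the notion of an "evaluation metric for TDA" concrete, one family of checks discussed in the attribution-evaluation literature is model randomization: if attributions are faithful to the model, re-randomizing the model's weights should substantially change them. The sketch below illustrates this idea in plain PyTorch; it is not quanda's implementation or API, and the toy similarity-based attributor and helper names are hypothetical illustrations only.

```python
# Illustrative sketch of a model-randomization sanity check for a TDA method.
# Plain PyTorch; the attributor and function names are hypothetical, not quanda's API.
import copy
import torch


def similarity_attributions(model: torch.nn.Module,
                            train_x: torch.Tensor,
                            test_x: torch.Tensor) -> torch.Tensor:
    """Toy TDA method: cosine similarity of model outputs for test vs. training points."""
    with torch.no_grad():
        train_f = torch.nn.functional.normalize(model(train_x), dim=1)
        test_f = torch.nn.functional.normalize(model(test_x), dim=1)
    return test_f @ train_f.T  # shape: (n_test, n_train)


def randomization_check(model: torch.nn.Module,
                        train_x: torch.Tensor,
                        test_x: torch.Tensor) -> float:
    """Mean correlation between attributions from the trained model and a re-initialized copy.

    Values near zero indicate the attributions are sensitive to the learned weights,
    which is the behavior a faithful TDA method should exhibit.
    """
    original = similarity_attributions(model, train_x, test_x)

    randomized_model = copy.deepcopy(model)
    for p in randomized_model.parameters():
        torch.nn.init.normal_(p)
    randomized = similarity_attributions(randomized_model, train_x, test_x)

    # Pearson correlation per test point, averaged over the test set.
    o = original - original.mean(dim=1, keepdim=True)
    r = randomized - randomized.mean(dim=1, keepdim=True)
    corr = (o * r).sum(dim=1) / (o.norm(dim=1) * r.norm(dim=1) + 1e-8)
    return corr.mean().item()


if __name__ == "__main__":
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    train_x = torch.randn(100, 1, 28, 28)
    test_x = torch.randn(8, 1, 28, 28)
    print(f"randomization score: {randomization_check(model, train_x, test_x):.3f}")
```

In a toolkit such as the one described above, a metric of this kind would be applied uniformly to different TDA methods through a shared interface, enabling the systematic benchmarking the abstract refers to.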
Submission Number: 78