Abstract: In the field of artificial intelligence, AI models are frequently described as ‘black boxes’ due to the obscurity of their internal mechanisms. This has ignited research interest in model interpretability, especially in attribution methods that offer precise explanations of model decisions. Current attribution algorithms typically evaluate the importance of each input feature by exploring the sample space. A large number of intermediate states are introduced during this exploration, some of which may fall into the model’s Out-of-Distribution (OOD) space. Such intermediate states distort the attribution results, making it difficult to assess the relative importance of features. In this paper, we first define the local space and its relevant properties, and then propose the Local Attribution (LA) algorithm, which leverages these properties. The LA algorithm comprises targeted and untargeted exploration phases, designed to generate intermediate states for attribution that thoroughly cover the local space. Compared to state-of-the-art attribution methods, our approach achieves an average improvement of 38.21% in attribution effectiveness. Extensive ablation studies in our experiments also validate the significance of each component of our algorithm. Our code is available at: https://anonymous.4open.science/r/LA-2024
Primary Subject Area: [Content] Media Interpretation
Relevance To Conference: Our work on attribution methods helps in breaking down the decision-making process of AI models, especially those dealing with multimedia content such as images, videos, and audio. By providing clear explanations of how different elements within multimedia content influence model decisions, our research enhances the understanding of complex multimedia data, making these models more transparent and interpretable.
Supplementary Material: zip
Submission Number: 3678