Abstract: Explaining which parts of an input image contribute most to a deep model's predicted classification has been widely researched over the years, and many effective methods have been reported in the literature. Among these, deep Taylor decomposition (DTD) has served as a primary foundation owing to the theoretical grounding provided by Taylor expansion and approximation. Recent research, however, has shown that the root point of the Taylor decomposition may lie beyond the region of local linearity, causing DTD to fall short of its expected performance. In this paper, we propose a universal root inference method to overcome this shortfall and strengthen the role of DTD in the explainability and interpretability of deep classifiers. In comparison with existing approaches, our proposed method features: (i) a theoretical characterization of the relationship between ideal roots and the propagated relevances; (ii) gradient descent for learning a universal root inference; and (iii) constrained optimization for the final root selection. Extensive quantitative and qualitative experiments validate that the proposed root inference is not only effective but also delivers significantly improved performance in explaining a range of deep classifiers.
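For context, the role of the root point in DTD can be seen from the standard formulation: the relevance of a neuron is expanded to first order about a root point, i.e. a nearby input at which the relevance function vanishes. The sketch below restates this textbook decomposition only; it is not the universal root inference proposed in the paper, and the notation ($R_k$, $\tilde{x}$) is our own.

```latex
% Standard deep Taylor decomposition: the relevance R_k of a higher-layer neuron,
% viewed as a function of the lower-layer activations x, is Taylor-expanded about
% a root point \tilde{x} satisfying R_k(\tilde{x}) = 0.
\begin{align}
R_k(x) &= R_k(\tilde{x})
  + \sum_i \left.\frac{\partial R_k}{\partial x_i}\right|_{x=\tilde{x}} (x_i - \tilde{x}_i)
  + \varepsilon \\
&\approx \sum_i \underbrace{\left.\frac{\partial R_k}{\partial x_i}\right|_{x=\tilde{x}}
  (x_i - \tilde{x}_i)}_{R_{i \leftarrow k}},
\qquad R_i = \sum_k R_{i \leftarrow k}
\end{align}
```

The quality of the resulting attribution therefore hinges on where the root $\tilde{x}$ is placed relative to the region of local linearity, which is precisely the choice the proposed method learns rather than fixes by a hand-crafted rule.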
Primary Subject Area: [Content] Vision and Language
Secondary Subject Area: [Content] Vision and Language
Relevance To Conference: Interpretability methods help us understand how deep learning models analyze and represent multimedia data. In the multimedia domain, interpretability contributes to the development and application of multimedia technology in the following ways:
i) Human-computer interaction and user experience: by explaining the basis of a model's decisions to users, interpretability helps users understand the model's behavior, improving the comprehensibility and user experience of human-computer interaction.
ii) Legal and ethical issues: in sensitive areas such as security monitoring or medical diagnosis, the decision-making process and predictions of deep learning models need to be transparent and explainable. Explainability methods help meet legal and ethical requirements and ensure that a model's decisions are traceable and accountable.
iii) Reliability evaluation: interpretability supports assessing the reliability of both the models and the multimedia data they operate on.
Supplementary Material: zip
Submission Number: 674