Abstract: Saliency methods attempt to explain deep neural networks by highlighting the most salient features of a sample. Some widely used methods are based on a theoretical framework called Deep Taylor Decomposition (DTD), which formalizes the recursive application of Taylor's Theorem to the network's layers. However, recent work has found these methods to be independent of the network's deeper layers and to respond only to lower-level image structure. Here, we investigate DTD theory to better understand this perplexing behavior and find that the Deep Taylor Decomposition is equivalent to the basic gradient$\times$input method when the Taylor root points (an important, user-chosen parameter of the algorithm) are locally constant. If the root points are locally dependent on the input, then one can justify any explanation; in this case, the theory is under-constrained. In an empirical evaluation, we find that DTD roots do not lie in the same linear regions as the input -- contrary to a fundamental assumption of Taylor's Theorem. The theoretical foundations of DTD have been cited as a source of reliability for the explanations. However, our findings urge caution in making such claims.
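For reference, the gradient$\times$input baseline that the abstract relates DTD to can be written in a few lines. This is a minimal sketch, assuming a PyTorch classifier whose output is indexed by a target class; the helper name `gradient_x_input` is purely illustrative and not taken from the paper's code.

```python
import torch
import torch.nn as nn

def gradient_x_input(model: nn.Module, x: torch.Tensor, target: int) -> torch.Tensor:
    """Element-wise gradient-times-input attribution for one target logit (illustrative sketch)."""
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target]   # scalar score for the chosen class
    score.backward()              # populates x.grad with d(score)/dx
    return x.grad * x             # attribution has the same shape as x
```

For a bias-free ReLU network, this expression coincides with a first-order Taylor expansion around the root point 0 inside the input's linear region, which illustrates the sense in which, per the abstract, DTD with locally constant root points reduces to gradient$\times$input.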
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: * Add link to code repository
Code: https://github.com/berleon/A-Rigorous-Study-Of-The-Deep-Taylor-Decomposition
Assigned Action Editor: ~Shiyu_Chang2
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 365