Reproducibility study - Does enforcing diversity in hidden states of LSTM-Attention models improve transparency?
Keywords: Attention, NLP, Transparency, Explainability, Faithfulness, Plausibility, Reproducibility, LSTM
Abstract: It has been shown that the weights in attention mechanisms do not necessarily offer a faithful explanation of the model's predictions. In the paper 'Towards Transparent and Explainable Attention Models' (Mohankumar et al., 2020), the authors propose two methods that enhance the faithfulness and plausibility of the explanations provided by an LSTM model combined with a basic attention mechanism.
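The base model studied in the paper is a single-layer LSTM whose hidden states are aggregated by a simple additive attention mechanism before classification. As a point of reference, the sketch below is our own minimal PyTorch rendering of such a model, not the authors' code; all names and dimensions are placeholders.

```python
# Minimal PyTorch sketch of an LSTM classifier with simple additive attention.
# Our own illustration, not the authors' code; names and sizes are placeholders.
import torch
import torch.nn as nn


class LSTMAttentionClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_classes):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.attn_proj = nn.Linear(hidden_dim, hidden_dim)
        self.attn_score = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids
        h, _ = self.lstm(self.embedding(tokens))                 # (batch, seq_len, hidden)
        scores = self.attn_score(torch.tanh(self.attn_proj(h)))  # (batch, seq_len, 1)
        alphas = torch.softmax(scores, dim=1)                    # attention weights over time
        context = (alphas * h).sum(dim=1)                        # attention-weighted summary
        return self.classifier(context), alphas.squeeze(-1)
```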
For this reproducibility study, we focus on the main claims made in this paper:
- The attention weights in standard LSTM attention models do not provide faithful and plausible explanations for their predictions. This is potentially because the conicity of the LSTM hidden vectors (a measure of how narrowly the vectors cluster around their mean direction) is high.
- Two methods can be applied to reduce conicity: Orthogonalization and Diversity Driven Training. With these methods applied, the resulting attention weights offer more faithful and plausible explanations of the model's predictions, without sacrificing model performance (see the sketch below for the key computations).
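To make these notions concrete, the sketch below shows the key computations as we understand them from the paper: the conicity measure, the orthogonalization of a hidden state against the mean of its predecessors, and the conicity penalty used in Diversity Driven Training. It is our own simplified rendering, not the authors' implementation, and the regularization weight is a placeholder.

```python
# Illustrative sketch based on our reading of the paper, not the authors' code.
# `hidden` is a tensor of shape (batch, seq_len, dim).
import torch
import torch.nn.functional as F


def conicity(hidden):
    # Conicity: mean cosine similarity between each hidden vector and the mean
    # of all hidden vectors in the sequence (high conicity = vectors crowded
    # into a narrow cone).
    mean_vec = hidden.mean(dim=1, keepdim=True).expand_as(hidden)
    return F.cosine_similarity(hidden, mean_vec, dim=-1).mean()


def orthogonalize(h_t, h_prev_mean, eps=1e-8):
    # Orthogonal LSTM: remove from the new hidden state h_t its component
    # along the mean of the previously produced hidden states.
    denom = (h_prev_mean * h_prev_mean).sum(dim=-1, keepdim=True).clamp_min(eps)
    proj = (h_t * h_prev_mean).sum(dim=-1, keepdim=True) / denom
    return h_t - proj * h_prev_mean


def diversity_loss(nll, hidden, lambda_div=0.5):
    # Diversity Driven Training: add a conicity penalty to the usual loss;
    # lambda_div is a tunable weight (0.5 is only a placeholder).
    return nll + lambda_div * conicity(hidden)
```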
Methodology
The paper includes a link to a repository with the code used to generate its results. All our experiments with this code are conducted on GPU nodes of the Lisa Cluster at SURFsara, where we had access to two Nvidia GTX 1080 Ti GPUs with 11 GB of VRAM each. We follow four investigative routes:
(i) Replication: we rerun the experiments on the datasets from the paper in order to replicate its results, and add results that are missing from the paper;
(ii) Code review: we scrutinize the code to validate its correctness;
(iii) Evaluation methodology: we extend the set of evaluation metrics used in the paper with the LIME method, in an attempt to resolve inconclusive results (see the sketch after this list);
(iv) Generalization to other architectures: we test whether the authors' claims apply to variations of the base model (more complex forms of attention and a bi-LSTM encoder).
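For route (iii), the sketch below illustrates how LIME importance scores can be obtained for a single example so that they can be compared with the model's attention weights. The `predict_proba` wrapper around the trained classifier is hypothetical, and the example text, class names and sampling settings are placeholders.

```python
# Sketch of using LIME to get token importances for comparison with attention.
# `predict_proba` is a hypothetical wrapper around the trained model.
import numpy as np
from lime.lime_text import LimeTextExplainer


def predict_proba(texts):
    # Placeholder: tokenize the texts, run the LSTM-attention model, and
    # return an array of class probabilities with shape (len(texts), 2).
    return np.ones((len(texts), 2)) / 2.0


explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    "the movie was surprisingly good",
    predict_proba,
    num_features=6,       # number of tokens to include in the explanation
    num_samples=1000,     # perturbed samples used to fit the local surrogate
)
print(explanation.as_list())  # [(token, weight), ...] to compare with attention weights
```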
Results
We confirm that the Orthogonal and Diversity LSTM achieve accuracies similar to those of the Vanilla LSTM, while lowering conicity. However, we cannot reproduce the results of several of the experiments in the paper that underlie its claim of better transparency. In addition, a close inspection of the code base reveals some potentially problematic inconsistencies. Despite this, we do confirm that, under certain conditions, the Orthogonal and Diversity LSTM can be useful methods to increase transparency.
How to formulate these conditions more generally remains unclear and deserves further research.
The single-input-sequence tasks appear to benefit most from the methods; for these tasks, the attention mechanism does not play a critical role in achieving good performance.
What was easy / difficult
The authors' codebase is accessible and easy to run, with good facilities to prepare datasets and define configurations. The Orthogonalization and Diversity Training methods are well explained in the paper and mostly cleanly implemented. The larger datasets (Amazon and CNN) are difficult to run due to their memory requirements and compute times. The codebase can be hard to navigate, a consequence of the choice to accommodate a large variety of models and datasets in one framework.
Communication with original authors
We reached out to the authors about a fundamental but unexplained choice in the model architecture, but unfortunately did not hear back before the deadline of our assignment.
Paper Url: https://openreview.net/forum?id=ykG2B9bWiPXe&referrer=%5BML%20Reproducibility%20Challenge%202020%5D(%2Fgroup%3Fid%3DML_Reproducibility_Challenge%2F2020)