Learning to Deceive with Attention-Based Explanations

31 Jan 2021 (modified: 05 May 2023) · ML Reproducibility Challenge 2020 Blind Submission
Keywords: Attention
Abstract:
Scope of Reproducibility
The authors of the original paper claim that attention weights can easily be manipulated without significant accuracy loss, and that human subjects can be deceived by these manipulated attention weights. We attempt to reproduce the former claim.
Methodology
We used the authors' code, which is publicly available on GitHub; their data was also included. We additionally made use of a GPU cluster provided by the University of Amsterdam.
Results
Our results reproduce the original results fairly well. There are some minor divergences, but none significant enough to undermine the authors' claims. We were able to reproduce 90% of the results within the error margins produced by differently seeded runs.
What was easy
The reproduction was relatively smooth overall. With very minor changes, almost all of the code could run. The code was well documented and well structured.
What was difficult
Some code concerning the BERT model and the masking functions was missing, which posed a problem. In addition, some of the data the authors used was private, which prevented us from reproducing that part.
Communication with original authors
Communication with the authors was quick and to the point. They were able to help us with some of the missing code.
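For context on the claim being reproduced: the original paper manipulates attention by training with an extra penalty on the attention mass assigned to a designated set of "impermissible" tokens. Below is a minimal PyTorch sketch of a penalty of this general form, simplified to a single attention distribution per example. All names (`deceptive_attention_loss`, `impermissible_mask`, `lambda_penalty`) are our own illustrations, not the authors' identifiers; the exact formulation is in their released code.

```python
# Sketch of an attention-manipulation penalty: the usual task loss plus a
# term that punishes attention mass on impermissible tokens. Illustrative
# only; the authors' actual implementation may differ in detail.

import torch

def deceptive_attention_loss(task_loss: torch.Tensor,
                             attn_weights: torch.Tensor,
                             impermissible_mask: torch.Tensor,
                             lambda_penalty: float = 0.1,
                             eps: float = 1e-12) -> torch.Tensor:
    """attn_weights:       (batch, seq_len) attention distribution per example.
    impermissible_mask:    (batch, seq_len) 1.0 where attention should be low.
    """
    # Total attention mass on impermissible tokens, per example.
    mass = (attn_weights * impermissible_mask).sum(dim=-1)
    # -log(1 - mass) grows sharply as the impermissible mass approaches 1,
    # pushing the model to move attention onto permissible tokens instead.
    penalty = -torch.log(torch.clamp(1.0 - mass, min=eps)).mean()
    return task_loss + lambda_penalty * penalty

# Toy usage: uniform attention over 5 tokens, 2 of them impermissible.
attn = torch.full((1, 5), 0.2)
mask = torch.tensor([[1.0, 1.0, 0.0, 0.0, 0.0]])
loss = deceptive_attention_loss(torch.tensor(0.5), attn, mask)
print(loss)  # task loss plus a small penalty for the 0.4 impermissible mass
```

Because the penalty only constrains where attention lands, the model can route the same information through other pathways, which is why accuracy can stay largely intact while the attention-based explanation becomes misleading.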
Paper Url: https://openreview.net/forum?id=ZVxchkVPa8S&noteId=WBCM_B1YwVR&referrer=%5BML%20Reproducibility%20Challenge%202020%5D(%2Fgroup%3Fid%3DML_Reproducibility_Challenge%2F2020)