Investigating and Explaining Feature and Representation Learning in Translationese Classification

Anonymous

16 Jan 2022 (modified: 05 May 2023) · ACL ARR 2022 January Blind Submission · Readers: Everyone
Abstract: Recent work has shown that neural feature- and representation-learning approaches, and specifically the BERT model, outperform traditional manual feature engineering with an SVM classifier on the task of translationese classification for various source and target languages. However, to date it is unclear whether the performance differences are due to better representations, better classifiers, or both. Moreover, it remains unclear whether the features learnt by BERT overlap with commonly used manual features. To answer these questions, we exchange features between BERT-based and SVM classifiers and show that an SVM fed with BERT representations performs at the level of the best BERT classifiers, while a BERT model learning from and using the hand-crafted features performs at the level of traditional classifiers based on hand-crafted features. Our experiments indicate that our hand-crafted feature set does not provide any additional information that BERT has not already learnt, and is likely a subset of the features automatically learnt by BERT. Finally, we apply Integrated Gradients to examine token importance for the BERT model and find that part of its top performance is due to topic differences and spurious correlations with translationese.
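The central feature-exchange experiment described in the abstract, training an SVM on BERT representations rather than on hand-crafted translationese features, can be illustrated with a minimal sketch. The code below is not the authors' actual setup: the checkpoint, pooling choice ([CLS] vector of the last layer), toy data, and SVM kernel are all assumptions made for illustration.

```python
# Minimal sketch: SVM classifier fed with BERT sentence representations.
# Checkpoint, pooling strategy, and toy data are illustrative assumptions,
# not the configuration used in the paper.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Hypothetical toy corpus: 0 = original text, 1 = translationese.
texts = [
    "an original English sentence",
    "another original English sentence",
    "a sentence translated into English",
    "another sentence translated into English",
]
labels = [0, 0, 1, 1]

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = AutoModel.from_pretrained("bert-base-multilingual-cased")
bert.eval()

def bert_representations(batch_texts):
    """Return the [CLS] vector of BERT's last hidden layer for each text."""
    enc = tokenizer(batch_texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc)
    return out.last_hidden_state[:, 0, :].numpy()

# "Feature exchange": the SVM sees learnt BERT representations instead of
# hand-crafted translationese features.
X = bert_representations(texts)
svm = SVC(kernel="linear")
svm.fit(X, labels)
print("toy accuracy:", accuracy_score(labels, svm.predict(X)))
```

The token-importance analysis mentioned at the end of the abstract could be reproduced in a similar spirit with an attribution library such as Captum, applying Integrated Gradients to a fine-tuned BERT classifier and inspecting which input tokens receive the highest attribution scores.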
Paper Type: long