Cheat Codes to Quantify Missing Source Information in Neural Machine Translation

Anonymous

16 Jan 2022 (modified: 05 May 2023) · ACL ARR 2022 January Blind Submission · Readers: Everyone
Abstract: This paper describes a method to quantify the amount of information $H(t|s)$ added by the target sentence $t$ that is not present in the source $s$ in a neural machine translation system. We do this by providing the model with the target sentence in a highly compressed form (a "cheat code") and exploring the effect of the cheat code's size. We find that the model captures extra information from even a single-float representation of the target, and nearly reproduces the target with two 32-bit floats per target token.
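The core idea, a learned bottleneck that squeezes the target into a few floats per token and injects it alongside the source, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the projection matrices (`W_down`, `W_up`), the dimensions, and the additive injection into the source encoding are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 16  # hypothetical model dimension
k = 2         # floats of "cheat code" per target token (e.g., 1 or 2)

# Hypothetical random projections standing in for learned compress/expand layers.
W_down = rng.normal(size=(d_model, k)) / np.sqrt(d_model)  # compress
W_up = rng.normal(size=(k, d_model)) / np.sqrt(k)          # expand

def cheat_code(target_emb: np.ndarray) -> np.ndarray:
    """Compress target token embeddings (T, d_model) to k floats per token."""
    return target_emb @ W_down  # (T, k): the bottleneck "cheat code"

def inject(source_enc: np.ndarray, code: np.ndarray) -> np.ndarray:
    """Expand the code and add it to the source encoding seen by the decoder."""
    # Assumes equal source/target lengths for simplicity; a real system would
    # pool or broadcast the code instead.
    return source_enc + code @ W_up  # (T, d_model)

T = 5
target_emb = rng.normal(size=(T, d_model))
source_enc = rng.normal(size=(T, d_model))

code = cheat_code(target_emb)
augmented = inject(source_enc, code)
print(code.shape, augmented.shape)  # (5, 2) (5, 16)
```

Because the code is trained end-to-end through a tight bottleneck, measuring how much translation quality improves as `k` grows gives a handle on how much target-side information the source alone fails to determine.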
Paper Type: short
