Using Sub-character Level Information for Neural Machine Translation of Logographic Languages

15 Nov 2021 (modified: 15 Nov 2021)
Abstract: Logographic and alphabetic languages (e.g., Chinese vs. English) are based on fundamentally different writing systems. Languages that belong to the same writing system usually share more information, which can be exploited to facilitate natural language processing tasks such as neural machine translation (NMT). This paper takes advantage of the logographic characters in Chinese and Japanese by decomposing them into smaller units, thereby making better use of the information these characters share during both the encoding and decoding stages of NMT training. Experiments show that the proposed method robustly improves NMT performance for both a “logographic” language pair (JA–ZH) and “logographic + alphabetic” language pairs (JA–EN and ZH–EN), in both supervised and unsupervised NMT scenarios. Moreover, because the decomposed sequences are usually very long, extra position features for the Transformer encoder help with the modeling of these long sequences. The results also indicate that, in principle, linguistic features can be manipulated to obtain higher shared-token rates and further improve the performance of natural language processing systems.
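The decomposition idea can be illustrated with a minimal sketch in Python. The IDS_TABLE fragment, the decompose function, and the shared_token_rate helper below are hypothetical names introduced for illustration, not the paper's actual implementation; a real system would load a full Ideographic Description Sequence (IDS) dictionary covering the whole character inventory.

```python
# Minimal sketch of sub-character decomposition, assuming a hypothetical
# IDS (Ideographic Description Sequence) lookup table. IDS_TABLE,
# decompose, and shared_token_rate are illustrative names only.

IDS_TABLE = {
    "森": ["木", "木", "木"],  # "forest" = three "tree" components
    "林": ["木", "木"],        # "woods"  = two "tree" components
    "好": ["女", "子"],        # "good"   = "woman" + "child"
}


def decompose(text: str) -> list[str]:
    """Replace each character with its sub-character units; characters
    without a known decomposition are kept as-is."""
    units: list[str] = []
    for ch in text:
        units.extend(IDS_TABLE.get(ch, [ch]))
    return units


def shared_token_rate(src: set[str], tgt: set[str]) -> float:
    """Fraction of the joint vocabulary shared by source and target."""
    union = src | tgt
    return len(src & tgt) / len(union) if union else 0.0


# 森 and 林 are distinct characters, so they share nothing at the
# character level, but both decompose into the shared 木 component.
ja, zh = "森", "林"
print(shared_token_rate(set(ja), set(zh)))                        # 0.0
print(shared_token_rate(set(decompose(ja)), set(decompose(zh))))  # 1.0
```

As the toy example shows, decomposition turns character pairs that share no surface form into sequences with overlapping components, which is the kind of sharing the abstract refers to as a higher shared-token rate between languages.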