Simultaneous Translation with Flexible Policy via Restricted Imitation Learning

ACL 2019
Abstract: Simultaneous translation is widely useful but remains one of the most difficult tasks in NLP. Previous work either uses fixed-latency policies or trains a complicated two-stage model with reinforcement learning. We propose a much simpler single model that adds a "delay" token to the target vocabulary, and design a restricted dynamic oracle to greatly simplify training. Experiments on Chinese↔English simultaneous translation show that our approach yields flexible policies that achieve better BLEU scores and lower latencies than both fixed and RL-learned policies.
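For intuition, the following is a minimal sketch (not the authors' code) of how a delay-token policy could drive incremental decoding: the model scores the next target token given the source prefix read so far, and predicting the delay token is treated as a READ action while any other token is a WRITE action. The `model.predict_next` interface, its `forbid` option, and the token names are assumptions made purely for illustration.

```python
def simultaneous_decode(model, source_stream,
                        DELAY="<delay>", EOS="</s>", max_len=200):
    """Flexible READ/WRITE policy driven by the model's own predictions.

    source_stream: iterator yielding source tokens as they arrive.
    Predicting DELAY is interpreted as a READ action; any other token
    is a WRITE action. (Hypothetical interface for illustration only.)
    """
    src_prefix, tgt_prefix = [], []
    source_exhausted = False

    while len(tgt_prefix) < max_len:
        token = model.predict_next(src_prefix, tgt_prefix)  # assumed API

        if token == DELAY and not source_exhausted:
            # READ: consume one more source token before translating further.
            try:
                src_prefix.append(next(source_stream))
            except StopIteration:
                source_exhausted = True
        else:
            if token == DELAY:
                # Source is finished, so a delay is no longer meaningful;
                # force a real target token instead (assumed `forbid` option).
                token = model.predict_next(src_prefix, tgt_prefix,
                                           forbid=[DELAY])
            # WRITE: commit a target token immediately; latency depends on
            # how many DELAY (READ) actions were chosen before this point.
            tgt_prefix.append(token)
            if token == EOS:
                break

    return tgt_prefix
```

In this view, the latency/quality trade-off is controlled entirely by how often the trained model chooses the delay token, which is what the paper's restricted dynamic oracle supervises during training.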