MAD for Robust Reinforcement Learning in Machine Translation

Anonymous

16 Nov 2021 (modified: 05 May 2023) · ACL ARR 2021 November Blind Submission
Abstract: We introduce a new distributed policy gradient algorithm and show that it outperforms existing reward-aware training procedures such as REINFORCE, minimum risk training (MRT) and proximal policy optimization (PPO) in convergence speed and stability, and in overall performance at optimising machine translation models. Our algorithm, which we call MAD (on account of using the mean absolute deviation in the importance weighting calculation), has distributed data generators sampling multiple candidates per source sentence on worker nodes, while a central learner updates the policy. MAD depends crucially on two variance reduction strategies: (1) a new robust importance weighting scheme that encourages learning from examples that are neither too likely nor too unlikely under the current policy, and (2) learning from balanced numbers of high- and low-reward training examples. Finally, our algorithm has few hyperparameters, making it easy to use on new tasks with little or no adaptation. Experiments on a variety of tasks show that the translation policies learned with MAD perform well with both greedy decoding and beam search, and that the learned policies are sensitive to the specific reward used during training.
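The abstract describes MAD only at a high level. Purely as an illustration of the two variance reduction ideas it names (importance weights damped using the mean absolute deviation, and balanced sampling of high- and low-reward candidates), the Python sketch below shows one plausible way such components could look; the function names, the exponential damping rule, the temperature parameter, and the top/bottom-k selection are all assumptions made for illustration and are not the paper's published algorithm.

import numpy as np

def mad_damped_importance_weights(logp_current, logp_behavior, temperature=1.0):
    # Illustrative only: importance weights for candidates sampled by a
    # (possibly stale) worker policy, damped so that candidates whose
    # log-probability ratio lies far from the batch mean (measured in units
    # of the mean absolute deviation) contribute less to the update.
    log_ratio = np.asarray(logp_current) - np.asarray(logp_behavior)
    center = np.mean(log_ratio)
    mad = np.mean(np.abs(log_ratio - center))  # mean absolute deviation
    damping = np.exp(-np.abs(log_ratio - center) / (mad * temperature + 1e-8))
    return np.exp(log_ratio) * damping

def balanced_minibatch(candidates, rewards, k):
    # Illustrative only: keep k//2 lowest- and the remaining highest-reward
    # candidates per source sentence, one plausible reading of "balanced
    # numbers of high- and low-reward training examples".
    rewards = np.asarray(rewards)
    order = np.argsort(rewards)
    picked = np.concatenate([order[: k // 2], order[len(order) - (k - k // 2):]])
    return [candidates[i] for i in picked], rewards[picked]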
