Highlights

- We create a benchmark to study compositional generalization of translation models.
- Novel compounds pose a challenge to Transformer models, including pretrained ones.
- Exposure to compositions during pretraining influences generalization performance.