Nearest Neighbor Machine Translation is Meta-Optimizer on Output Projection Layer

Published: 07 Oct 2023, Last Modified: 01 Dec 2023 · EMNLP 2023 Main
Submission Type: Regular Long Paper
Submission Track: Machine Translation
Submission Track 2: Interpretability, Interactivity, and Analysis of Models for NLP
Keywords: Nearest Neighbor Machine Translation, meta-optimization, domain adaptation, Neural Machine Translation
TL;DR: We provide insights into the working mechanism of kNN-MT as a specific case of fine-tuning that implicitly executes gradient descent on the output projection layer of NMT, and we conduct experiments comparing the performance of kNN-MT and fine-tuning.
Abstract: Nearest Neighbor Machine Translation ($k$NN-MT) has achieved great success in domain adaptation tasks by integrating pre-trained Neural Machine Translation (NMT) models with domain-specific token-level retrieval. However, the reasons underlying its success have not been thoroughly investigated. In this paper, we comprehensively analyze $k$NN-MT through theoretical and empirical studies. Initially, we provide new insights into the working mechanism of $k$NN-MT as an efficient technique to implicitly execute gradient descent on the output projection layer of NMT, indicating that it is a specific case of model fine-tuning. Subsequently, we conduct multi-domain experiments and word-level analysis to examine the differences in performance between $k$NN-MT and entire-model fine-tuning. Our findings suggest that: ($i$) Incorporating $k$NN-MT with adapters yields comparable translation performance to fine-tuning on in-domain test sets, while achieving better performance on out-of-domain test sets; ($ii$) Fine-tuning significantly outperforms $k$NN-MT on the recall of in-domain low-frequency words, but this gap could be bridged by optimizing the context representations with additional adapter layers.
Submission Number: 1607
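To make the mechanism discussed in the TL;DR and abstract concrete, below is a minimal sketch of a standard kNN-MT decoding step (token-level retrieval from an in-domain datastore, interpolated with the frozen NMT softmax), which the paper reinterprets as implicit gradient descent on the output projection layer. All names and hyperparameters here (`hidden_dim`, `temperature`, `lambda_`, the random datastore) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of one kNN-MT decoding step. Hypothetical sizes and values;
# a real system would use decoder hidden states from a trained NMT model.
import numpy as np

rng = np.random.default_rng(0)

hidden_dim, vocab_size, k = 8, 16, 4
temperature, lambda_ = 10.0, 0.5  # kNN softmax temperature, interpolation weight

# Datastore built offline from in-domain data: keys are decoder hidden states,
# values are the gold target tokens that followed them.
datastore_keys = rng.normal(size=(100, hidden_dim))
datastore_values = rng.integers(0, vocab_size, size=100)

# Frozen output projection layer of the NMT model (the layer the paper argues
# kNN-MT implicitly fine-tunes).
W_out = rng.normal(size=(vocab_size, hidden_dim))

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def knn_mt_step(h):
    """Combine the NMT softmax with a kNN retrieval distribution for one step."""
    # Base NMT distribution from the output projection layer.
    p_nmt = softmax(W_out @ h)

    # Retrieve the k nearest datastore entries by squared L2 distance.
    dists = np.sum((datastore_keys - h) ** 2, axis=1)
    nn = np.argsort(dists)[:k]

    # Turn negative distances into a distribution over the retrieved tokens.
    weights = softmax(-dists[nn] / temperature)
    p_knn = np.zeros(vocab_size)
    for w, v in zip(weights, datastore_values[nn]):
        p_knn[v] += w

    # Interpolate the retrieval and NMT distributions.
    return lambda_ * p_knn + (1.0 - lambda_) * p_nmt

h = rng.normal(size=hidden_dim)  # current decoder hidden state
p = knn_mt_step(h)
print("next-token distribution sums to", round(p.sum(), 6))
```

The paper's claim, in these terms, is that adding the retrieval distribution is equivalent to an implicit gradient-descent update on `W_out`, rather than a purely heuristic interpolation.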