Abstract: Neural metrics trained on human evaluations of MT tend to correlate well with human judgments, but their behavior is not fully understood. In this paper, we perform a controlled experiment and compare a baseline metric that has not been trained on human evaluations (Prism) to a trained version of the same metric (Prism+FT). Surprisingly, we find that Prism+FT becomes more robust to machine-translated references, which are a notorious problem in MT evaluation. This suggests that the effects of metric training go beyond the intended effect of improving overall correlation with human judgments.
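
To make the setup concrete, the sketch below illustrates the kind of robustness check the abstract describes: scoring MT outputs against both human and machine-translated references, then comparing each metric's correlation with human judgments. The scoring functions, data, and helper names are hypothetical placeholders under assumed inputs, not the paper's actual implementation or results.

```python
# Illustrative sketch only; not the paper's code.
# `score_prism` and `score_prism_ft` are hypothetical stand-ins for the
# baseline and fine-tuned metrics; plug in real scorers to use this.
from scipy.stats import kendalltau

def score_prism(hypothesis: str, reference: str) -> float:
    """Hypothetical stand-in for the untrained Prism metric."""
    raise NotImplementedError

def score_prism_ft(hypothesis: str, reference: str) -> float:
    """Hypothetical stand-in for Prism fine-tuned on human judgments."""
    raise NotImplementedError

def correlation(metric, hypotheses, references, human_scores):
    """Segment-level Kendall tau between metric scores and human judgments."""
    metric_scores = [metric(h, r) for h, r in zip(hypotheses, references)]
    tau, _ = kendalltau(metric_scores, human_scores)
    return tau

def robustness_gap(metric, hypotheses, human_refs, mt_refs, human_scores):
    """Drop in correlation when human references are replaced by
    machine-translated ones; a smaller gap means a more robust metric."""
    return (correlation(metric, hypotheses, human_refs, human_scores)
            - correlation(metric, hypotheses, mt_refs, human_scores))
```

Under this framing, the abstract's finding corresponds to `robustness_gap(score_prism_ft, ...)` being smaller than `robustness_gap(score_prism, ...)` on the same test set.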