Abstract: Minimum Bayes Risk (MBR) decoding has seen renewed interest as an alternative to traditional generation strategies.
While MBR has proven effective in machine translation, where the variability of a language model's outcome space is naturally constrained, it may face challenges in more open-ended tasks such as dialogue or instruction-following.
We hypothesise that in such settings, applying MBR with standard similarity-based utility functions may result in selecting responses that are broadly representative of the model's distribution, yet sub-optimal with respect to any particular grouping of generations that share an underlying latent structure.
In this work, we introduce three lightweight adaptations to the utility function, designed to make MBR more sensitive to structural variability in the outcome space.
To test our hypothesis, we curate a dataset capturing three representative types of latent structure:
dialogue act, emotion, and response structure.
We also propose two metrics to evaluate the structural optimality of MBR.
Our analysis demonstrates that common utility functions fall short by these metrics. In contrast, our proposed adaptations considerably improve structural optimality.
Finally, we evaluate our approaches on real-world instruction-following benchmarks, AlpacaEval and MT-Bench, and show that increased structural sensitivity improves generation quality by up to 13.7 percentage points in win rate.
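The standard MBR procedure the abstract builds on can be sketched as follows: each sampled generation is scored by its average utility against all other samples (used as pseudo-references), and the highest-scoring one is selected. This is a minimal illustration, not the paper's method; `token_f1` is a hypothetical stand-in for the similarity-based utilities discussed.

```python
def mbr_select(candidates, utility):
    """Standard MBR decoding: return the candidate with the highest
    average utility against all other candidates (pseudo-references)."""
    best, best_score = None, float("-inf")
    for hyp in candidates:
        score = sum(utility(hyp, ref) for ref in candidates if ref is not hyp)
        score /= max(len(candidates) - 1, 1)
        if score > best_score:
            best, best_score = hyp, score
    return best

def token_f1(a, b):
    """Hypothetical similarity utility: unigram-overlap F1 between two strings."""
    ta, tb = a.split(), b.split()
    common = len(set(ta) & set(tb))
    if common == 0:
        return 0.0
    p, r = common / len(ta), common / len(tb)
    return 2 * p * r / (p + r)
```

Under such a utility, MBR favours the response most representative of the sample pool, which is exactly the behaviour the abstract argues can be sub-optimal when the pool splits into groups with distinct latent structure.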
Paper Type: Long
Research Area: Generation
Research Area Keywords: Generation,Language Modeling,Machine Learning for NLP,Dialogue and Interactive Systems
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data resources
Languages Studied: English
Submission Number: 1793