Contrastive Search Is What You Need For Neural Text Generation

Published: 24 Feb 2023, Last Modified: 28 Feb 2023. Accepted by TMLR.
Abstract: Generating text with autoregressive language models (LMs) is of great importance to many natural language processing (NLP) applications. Previous solutions for this task often produce text that contains degenerative expressions (Welleck et al., 2020) or lacks semantic consistency (Basu et al., 2021). Recently, Su et al. (2022b) introduced a new decoding method, contrastive search, based on the isotropic representation space of the language model and obtained a new state of the art on various benchmarks. Additionally, Su et al. (2022b) argued that the representations of autoregressive LMs (e.g., GPT-2) are intrinsically anisotropic, a view also shared by previous studies (Ethayarajh, 2019). Therefore, to ensure the language model follows an isotropic distribution, Su et al. (2022b) proposed a contrastive learning scheme, SimCTG, which calibrates the language model's representations through additional training. In this study, we first answer the question: "Are autoregressive LMs really anisotropic?". To this end, we extensively evaluate the isotropy of LMs across 16 major languages. Surprisingly, we find that the anisotropy problem exists only in the two specific English GPT-2-small/medium models. In contrast, all other evaluated LMs are naturally isotropic, which contradicts the conclusion drawn by previous studies (Ethayarajh, 2019; Su et al., 2022b). Based on our findings, we further assess the contrastive search decoding method using off-the-shelf LMs on four generation tasks across 16 languages. Our experimental results demonstrate that contrastive search significantly outperforms previous decoding methods without any additional training. More notably, on 12 out of the 16 evaluated languages, contrastive search performs comparably to human-level performance as judged by human evaluations.
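The abstract centers on two measurable ideas: checking whether an LM's representation space is isotropic, and decoding with contrastive search from an off-the-shelf model. The sketch below illustrates both under stated assumptions; it is not the paper's official code (see the linked repository for that). It assumes the Hugging Face Transformers library, uses `gpt2` and the prompts purely as illustrative placeholders, and relies on the fact that `generate()` runs contrastive search when `top_k` and `penalty_alpha` are both set. Isotropy is approximated here as the average pairwise cosine similarity between last-layer token representations, with values near 0 indicating an isotropic space.

```python
# Minimal sketch, assuming Hugging Face Transformers; model name, prompts, and
# hyperparameter values are illustrative assumptions, not the paper's exact setup.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "gpt2"  # assumption: any off-the-shelf autoregressive LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# (1) Isotropy check: average pairwise cosine similarity of last-layer token
# representations (excluding the diagonal). Values close to 1 indicate anisotropy.
text = "DeepMind Company is a London-based artificial intelligence laboratory."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs, output_hidden_states=True).hidden_states[-1][0]  # (seq_len, dim)
normed = torch.nn.functional.normalize(hidden, dim=-1)
sim = normed @ normed.T
n = sim.size(0)
avg_offdiag_sim = (sim.sum() - n) / (n * (n - 1))  # remove the n diagonal 1s
print(f"average pairwise cosine similarity: {avg_offdiag_sim.item():.4f}")

# (2) Contrastive search decoding: in Transformers' generate(), providing top_k
# together with penalty_alpha (the degeneration penalty weight) enables it.
prompt_ids = tokenizer("DeepMind Company is", return_tensors="pt").input_ids
output_ids = model.generate(prompt_ids, top_k=4, penalty_alpha=0.6, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The `top_k=4, penalty_alpha=0.6` pair is a commonly cited default for contrastive search, but the best trade-off between the two is task- and language-dependent (the paper's Appendix J ablates it).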
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission:
1. Added analysis experiments on the isotropy of LMs in the intermediate layers (Appendix C in the revised manuscript).
2. Added experimental results on the machine translation task using encoder-decoder models (Appendix I in the revised manuscript).
3. Added an ablation study on the trade-off between k and alpha in contrastive search (Appendix J in the revised manuscript).
4. Added a more detailed description of h_v in Section 2.2.
5. Changed the column headers in the tables of human evaluations.
6. Corrected the notation for top-k sampling in Figure 4.
7. Softened the claim that "autoregressive LMs are naturally isotropic" in the revised manuscript, as highlighted in red. Specifically, the revisions can be found in (1) the Introduction section; (2) Section 3.1; (3) Section 3.2; and (4) the Conclusion section.
8. Softened the claim of being "indistinguishable from one written by a human" in Section 4.1.2 of the revised manuscript. The revision is highlighted in red.
9. Added a complete correlation study between isotropy and the variance of the degeneration penalty (Appendix K in the revised manuscript).
10. Added a comparison between off-the-shelf models and SimCTG using contrastive search (Appendix L in the revised manuscript).
11. Added evaluation results on the machine translation task using the COMET metric (Section 7 in the revised manuscript).
Code: https://github.com/yxuansu/Contrastive_Search_Is_What_You_Need
Assigned Action Editor: ~Xu_Tan1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 553