Cost-Efficient Multi-Fidelity Alignment for LLMs

ICLR 2025 Conference Submission 12500 Authors

27 Sept 2024 (modified: 13 Oct 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: Multi-Fidelity, Alignment, LLM
TL;DR: An LLM alignment approach that uses responses of varying quality to reduce the total alignment cost.
Abstract: Alignment is a critical step in large language model (LLM) post-training. It typically requires human annotations to align the model's outputs with human preferences, which is prohibitively expensive. This paper proposes a novel approach to reducing the alignment cost. Specifically, we consider alignment data at multiple fidelity levels, differing in response quality and generation cost, which we refer to as multi-fidelity alignment. We develop a new method that incorporates responses of varying quality into language model training, aiming to reduce the cost of response collection for alignment while maintaining the performance of the language model. We provide theoretical insights into the proposed multi-fidelity alignment approach and conduct experiments that corroborate its effectiveness by comparing its performance with vanilla alignment methods.
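The abstract does not specify the training objective, so the following is only a minimal illustrative sketch of how responses of different fidelity might be folded into a preference-based alignment loss. It assumes a DPO-style objective with a per-pair fidelity weight; the function name `multi_fidelity_dpo_loss` and the weighting scheme are hypothetical and not the authors' method.

```python
import torch
import torch.nn.functional as F


def multi_fidelity_dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log p_theta(y_w | x), shape (B,)
    policy_rejected_logps: torch.Tensor,  # log p_theta(y_l | x), shape (B,)
    ref_chosen_logps: torch.Tensor,       # reference-model log-probs, shape (B,)
    ref_rejected_logps: torch.Tensor,     # shape (B,)
    fidelity_weights: torch.Tensor,       # hypothetical per-pair weight in [0, 1], shape (B,)
    beta: float = 0.1,
) -> torch.Tensor:
    """DPO-style loss where each preference pair is scaled by the
    (assumed) fidelity of the annotation source that produced it."""
    # Standard DPO logits: implicit reward margin between chosen and rejected.
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    logits = beta * (pi_logratios - ref_logratios)
    # Per-pair negative log-sigmoid loss, scaled so that cheap, noisier
    # responses contribute less than expensive, high-quality ones.
    per_pair_loss = -F.logsigmoid(logits) * fidelity_weights
    return per_pair_loss.mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    batch = 4
    # Toy log-probabilities for a batch of preference pairs.
    loss = multi_fidelity_dpo_loss(
        policy_chosen_logps=torch.randn(batch),
        policy_rejected_logps=torch.randn(batch),
        ref_chosen_logps=torch.randn(batch),
        ref_rejected_logps=torch.randn(batch),
        # e.g. 1.0 = human-annotated pair, 0.3 = cheap model-generated pair
        # (the specific values are illustrative only).
        fidelity_weights=torch.tensor([1.0, 1.0, 0.3, 0.3]),
    )
    print(float(loss))
```

A simple weighted objective like this is just one way to trade off annotation cost against label quality; the paper's actual formulation may differ.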
Primary Area: foundation or frontier models, including LLMs
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 12500