Rethinking the Evaluation of Alignment Methods: Insights into Diversity, Generalisation, and Safety

ACL ARR 2025 May Submission6837 Authors

20 May 2025 (modified: 03 Jul 2025)ACL ARR 2025 May SubmissionEveryoneRevisionsBibTeXCC BY 4.0
Abstract: Large language models (LLMs) require careful alignment to balance competing objectives—factuality, safety, conciseness, proactivity, and diversity. Existing studies focus on individual techniques or specific dimensions, lacking a holistic assessment of the inherent trade-offs. We propose a unified evaluation framework that compares LLM alignment methods (PPO, DPO, ORPO, KTO) across these five axes, using both in-distribution and out-of-distribution datasets. Leveraging a specialized LLM‑as‑Judge prompt, validated through human studies, we reveal that DPO and KTO excel in factual accuracy, PPO and DPO lead in safety, and PPO best balances conciseness with proactivity. Our findings provide insights into trade-offs of common alignment methods, guiding the development of more balanced and reliable LLMs.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: alignment, generalisation, safety, LLM, evaluation
Contribution Types: Model analysis & interpretability, Reproduction study
Languages Studied: english
Submission Number: 6837
Loading