Reformatted Alignment

Anonymous

16 Feb 2024 · ACL ARR 2024 February Blind Submission · Readers: Everyone
Abstract: The quality of finetuning data is crucial for aligning large language models (LLMs) with human values. Current methods to improve data quality are either labor-intensive or prone to factual errors caused by LLM hallucinations. This paper explores elevating the quality of existing instruction data to better align with human values, introducing a simple and effective approach named ReAlign, which reformats the responses of instruction data into a format that better aligns with pre-established criteria and the collated evidence. This approach minimizes human annotation, hallucination, and the difficulty of scaling, and remains orthogonal to existing alignment techniques. Experimentally, ReAlign significantly boosts the general alignment ability, math reasoning, factuality, and readability of LLMs. Encouragingly, without introducing any additional data or advanced training techniques, and merely by reformatting the responses, LLaMA-2-13B's mathematical reasoning accuracy on GSM8K improves from 46.77% to 56.63%. Additionally, a mere 5% of ReAlign data yields a 67% boost in general alignment ability as measured on the Alpaca dataset. This work highlights the need for further research into the science and mechanistic interpretability of LLMs. We have made the associated code and data publicly accessible to support future studies at https://anonymous.4open.science/r/ReAlign-9B3D.
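The abstract describes ReAlign at a high level: each existing response is rewritten into a format that satisfies pre-established, task-specific criteria, optionally grounded in collated evidence. Below is a minimal conceptual sketch of that idea in Python; it is not the authors' implementation, and the criteria text, helper names (rewrite_with_llm, retrieve_evidence), and prompt wording are all illustrative assumptions.

```python
# Conceptual sketch of the ReAlign idea: rewrite an existing response so it
# follows task-specific format criteria, optionally grounded in retrieved
# evidence. All names, criteria, and prompt wording here are assumptions for
# illustration, not the paper's actual pipeline.

# Hypothetical per-task format criteria (the paper defines its own set).
FORMAT_CRITERIA = {
    "math": "Show the reasoning step by step, then give the final answer on its own line.",
    "open_qa": "Answer directly first, then add supporting details grounded in the evidence.",
}

def retrieve_evidence(instruction: str) -> str:
    """Placeholder for an external evidence-collation step (e.g., retrieval)."""
    return ""  # no evidence in this stub

def rewrite_with_llm(prompt: str) -> str:
    """Placeholder for a call to a rewriting LLM; echoes the prompt in this stub."""
    return prompt

def realign(instruction: str, response: str, task: str) -> str:
    """Reformat an existing response to match the target format criteria."""
    criteria = FORMAT_CRITERIA.get(
        task, "Keep the original content; improve structure and readability."
    )
    evidence = retrieve_evidence(instruction)
    prompt = (
        f"Instruction:\n{instruction}\n\n"
        f"Original response:\n{response}\n\n"
        f"Evidence:\n{evidence or 'None'}\n\n"
        "Rewrite the response so that it follows these criteria, "
        f"without changing its factual content:\n{criteria}"
    )
    return rewrite_with_llm(prompt)

if __name__ == "__main__":
    print(realign("What is 12 * 7?", "84", task="math"))
```

The key design point suggested by the abstract is that only the response's form changes: the instruction and its factual content are preserved, which is why the approach avoids additional annotation and limits hallucination.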
Paper Type: long
Research Area: Generation
Contribution Types: NLP engineering experiment, Approaches to low-resource settings, Approaches to low compute settings - efficiency, Data resources
Languages Studied: English

