Step-On-Feet Tuning: Scaling Self-Alignment of LLMs via Bootstrapping

Published: 17 Jun 2024, Last Modified: 02 Jul 2024
ICML 2024 Workshop MHFAIA Poster
License: CC BY 4.0
Keywords: Large Language Models; Alignment; Self-alignment; AI alignment
Abstract: Self-alignment is an effective way to reduce the cost of human annotation while maintaining promising model capability. However, existing self-alignment methods use the pretrained LLM to generate alignment datasets in a few-shot manner, which raises a question: is the pretrained LLM really a better few-shot generator than its aligned version? If not, to what extent can the aligned LLM continue to provide benefits? In this paper, we present a pioneering exploration of the impact of bootstrapping self-alignment on large language models. We identify the key role of in-context learning (ICL) examples, which serve as the only fresh data in this self-training loop and should therefore be as diverse and informative as possible. Our findings reveal that bootstrapping self-alignment markedly surpasses the single-round approach. To further exploit the capabilities of bootstrapping, we investigate and adjust the training order of the data, which yields improved model performance. We discuss the collapse phenomenon in the later stages and offer two explanatory viewpoints, the Data Processing Inequality and a sharpening output distribution, together with corresponding empirical studies. Based on this, we provide a validation dataset for early stopping to guard against further model collapse. We propose Step-On-Feet Tuning (SOFT), which leverages the model's continuously enhanced few-shot ability to boost zero- and one-shot performance, shedding light on the overlooked potential of continually enhancing model self-alignment performance.
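The abstract describes an iterative loop: the partially aligned model generates its own alignment data under fresh, diverse ICL examples, is fine-tuned on that data, and is early-stopped on a validation set before collapse sets in. The following Python sketch outlines how such a bootstrapping loop might be structured under those assumptions; all callable arguments (`sample_icl`, `generate`, `fine_tune`, `evaluate`) are hypothetical placeholders for the reader's own components, not the paper's released implementation.

```python
"""Minimal sketch of a bootstrapping self-alignment (SOFT-style) loop,
assuming hypothetical helper callables supplied by the caller."""

from typing import Callable, List, Sequence, Tuple


def soft_bootstrap(
    model: object,                                  # current LLM checkpoint
    prompts: Sequence[str],                         # unlabeled instructions
    sample_icl: Callable[[int], List[str]],         # draws diverse, fresh ICL demos
    generate: Callable[[object, List[str], str], str],  # few-shot generation
    fine_tune: Callable[[object, List[Tuple[str, str]]], object],
    evaluate: Callable[[object], float],            # validation-set score
    max_rounds: int = 5,
) -> object:
    """Each round, the (already partially aligned) model serves as the
    few-shot generator instead of the frozen pretrained LLM. Training
    stops early once the validation score stops improving, guarding
    against the collapse phenomenon discussed in the paper."""
    best_score = evaluate(model)
    for _ in range(max_rounds):
        # Fresh ICL examples are the only new data entering the loop,
        # so they should be as diverse and informative as possible.
        demos = sample_icl(4)
        data = [(p, generate(model, demos, p)) for p in prompts]
        candidate = fine_tune(model, data)
        score = evaluate(candidate)
        if score <= best_score:
            break  # validation score dropped: stop before further collapse
        model, best_score = candidate, score
    return model
```

The paper additionally reports that adjusting the training order of the generated data improves performance; in this sketch that would amount to reordering `data` (e.g., by some difficulty measure) before the `fine_tune` call.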
Submission Number: 11