Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Data Analysis, Supervised Fine-tuning for Human Alignment, Large Language Model
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Pre-trained large language models (LLMs) learn to understand and follow human instructions through supervised fine-tuning (SFT).
It is commonly believed that diverse and complex SFT data are essential for good instruction-following abilities.
However, these notions of diversity and complexity remain vague and lack quantitative analysis.
In this work, we propose InsTag, an open-set instruction tagging method that identifies the semantics and intentions of human instructions with tags, providing concrete definitions and quantitative measures of instruction diversity and complexity.
We obtain 6.6K fine-grained tags that comprehensively describe instructions from popular open-source SFT datasets.
We find that the abilities of aligned LLMs benefit from more diverse and complex instructions in SFT data.
Based on this observation, we propose an InsTag-based data sampling procedure and use it to select 6K diverse and complex samples from open-source datasets for SFT.
The resulting models, TagLM, outperform open-source models trained on considerably larger SFT datasets, as evaluated by MT-Bench, underscoring the importance of instruction diversity and complexity and the effectiveness of InsTag.
InsTag can readily be extended to applications beyond data selection, as it provides an effective way to analyze the distribution of instructions.
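To make the sampling idea concrete, here is a minimal, hypothetical sketch of a tag-based "complexity-first, diversity-aware" selection procedure in the spirit described above; the function name, data layout, and greedy strategy are illustrative assumptions, not the paper's actual implementation.

```python
def select_samples(tagged_data, budget):
    """Greedily pick up to `budget` instructions that are complex
    (many tags) and diverse (contribute tags not yet covered).

    tagged_data: list of (instruction_id, set_of_tags) pairs.
    """
    # Complexity first: prefer instructions carrying more tags.
    pool = sorted(tagged_data, key=lambda item: len(item[1]), reverse=True)
    covered = set()   # tags already represented in the selection
    selected = []
    for item_id, tags in pool:
        if len(selected) >= budget:
            break
        # Diversity: keep an instruction only if it adds new tags.
        if tags - covered:
            selected.append(item_id)
            covered |= tags
    return selected

data = [
    ("a", {"math", "reasoning", "proof"}),
    ("b", {"math", "reasoning"}),
    ("c", {"translation"}),
    ("d", {"math"}),
]
print(select_samples(data, budget=2))  # -> ['a', 'c']
```

In this toy run, "a" is taken first for its three tags, "b" and "d" are skipped because their tags are already covered, and "c" is added for the new "translation" tag, yielding a subset that is both complex and tag-diverse.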
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Primary Area: representation learning for computer vision, audio, language, and other modalities
Submission Number: 1756