From Tags to Trees: Structuring Fine-Grained Knowledge for Controllable Data Selection in LLM Instruction Tuning
Keywords: Instruction Tuning, Data Selection, Fine-grained Knowledge Tree
Abstract: Effective and controllable data selection is critical for LLM instruction tuning, especially with massive open-source datasets.
Existing approaches primarily rely on instance-level quality scores or on diversity metrics based on embedding clusters or semantic tags.
However, constrained by the flatness of embedding spaces or the coarseness of tags, these approaches overlook fine-grained knowledge and its intrinsic hierarchical dependencies, consequently hindering precise data valuation and knowledge-aligned sampling.
To address this challenge, we propose Tree-aware Aligned Global Sampling (TAGS), a unified framework that leverages a knowledge tree built from fine-grained tags, thereby enabling joint control of global quality, diversity, and target alignment.
Using an LLM-based tagger, we extract atomic knowledge concepts, which are organized into a global tree through bottom-up hierarchical clustering.
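To make the tree-construction step concrete, the bottom-up organization of atomic tags can be sketched as greedy centroid-linkage agglomerative clustering over tag embeddings. This is a minimal illustration under stated assumptions, not the paper's implementation: the tag strings, the toy 2-D embeddings, and the merge threshold below are all hypothetical.

```python
import numpy as np

# Hypothetical atomic knowledge tags and toy 2-D embeddings; a real system
# would embed LLM-extracted tags with a sentence encoder.
tags = ["binary search", "quicksort", "tense agreement", "subject-verb"]
emb = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]])

def bottom_up_cluster(vectors, threshold=0.5):
    """Greedy agglomerative clustering: repeatedly merge the closest pair
    of clusters (centroid linkage) until no pair is closer than threshold.
    The recorded merges trace out the internal nodes of a knowledge tree."""
    clusters = [[i] for i in range(len(vectors))]
    centroids = [vectors[i] for i in range(len(vectors))]
    merges = []
    while len(clusters) > 1:
        best = None  # (distance, index_a, index_b) of the closest pair
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.linalg.norm(centroids[a] - centroids[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        if d > threshold:
            break  # no pair is close enough; stop merging
        merged = clusters[a] + clusters[b]
        merges.append((clusters[a], clusters[b]))
        new_centroid = vectors[merged].mean(axis=0)
        clusters = [c for i, c in enumerate(clusters) if i not in (a, b)] + [merged]
        centroids = [c for i, c in enumerate(centroids) if i not in (a, b)] + [new_centroid]
    return clusters, merges

clusters, merges = bottom_up_cluster(emb, threshold=0.5)
# The two algorithmic tags merge, as do the two grammar tags.
```

Centroid linkage is one arbitrary choice here; any agglomerative scheme that yields a hierarchy over tags would serve the same structural role.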
By grounding data instances onto this tree, a tree-aware metric then quantifies data quality and diversity, facilitating effective sampling. Our controllable sampling strategy maximizes tree-level information gain and enforces leaf-level alignment via KL-divergence for specific domains.
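The leaf-level alignment component can be illustrated as a greedy selection that, at each step, picks the candidate instance whose leaf assignment most reduces the KL divergence between a target leaf distribution and the empirical distribution of the selection so far. This is a simplified sketch: the leaf assignments, target mix, and budget are hypothetical, and the paper's full objective additionally maximizes tree-level information gain.

```python
import numpy as np

# Each candidate instance is grounded to one leaf of the knowledge tree
# (leaf ids below are hypothetical).
leaves = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2, 2])  # leaf id per candidate
target = np.array([0.5, 0.3, 0.2])                  # desired leaf mix

def kl(p, q, eps=1e-9):
    """KL(p || q) with a small epsilon to avoid log(0)."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def aligned_sample(leaves, target, budget):
    """Greedily select instances so the empirical leaf distribution
    of the selection tracks the target distribution."""
    counts = np.zeros(len(target))
    chosen, remaining = [], list(range(len(leaves)))
    for _ in range(budget):
        best_i, best_kl = None, None
        for i in remaining:
            trial = counts.copy()
            trial[leaves[i]] += 1
            d = kl(target, trial / trial.sum())
            if best_kl is None or d < best_kl:
                best_i, best_kl = i, d
        chosen.append(best_i)
        counts[leaves[best_i]] += 1
        remaining.remove(best_i)
    return chosen, counts / counts.sum()

chosen, mix = aligned_sample(leaves, target, budget=5)
```

With a small budget the greedy mix approximates the target; as the budget approaches the dataset size, the mix necessarily converges to the dataset's own leaf distribution, which is why the budget matters for alignment.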
Extensive experiments demonstrate that TAGS significantly outperforms state-of-the-art baselines. Notably, it surpasses the full-dataset model by +5.84% using only 5% of the data, while our aligned sampling strategy further boosts average performance by +4.24%.
Paper Type: Long
Research Area: Low-resource Methods for NLP
Research Area Keywords: Data-Efficient Training, LLM Efficiency
Contribution Types: NLP engineering experiment, Approaches low compute settings-efficiency, Data analysis
Languages Studied: English
Submission Number: 4424