Abstract: Augmenting large language models (LLMs) with external tools is a promising approach to enhance their capabilities, especially for complex tasks.
Synthesizing tool-use data through real-world simulations is an effective way to achieve this.
However, our investigation reveals that training gains decay significantly as the amount of synthetic data increases.
The model struggles to benefit from additional synthetic data, which fails to equip it with advanced tool-use capabilities in complex scenarios.
Moreover, we discovered that this limitation usually manifests as a fragment deficiency (i.e., parameter errors) in the model's responses.
To this end, we propose an iterative reinforced fine-tuning strategy designed to alleviate this limitation.
This strategy involves: (1) enhancing the diversity of responses to synthetic data through path exploration in Monte Carlo Tree Search; and (2) iteratively pinpointing the model's deficiencies by constructing fine-grained preference pairs, and then applying preference optimization algorithms for targeted improvement.
Experiments show that our method achieves 13.11\% better performance than the same-size base model and a 6.5\% improvement in complex scenarios over the baseline, while also outperforming larger open-source and closed-source models.
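To make the two steps above concrete, the following is a minimal, illustrative sketch in PyTorch. The helper names, the reward-based pairing rule, and the use of a DPO-style loss are assumptions made for illustration only, not the paper's exact procedure.

```python
# Illustrative sketch (hypothetical names): (1) collect diverse candidate
# responses from tree-style path exploration, (2) build fine-grained
# preference pairs and optimize with a DPO-style objective.
import torch
import torch.nn.functional as F

def build_preference_pairs(candidates):
    """candidates: list of (response, reward) tuples from path exploration.
    Pair the highest-reward response with each lower-reward one, so that the
    pairs isolate the deficient fragment (e.g., erroneous tool parameters)."""
    ranked = sorted(candidates, key=lambda x: x[1], reverse=True)
    best_response, best_reward = ranked[0]
    return [(best_response, resp) for resp, r in ranked[1:] if r < best_reward]

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO objective over (chosen, rejected) sequence log-probs."""
    pi_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    return -F.logsigmoid(beta * (pi_logratios - ref_logratios)).mean()
```

In an iterative setting, the pairs produced at each round would target the currently observed deficiency, and the optimized model would then be used to explore new paths for the next round.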
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: LLM, tool use, preference learning
Contribution Types: NLP engineering experiment
Languages Studied: English
Keywords: LLM, tool use, preference learning
Submission Number: 436