Keywords: LLM, datasets, tool use, open source
Abstract: Through the integration of external tools, large language models (LLMs) such as GPT-4o and Llama 3.1 significantly expand their functional capabilities, evolving from elementary conversational agents into general-purpose assistants. We contend that the primary drivers of these advancements are the quality and diversity of the training data. However, existing LLMs with external tool integration offer only limited transparency regarding their datasets and data collection methods, which motivates this study. Specifically, in this work, we present a detailed methodology for constructing datasets that enable LLMs to effectively learn how to utilize external tools, and we make this process publicly available through the introduction of ToolBridge. ToolBridge leverages a collection of general open-access datasets as its raw data pool and applies a series of strategies to identify data entries suitable for external tool API insertion. Through supervised fine-tuning (SFT) on these curated entries, LLMs learn to invoke external tools in appropriate contexts to boost their predictive accuracy, particularly for essential functions such as factual retrieval, data processing, and numerical computation. Our experiments hold model architectures and training configurations fixed, focusing exclusively on the role of data. The experimental results indicate that LLMs trained on ToolBridge exhibit consistent performance gains on both standard benchmarks and custom evaluation datasets. All associated code and data will be released as open source, promoting transparency and enabling the broader community to explore methodologies for equipping LLMs with external tool capabilities.
Primary Area: datasets and benchmarks
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1168