Abstract: Enhancing the task-solving capabilities of large language models (LLMs) through tool use has garnered increasing attention. To enable LLMs to use tools accurately, developers often provide documentation of the tools in the LLMs' context. However, such documentation often suffers from issues such as incomplete tool descriptions and insufficient descriptions of parameters or responses. To address this, we propose ToolBFS+, a method that revises tool documentation by exploring how the tools are used. ToolBFS+ adopts a Breadth-First Search (BFS) strategy to explore various tool usage scenarios and collects the information obtained during exploration to revise the tool documentation, ultimately improving the model's ability to use the tools accurately. Extensive experiments on multiple datasets demonstrate that ToolBFS+ substantially reduces errors, such as selecting the wrong tool, and improves LLMs' tool-use accuracy.
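The abstract's description suggests an exploration loop of roughly the following shape. This is a minimal sketch under assumptions, not the authors' implementation: the names `explore_tools_bfs`, `seed_calls`, and `propose_followups` are hypothetical, and the follow-up expansion heuristic is left as a pluggable parameter.

```python
from collections import deque

def explore_tools_bfs(tools, seed_calls, propose_followups=lambda *a: [], max_depth=2):
    """Breadth-first exploration of tool usage scenarios (hypothetical sketch).

    `tools` maps tool names to callables; `seed_calls` is a list of
    (tool_name, params) pairs forming the BFS frontier at depth 0;
    `propose_followups(tool_name, params, response)` returns follow-up
    (tool_name, params) pairs derived from an observed response.
    Returns a list of observation records for documentation revision.
    """
    observations = []
    queue = deque((call, 0) for call in seed_calls)
    seen = set()

    while queue:
        (tool_name, params), depth = queue.popleft()
        key = (tool_name, tuple(sorted(params.items())))
        if key in seen or depth > max_depth:
            continue
        seen.add(key)

        try:
            response = tools[tool_name](**params)   # execute the tool call
        except Exception as exc:                    # failed calls are informative too
            response = {"error": str(exc)}
        observations.append({"tool": tool_name, "params": params, "response": response})

        # Expand breadth-first: enqueue follow-up calls mined from this response.
        for next_call in propose_followups(tool_name, params, response):
            queue.append((next_call, depth + 1))

    return observations
```

In the pipeline the abstract describes, the collected observations would then be aggregated (for example, by prompting an LLM) into revised parameter and response descriptions for each tool; that revision step is omitted here.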
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: prompting, few-shot generation, few-shot learning
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 155