ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings

Published: 21 Sept 2023, Last Modified: 15 Jan 2024 · NeurIPS 2023 oral
Keywords: large language model, tool learning
TL;DR: We propose to use tool embeddings to augment large language models with tools
Abstract: Integrating large language models (LLMs) with various tools has attracted increasing attention in the field. Existing approaches either fine-tune the LLM, which is computationally costly and limited to a fixed set of tools, or prompt the LLM with in-context tool demonstrations. Although the latter method adapts to new tools, it struggles with the inherent context-length constraint of LLMs when many new tools are presented, and mastering a new set of tools from few-shot examples remains challenging, resulting in suboptimal performance. To address these limitations, we propose a novel solution, named **ToolkenGPT**, wherein LLMs learn to master tools by predicting them as tokens through **tool embeddings**, enabling complex task solving. In this framework, each tool is represented as a vector embedding plugged into the language model head. Once a tool token is triggered during text generation, the LLM enters a special function mode to execute the tool call. Our experiments show that tool embeddings effectively help LLMs understand tool use and improve performance on several tasks, including numerical reasoning, knowledge-based question answering, and embodied decision-making.
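The core mechanism described above (tool embeddings plugged into a frozen LM head so tools are predicted like ordinary tokens) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class name `ToolkenHead`, the dimensions, and the initialization scale are all assumptions for the sake of the example.

```python
import torch
import torch.nn as nn

class ToolkenHead(nn.Module):
    """Sketch: augment a frozen LM head with trainable tool ("toolken") embeddings.

    The base LM head stays frozen; the only trainable parameters are one
    embedding vector per tool, appended to the output vocabulary.
    """
    def __init__(self, frozen_lm_head: nn.Linear, num_tools: int):
        super().__init__()
        self.lm_head = frozen_lm_head
        for p in self.lm_head.parameters():
            p.requires_grad = False  # keep the original LLM frozen
        hidden = frozen_lm_head.in_features
        # tool embeddings: the only parameters learned during training
        self.tool_embeddings = nn.Parameter(torch.randn(num_tools, hidden) * 0.02)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        word_logits = self.lm_head(hidden_states)              # (..., vocab)
        tool_logits = hidden_states @ self.tool_embeddings.T   # (..., num_tools)
        # the next token is sampled from the joint word + tool vocabulary;
        # sampling a tool index would switch generation into "function mode"
        return torch.cat([word_logits, tool_logits], dim=-1)

# toy usage with hypothetical sizes
vocab, hidden, num_tools = 100, 16, 3
head = ToolkenHead(nn.Linear(hidden, vocab, bias=False), num_tools)
h = torch.randn(2, hidden)
logits = head(h)
print(logits.shape)  # torch.Size([2, 103])
```

Because only the tool embeddings receive gradients, adding a new tool amounts to training one extra row of the embedding matrix, which is what makes the approach scalable to many tools without touching the LLM's weights.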
Supplementary Material: zip
Submission Number: 5556