Abstract: Recent advancements in integrating large language models (LLMs) with tools have allowed the models to interact with real-world environments.
However, these tool-augmented LLMs often encounter incomplete scenarios when users provide partial information or the necessary tools are unavailable.
Recognizing and managing such scenarios is crucial for ensuring the reliability of LLMs, but this ability remains largely unexplored.
This study examines whether LLMs can identify incomplete conditions and appropriately determine when to refrain from using tools.
To this end, we construct a dataset by manipulating instances from two existing datasets, removing either the necessary tools or the essential information required for tool invocation.
We find that most LLMs struggle to identify both the additional information needed to use a given tool and the absence of an appropriate tool.
Our research can contribute to the development of reliable LLMs by addressing scenarios that commonly arise in interactions between humans and LLMs.
Our code and dataset will be made publicly available.
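The abstract does not detail the perturbation procedure, so the following is only a rough illustrative sketch (not the authors' released code) of how an instance might be made "incomplete": either the tool needed to answer the query is dropped from the available tool list, or an essential argument is withheld. All field names (`tools`, `gold_tool`, `required_args`, `provided_args`) are hypothetical.

```python
import copy
import random

def make_incomplete(instance, mode="remove_tool"):
    """Create an 'incomplete' variant of a tool-use instance (illustrative only).

    instance: dict with hypothetical fields
      - "query": the user request
      - "tools": list of tool specs, each {"name": ..., "required_args": [...]}
      - "gold_tool": name of the tool needed to answer the query
      - "provided_args": arguments already supplied by the user
    mode: "remove_tool" drops the necessary tool;
          "remove_info" strips an essential argument from the provided information.
    """
    inst = copy.deepcopy(instance)
    if mode == "remove_tool":
        # Remove the tool required to answer the query, so no suitable tool remains.
        inst["tools"] = [t for t in inst["tools"] if t["name"] != inst["gold_tool"]]
    elif mode == "remove_info":
        # Withhold one essential argument so the query lacks information
        # needed to invoke the gold tool.
        gold = next(t for t in inst["tools"] if t["name"] == inst["gold_tool"])
        missing = random.choice(gold["required_args"])
        inst["provided_args"] = {
            k: v for k, v in inst.get("provided_args", {}).items() if k != missing
        }
        inst["missing_arg"] = missing
    # Under either perturbation, the desired model behavior is to refrain from
    # calling a tool and instead ask for the missing tool or information.
    inst["label"] = "abstain"
    return inst
```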
Paper Type: Short
Research Area: Resources and Evaluation
Research Area Keywords: NLP datasets, evaluation methodologies, evaluation
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: English
Submission Number: 4574