Abstract: In recent years, multi-agent frameworks powered by large language models (LLMs) have advanced rapidly. Despite this progress, there is still a notable absence of benchmark datasets specifically tailored to evaluate their performance. To bridge this gap, we introduce Auto-SLURP, a benchmark dataset aimed at evaluating LLM-based multi-agent frameworks in the context of smart personal assistants. Auto-SLURP extends the original SLURP dataset—initially developed for natural language understanding tasks—by relabeling the data and integrating simulated servers and external services. This enhancement enables a comprehensive end-to-end evaluation pipeline, covering language understanding, task execution, and response generation. Our experiments demonstrate that Auto-SLURP presents a significant challenge for current state-of-the-art frameworks, highlighting that truly reliable and intelligent multi-agent personal assistants remain a work in progress.
Paper Type: Short
Research Area: Resources and Evaluation
Research Area Keywords: large language model, multi-agent framework, smart personal assistant, benchmark dataset
Contribution Types: Data resources
Languages Studied: English
Keywords: large language model, multi-agent framework, smart personal assistant, benchmark dataset
Submission Number: 1997