EnterpriseBench: Simulating Enterprise Environments for Testing and Evaluating LLM-based Agents

ACL ARR 2025 February Submission 4060 Authors

15 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract:

Enterprise systems are crucial for enhancing productivity and decision-making among employees and customers. Integrating LLM-based agents into these systems enables intelligent automation, personalized experiences, and efficient information retrieval, driving operational efficiency and strategic growth. However, developing and evaluating such systems is challenging due to the inherent complexity of enterprise environments, where data is fragmented across multiple sources and governed by sophisticated access controls. We present EnterpriseBench, a comprehensive benchmark that simulates realistic enterprise settings, featuring 550 diverse tasks across software engineering, HR, finance, and administrative domains. Our benchmark uniquely captures key enterprise characteristics, including data-source fragmentation, access-control hierarchies, and cross-functional workflows. Additionally, we provide a novel data-generation pipeline that creates internally consistent enterprise datasets from organizational metadata. Experiments with state-of-the-art LLM agents show that even the most capable models complete only 21.5% of tasks, highlighting significant room for improvement in enterprise-focused AI systems.
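The abstract's mention of fragmented data sources and access-control hierarchies can be made concrete with a minimal sketch. The Python snippet below is purely illustrative: the names (DataSource, Task, accessible_sources) and the role/source examples are assumptions for exposition, not EnterpriseBench's actual schema. It models a task whose required data is spread across sources that the acting role may or may not be permitted to read.

```python
from dataclasses import dataclass

# Hypothetical sketch only: these types and names are illustrative
# assumptions, not the benchmark's real data model.

@dataclass(frozen=True)
class DataSource:
    name: str                      # e.g. "hr_payroll", "jira_tickets"
    allowed_roles: frozenset[str]  # roles permitted to read this source

@dataclass
class Task:
    description: str
    agent_role: str                # role the agent acts under
    required_sources: list[DataSource]

def accessible_sources(task: Task) -> list[DataSource]:
    """Return only the sources the agent's role may read,
    mimicking the access-control hierarchies the abstract describes."""
    return [s for s in task.required_sources
            if task.agent_role in s.allowed_roles]

# Usage: a task whose data is fragmented across two sources,
# only one of which the acting role is allowed to read.
payroll = DataSource("hr_payroll", frozenset({"hr_manager", "finance"}))
tickets = DataSource("jira_tickets", frozenset({"engineer"}))
task = Task("Summarize open payroll escalations", "hr_manager",
            [payroll, tickets])
print([s.name for s in accessible_sources(task)])  # ['hr_payroll']
```

In such a setup, an agent that ignores access boundaries would fail tasks whose required data lies behind roles it does not hold, which is one plausible reason overall task-completion rates remain low.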

Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: Resources and Evaluation
Contribution Types: Data resources
Languages Studied: English
Submission Number: 4060