Keywords: Large Language Model, Multi-Agent System, Failure Attribution, Benchmark
Abstract: Failure attribution, i.e., identifying the responsible agent and decisive step of a failure, is particularly challenging in LLM-based multi-agent systems (MAS) due to their natural-language reasoning, nondeterministic outputs, and intricate interaction dynamics. A reliable benchmark is therefore essential to guide and evaluate attribution techniques. Yet existing benchmarks rely on partially observable traces that capture only agent outputs, omitting the inputs and context that developers actually use when debugging.
We argue that attribution should be studied under full execution observability, aligning with real-world developer-facing scenarios where complete traces, rather than only outputs, are accessible for diagnosis.
To this end, we introduce TraceElephant, a benchmark designed for failure attribution with full execution traces and reproducible environments.
We then systematically evaluate failure attribution techniques across various configurations. Notably, full traces improve attribution accuracy by up to 76.5% over a partial-observation baseline, confirming that missing inputs obscure many failure causes.
TraceElephant provides a foundation for future research on failure attribution, promoting evaluation practices that reflect real-world debugging and supporting the development of more transparent multi-agent systems.
Paper Type: Long
Research Area: Resources and Evaluation
Research Area Keywords: benchmarking, evaluation methodologies, evaluation, reproducibility
Languages Studied: English
Submission Number: 9260