Abstract: Large language models (LLMs) excel at solving problems with clear and complete statements, but
often struggle with the nuanced environments and interactive tasks that are common in real-world
scenarios. This highlights the critical need to develop LLMs that can engage in logically
consistent multi-turn dialogue, seek information, and reason with incomplete data. To this end, we
introduce a novel benchmark comprising a suite of multi-turn tasks, each designed to test specific
reasoning, interactive dialogue, and information-seeking abilities. These tasks have deterministic
scoring mechanisms, thus eliminating the need for human intervention. Evaluating frontier models on
our benchmark reveals significant headroom. Our analysis shows that most errors stem from poor
instruction following, flawed reasoning, and weak planning. This benchmark provides valuable insights
into the strengths and weaknesses of current LLMs in handling complex, interactive scenarios and offers
a robust platform for future research aimed at improving these critical capabilities.