RELIC: Evaluating Compositional Instruction Following via Language Recognition

Published: 24 Sept 2025, Last Modified: 24 Sept 2025, NeurIPS 2025 LLM Evaluation Workshop Poster, License: CC BY 4.0
Keywords: LLMs, instruction following, formal language, language recognition, test-time compute, chain-of-thought, complexity
TL;DR: We evaluate frontier LLMs on in-context recognition of context-free languages, finding that models fail when grammars and strings become complex.
Abstract: Large language models (LLMs) are increasingly expected to perform tasks based only on a specification provided in context, without examples of inputs and outputs; this ability is referred to as instruction following. To evaluate this ability, we introduce the Recognition of Languages In-Context (RELIC) framework, in which the task is to determine whether a string is generated by a context-free grammar. Solving it requires composing a large number of instructions (grammar productions) retrieved from the context. Because the languages are synthetic, the task's complexity can be scaled up as LLMs improve, and new instances can be generated automatically, mitigating data contamination concerns. We evaluate state-of-the-art LLMs on RELIC and find that their accuracy can be reliably predicted from the complexity of the grammar and of the individual example strings, and that even the most advanced LLMs currently available show near-chance performance on more complex grammars and samples, in line with theoretical expectations. We also analyze how LLMs attempt to solve increasingly difficult reasoning tasks, and find that as the complexity of the language recognition task increases, models switch from following complex instructions to relying on shallow heuristics.
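To make the task concrete, below is a minimal sketch of the kind of decision problem RELIC poses: given a context-free grammar, decide whether a string belongs to its language. The grammar, names, and CYK-based checker here are illustrative assumptions for exposition, not the benchmark's actual grammars or code; the grammar is assumed to be in Chomsky normal form so the classic CYK algorithm applies.

```python
# Illustrative sketch (not from the RELIC release): membership checking
# for a context-free grammar in Chomsky normal form via the CYK algorithm.

# Productions: nonterminal -> list of right-hand sides, where a RHS is
# either a pair of nonterminals (A -> B C) or a single terminal (A -> 'a').
GRAMMAR = {
    "S":   [("NP", "VP")],
    "NP":  [("Det", "N")],
    "VP":  [("V", "NP")],
    "Det": [("a",), ("the",)],
    "N":   [("dog",), ("cat",)],
    "V":   [("saw",), ("chased",)],
}
START = "S"

def cyk_recognize(tokens, grammar=GRAMMAR, start=START):
    """Return True iff `tokens` is derivable from `start` under `grammar` (CNF)."""
    n = len(tokens)
    if n == 0:
        return False
    # table[i][j] = set of nonterminals deriving the span tokens[i : i+j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    # Length-1 spans come from terminal rules.
    for i, tok in enumerate(tokens):
        for lhs, rhss in grammar.items():
            if (tok,) in rhss:
                table[i][0].add(lhs)
    # Longer spans combine two adjacent sub-spans via binary rules.
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                left = table[i][split - 1]
                right = table[i + split][length - split - 1]
                for lhs, rhss in grammar.items():
                    for rhs in rhss:
                        if len(rhs) == 2 and rhs[0] in left and rhs[1] in right:
                            table[i][length - 1].add(lhs)
    return start in table[0][n - 1]

if __name__ == "__main__":
    print(cyk_recognize("the dog chased a cat".split()))  # True
    print(cyk_recognize("dog the saw".split()))           # False
```

In RELIC the model plays the role of `cyk_recognize`: the grammar is supplied in the prompt and the model must compose its productions in context, so recognition difficulty grows with the number of productions and the length of the candidate string.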
Submission Number: 128