RIFT: Reordered Instruction Following Testbed To Evaluate Instruction Following in Singular Multistep Prompt Structures

ACL ARR 2026 January Submission 6571 Authors

05 Jan 2026 (modified: 20 Mar 2026) · ACL ARR 2026 January Submission · Readers: Everyone · License: CC BY 4.0
Keywords: Instruction Following, Non-sequential Instructions, Prompt Structural Sensitivity, LLM Evaluation
Abstract: Large Language Models (LLMs) are increasingly relied upon for complex, multistep workflows, yet their ability to maintain the control flow of instructions remains underexplored. Existing benchmarks often conflate task complexity with structural ordering, making it difficult to isolate the impact of prompt topology on performance. We introduce RIFT (Reordered Instruction Following Testbed), which assesses instruction following by disentangling prompt structure from content. Using rephrased Jeopardy! question-answer pairs, we test LLMs across two formally defined prompt structures: linear prompts, which progress sequentially, and jumping prompts, which preserve identical content but require non-sequential traversal. Across 10,000 evaluations spanning six state-of-the-art open-source LLMs, accuracy drops by up to 72% under jumping conditions relative to baseline, revealing a strong dependence on positional continuity. Error analysis shows that approximately 50% of failures stem from instruction-order violations and semantic drift, indicating that current architectures internalize instruction following as a sequential pattern rather than as an abstract reasoning skill. These results highlight structural sensitivity as a fundamental limitation in LLM alignment and reasoning, and establish RIFT as a reproducible framework for testing discontinuous procedural execution.
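To make the linear/jumping distinction concrete, here is a minimal Python sketch that builds both prompt variants from the same steps. The step texts, labels, and reordering scheme are illustrative assumptions for exposition only, not the paper's exact construction.

# Hypothetical illustration of the two prompt structures described in the abstract.
steps = [
    "Rephrase the following Jeopardy! clue as a direct question.",
    "Answer the question in one sentence.",
    "Return only the answer entity.",
]

def linear_prompt(steps):
    # Linear structure: instructions appear in the order they must be executed.
    return "\n".join(f"Step {i + 1}: {s}" for i, s in enumerate(steps))

def jumping_prompt(steps, order=(2, 0, 1)):
    # Jumping structure: identical content, listed out of execution order, so the
    # model must follow explicit ordering cues instead of positional continuity.
    lines = [f"Step {i + 1}: {steps[i]}" for i in order]
    lines.append("Execute the steps in numerical order: Step 1, then Step 2, then Step 3.")
    return "\n".join(lines)

print(linear_prompt(steps))
print()
print(jumping_prompt(steps))

Both variants contain exactly the same instruction content; only the surface order (and the explicit traversal cue in the jumping case) differs, which is the structural contrast the benchmark isolates.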
Paper Type: Long
Research Area: Language Models
Research Area Keywords: chain-of-thought, prompting, robustness, reasoning
Contribution Types: Model analysis & interpretability, Publicly available software and/or pre-trained models
Languages Studied: English
Submission Number: 6571