Large-Scale Constraint Generation: Can LLMs Parse Hundreds of Constraints?

ACL ARR 2025 February Submission6059 Authors

16 Feb 2025 (modified: 09 May 2025) · CC BY 4.0
Abstract: Recent research has explored the constrained generation capabilities of Large Language Models (LLMs) when explicitly prompted with a few task-specific requirements. In contrast, we introduce Large-Scale Constraint Generation (LSCG), a new problem that evaluates whether LLMs can parse a large, fine-grained, generic list of constraints. To examine the LLMs' ability to handle an increasing number of constraints, we create a practical instance of LSCG, called Words Checker. In Words Checker, we evaluate the impact of model characteristics (e.g., size, family) and steering techniques (e.g., Simple Prompt, Chain of Thought, Best of N) on performance. In addition, we propose FoCusNet, a small, dedicated model that parses the original list of constraints into a smaller subset, helping the LLM focus on the relevant ones. Experiments reveal that existing solutions suffer a significant performance drop as the number of constraints increases, with FoCusNet yielding at least an 8-13% accuracy boost.
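The abstract describes a pre-filtering step: a small model narrows a large constraint list to a relevant subset before the LLM is prompted. The paper's actual FoCusNet architecture is not shown on this page; as a rough, hypothetical illustration of the focusing idea only, one could imagine a naive lexical pre-filter like the sketch below (the function name `focus_constraints` and the toy scoring are assumptions, not the authors' method).

```python
# Hypothetical sketch of the "focus" idea from the abstract: instead of
# prompting an LLM with all constraints, first select the subset that
# plausibly applies to the input text. A real FoCusNet would be a learned
# model; here we use naive token overlap purely for illustration.

def focus_constraints(constraints, text, max_keep=10):
    """Rank constraints by token overlap with the text and keep the
    top few that share at least one token with it."""
    text_tokens = set(text.lower().split())
    scored = []
    for c in constraints:
        overlap = len(text_tokens & set(c.lower().split()))
        scored.append((overlap, c))
    # Stable sort: ties keep their original list order.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for score, c in scored[:max_keep] if score > 0]

constraints = [
    "do not use the word cheap",
    "mention the warranty period",
    "avoid passive voice",
]
text = "This cheap gadget has no warranty"
subset = focus_constraints(constraints, text, max_keep=2)
# The two constraints sharing tokens with the text are kept;
# the unrelated one is filtered out before prompting the LLM.
```

The filtered subset would then replace the full constraint list in the LLM prompt, which is the mechanism the abstract credits for the reported accuracy gains.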
Paper Type: Long
Research Area: Special Theme (conference specific)
Research Area Keywords: Instruction Following, Real-World Adaptability, Support Model
Contribution Types: NLP engineering experiment, Publicly available software and/or pre-trained models, Data resources, Position papers
Languages Studied: English
Submission Number: 6059